Grid Computing Seminar Report

ABSTRACT
The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often very numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently Grid computing.
The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started that aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology for creating a pervasive and ubiquitous Grid-based infrastructure.
This paper aims to present the state of the art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.

INTRODUCTION
The popularity of the Internet as well as the availability of powerful computers and high-speed network technologies as low-cost commodity components is changing the way we use computers today. These technology opportunities have led to the possibility of using distributed computers as a single, unified computing resource, leading to what is popularly known as Grid computing. The term Grid is chosen as an analogy to a power grid that provides consistent, pervasive, dependable, transparent access to electricity irrespective of its source. A detailed analysis of this analogy can be found in the literature. This new approach to network computing is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer (P2P) computing.

Figure 1. Towards Grid computing: a conceptual view.

Grids enable the sharing, selection, and aggregation of a wide variety of resources, including supercomputers, storage systems, data sources, and specialized devices (see Figure 1), that are geographically distributed and owned by different organizations, for solving large-scale computational and data-intensive problems in science, engineering, and commerce. Grids thus make it possible to create virtual organizations and enterprises: temporary alliances of enterprises or organizations that come together to share resources, skills, and core competencies in order to better respond to business opportunities or large-scale application processing requirements, and whose cooperation is supported by computer networks.
The concept of Grid computing started as a project to link geographically dispersed supercomputers, but it has now grown far beyond its original intent. The Grid infrastructure can benefit many applications, including collaborative engineering, data exploration, high-throughput computing, and distributed supercomputing.
A Grid can be viewed as a seamless, integrated computational and collaborative environment (see Figure 1). Users interact with a Grid resource broker to solve problems; the broker in turn performs resource discovery, scheduling, and the processing of application jobs on the distributed Grid resources. From the end-user point of view, Grids can be used to provide the following types of services.
• Computational services. These are concerned with providing secure services for executing application jobs on distributed computational resources, individually or collectively. Resource brokers provide the services for collective use of distributed resources. A Grid providing computational services is often called a computational Grid. Some examples of computational Grids are NASA IPG, the World Wide Grid, and the NSF TeraGrid.
• Data services. These are concerned with providing secure access to distributed datasets and their management. To provide scalable storage of and access to datasets, the datasets may be replicated, catalogued, and even stored in different locations to create an illusion of mass storage. The processing of datasets is carried out using computational Grid services, and such a combination is commonly called a data Grid. Sample applications that need such services for the management, sharing, and processing of large datasets are high-energy physics and access to distributed chemical databases for drug design.
• Application services. These are concerned with application management and with providing transparent access to remote software and libraries. Emerging technologies such as Web services are expected to play a leading role in defining application services. They build on the computational and data services provided by the Grid. An example system that can be used to develop such services is NetSolve.
• Information services. These are concerned with the extraction and presentation of data with meaning, using the computational, data, and/or application services. The low-level details handled at this level are the way that information is represented, stored, accessed, shared, and maintained. Given its key role in many scientific endeavors, the Web is the obvious point of departure for this level.
• Knowledge services. These are concerned with the way that knowledge is acquired, used, retrieved, published, and maintained to assist users in achieving their particular goals and objectives. Knowledge is understood as information applied to achieve a goal, solve a problem, or execute a decision. An example of this is data mining for automatically building new knowledge.
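The "illusion of mass storage" described under data services can be sketched as a toy replica catalog that maps logical dataset names to physical copies. The class, dataset names, site URLs, and selection policy below are all illustrative, not any real Grid API:

```python
# Minimal sketch of a data-Grid replica catalog: logical dataset names are
# mapped to one or more physical locations, giving the illusion of a single
# mass store. All names and the site-preference policy are illustrative.

class ReplicaCatalog:
    def __init__(self):
        self._replicas = {}  # logical name -> list of physical URLs

    def register(self, logical_name, physical_url):
        self._replicas.setdefault(logical_name, []).append(physical_url)

    def locate(self, logical_name):
        """Return all known physical copies of a dataset."""
        return list(self._replicas.get(logical_name, []))

    def select(self, logical_name, prefer_site=None):
        """Pick one replica, preferring a given site if one matches."""
        copies = self.locate(logical_name)
        if not copies:
            raise KeyError(f"no replica registered for {logical_name}")
        if prefer_site:
            for url in copies:
                if prefer_site in url:
                    return url
        return copies[0]

catalog = ReplicaCatalog()
catalog.register("lhc/run42/events.dat", "gridftp://cern.example/store/events.dat")
catalog.register("lhc/run42/events.dat", "gridftp://fnal.example/mirror/events.dat")
print(catalog.select("lhc/run42/events.dat", prefer_site="fnal"))
```

A user asks for the dataset by its logical name only; which physical replica serves the request is the catalog's decision, which is the transparency the data-services bullet describes.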
To build a Grid, the development and deployment of a number of services is required. These include security, information, directory, resource allocation, and payment mechanisms in an open environment and high-level services for application development, execution management, resource aggregation, and scheduling.
Grid applications (typically multidisciplinary and large-scale processing applications) often couple resources that cannot be replicated at a single site, or which may be globally located for other practical reasons. These are some of the driving forces behind the foundation of global Grids. In this light, the Grid allows users to solve larger or new problems by pooling together resources that could not be easily coupled before. Hence, the Grid is not only a computing infrastructure for large applications; it is a technology that can bond and unify remote and diverse distributed resources, ranging from meteorological sensors to data vaults and from parallel supercomputers to personal digital organizers. As such, it will provide pervasive services to all users that need them.
This paper aims to present the state-of-the-art of Grid computing and attempts to survey the major international efforts in this area.
Benefits of Grid Computing
Grid computing can provide many benefits not available with traditional computing models:
• Better utilization of resources: Grid computing uses distributed resources more efficiently and delivers more usable computing power. This can decrease time-to-market, allow for innovation, or enable additional testing and simulation for improved product quality. By employing existing resources, Grid computing helps protect IT investments, containing costs while providing more capacity.
• Increased user productivity: By providing transparent access to resources, work can be completed more quickly. Users gain additional productivity as they can focus on design and development rather than wasting valuable time hunting for resources and manually scheduling and managing large numbers of jobs.
• Scalability: Grids can grow seamlessly over time, allowing many thousands of processors to be integrated into one cluster. Components can be updated independently and additional resources can be added as needed, reducing large one-time expenses.
• Flexibility: Grid computing provides computing power where it is needed most, helping to better meet dynamically changing workloads. Grids can contain heterogeneous compute nodes, allowing resources to be added and removed as needs dictate.
Levels of Deployment
Grid computing can be divided into three logical levels of deployment: Cluster Grids, Enterprise Grids, and Global Grids.
• Cluster Grids
The simplest form of a grid, a Cluster Grid consists of multiple systems interconnected through a network. Cluster Grids may contain distributed workstations and servers, as well as centralized resources in a datacenter environment. Typically owned and used by a single project or department, Cluster Grids support both high-throughput and high-performance jobs. Common examples of the Cluster Grid architecture include compute farms, groups of multi-processor HPC systems, Beowulf clusters, and networks of workstations (NOW).
• Enterprise Grids
As capacity needs increase, multiple Cluster Grids can be combined into an Enterprise Grid. Enterprise Grids enable multiple projects or departments to share computing resources in a cooperative way. Enterprise Grids typically contain resources from multiple administrative domains, but are located at the same geographic site.
• Global Grids
Global Grids are a collection of Enterprise Grids, all of which have agreed upon global usage policies and protocols, but not necessarily the same implementation. Computing resources may be geographically dispersed, connecting sites around the globe. Designed to support and address the needs of multiple sites and organizations sharing resources, Global Grids provide the power of distributed resources to users anywhere in the world.

Figure 2. Three levels of Grid computing: Cluster, Enterprise, and Global Grids.
GRID CONSTRUCTION: GENERAL PRINCIPLES
This section briefly highlights some of the general principles that underlie the construction of the Grid. In particular, the idealized design features that are required by a Grid to provide users with a seamless computing environment are discussed. Four main aspects characterize a Grid.
• Multiple administrative domains and autonomy. Grid resources are geographically distributed across multiple administrative domains and owned by different organizations. The autonomy of resource owners needs to be honored, along with their local resource management and usage policies.
• Heterogeneity. A Grid involves a multiplicity of resources that are heterogeneous in nature and will encompass a vast range of technologies.
• Scalability. A Grid might grow from a few integrated resources to millions. This raises the problem of potential performance degradation as the size of the Grid increases. Consequently, applications that require a large number of geographically distributed resources must be designed to be latency and bandwidth tolerant.
• Dynamicity or adaptability. In a Grid, resource failure is the rule rather than the exception. In fact, with so many resources in a Grid, the probability of some resource failing is high. Resource managers or applications must tailor their behavior dynamically and use the available resources and services efficiently and effectively.
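The "failure is the rule" principle above can be made concrete with a small failover sketch: a client tries each known resource in turn and moves on when one is unavailable. The resource names, the simulated failure set, and the liveness check are all illustrative:

```python
# Sketch of dynamic adaptation to resource failure: try each candidate
# resource in order and fail over when one is down. In a real Grid the
# liveness check would be a monitoring or information-service query.

def submit_with_failover(job, resources, is_alive):
    """Dispatch a job to the first available resource, or raise."""
    for resource in resources:
        if is_alive(resource):
            return resource  # job accepted here
    raise RuntimeError(f"no resource available for job {job!r}")

down = {"cluster-a", "cluster-b"}        # simulated failed resources
alive = lambda r: r not in down
chosen = submit_with_failover("render-frame-17",
                              ["cluster-a", "cluster-b", "cluster-c"],
                              alive)
print(chosen)  # falls through to cluster-c
```

The point is that the application's behavior adapts at submission time rather than assuming a fixed, always-available machine.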
Design Features
The following are the main design features required by a Grid environment.
• Administrative hierarchy. An administrative hierarchy is the way that each Grid environment divides itself up to cope with a potentially global extent. The administrative hierarchy determines how administrative information flows through the Grid.
• Communication services. The communication needs of applications using a Grid environment are diverse, ranging from reliable point-to-point to unreliable multicast communication. The communications infrastructure needs to support protocols used for bulk-data transport, streaming data, group communications, and those used by distributed objects. The network services used also provide the Grid with important QoS parameters such as latency, bandwidth, reliability, fault tolerance, and jitter control.
• Information services. A Grid is a dynamic environment in which the location and types of services available are constantly changing. A major goal is to make all resources accessible to any process in the system, without regard to the relative location of the resource user. It is necessary to provide mechanisms to enable a rich environment in which information is readily obtained by requesting services. The Grid information (registration and directory) services components provide the mechanisms for registering and obtaining information about the Grid structure, resources, services, and status.
• Naming services. In a Grid, as in any distributed system, names are used to refer to a wide variety of objects such as computers, services, or data objects. The naming service provides a uniform name space across the complete Grid environment. Typical naming services are provided by the international X.500 naming scheme or by DNS, the Internet's naming scheme.
• Distributed file systems and caching. Distributed applications, more often than not, require access to files distributed among many servers. A distributed file system is therefore a key component in a distributed system. From an application's point of view it is important that a distributed file system can provide a uniform global namespace, support a range of file I/O protocols, require little or no program modification, and provide means that enable performance optimizations to be implemented, such as the use of caches.
• Security and authorization. Any distributed system involves all four aspects of security: confidentiality, integrity, authentication, and accountability. Security within a Grid environment is a complex issue requiring diverse, autonomously administered resources to interact in a manner that does not impact the usability of the resources or introduce security holes/lapses in individual systems or the environment as a whole. A security infrastructure is key to the success or failure of a Grid environment.
• System status and fault tolerance. To provide a reliable and robust environment it is important that a means of monitoring resources and applications is provided. To accomplish this task, tools that monitor resources and applications need to be deployed.
• Resource management and scheduling. The management of processor time, memory, network, storage, and other components in a Grid is clearly very important. The overall aim is to efficiently and effectively schedule the applications that need to utilize the available resources in the Grid computing environment. From a user's point of view, resource management and scheduling should be transparent, with the user's interaction confined to a simple mechanism for submitting their application. It is important in a Grid that the resource management and scheduling service can interact with those that may be installed locally.
• Computational economy and resource trading. As a Grid is constructed by coupling resources distributed across various organizations and administrative domains, it is essential to support mechanisms and policies that help regulate resource supply and demand. An economic approach is one means of managing resources in a complex and decentralized manner. This approach provides incentives for resource owners and users to be part of the Grid, and to develop and use strategies that help maximize their objectives.
• Programming tools and paradigms. Grid applications (multi-disciplinary applications) couple resources that cannot be replicated at a single site, or that may be globally located for other practical reasons. A Grid should include interfaces, APIs, utilities, and tools to provide a rich development environment. Common scientific languages such as C, C++, and Fortran should be available, as should application-level interfaces such as MPI and PVM. A variety of programming paradigms should be supported, such as message passing or distributed shared memory. In addition, a suite of numerical and other commonly used libraries should be available.
• User and administrative GUI. The interfaces to the services and resources available should be intuitive and easy to use. In addition, they should work on a range of different platforms and operating systems. They also need to take advantage of Web technologies to offer a portal view of supercomputing. This Web-centric approach to accessing supercomputing resources should enable users to access any resource from anywhere, over any platform, at any time. That means users should be allowed to submit their jobs to computational resources through a Web interface from any accessible platform, such as a PC, laptop, or personal digital assistant, thus supporting ubiquitous access to the Grid. The provision of access to scientific applications through the Web (e.g. RWCP's parallel protein information analysis system) leads to the creation of science portals.
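The transparent resource management and scheduling described above can be sketched as a toy broker: users submit jobs without naming a machine, and the broker places each job on the least-loaded registered resource. The class, resource names, and least-loaded policy are illustrative stand-ins for a real scheduling service:

```python
# Toy resource broker: scheduling is transparent to the user, who only
# submits a job; the broker picks the least-loaded registered resource.
# Names and the load metric (running-job count) are illustrative.

class Broker:
    def __init__(self):
        self.load = {}  # resource name -> number of running jobs

    def register(self, resource):
        self.load.setdefault(resource, 0)

    def submit(self, job):
        """Place a job on the least-loaded resource and return its name."""
        resource = min(self.load, key=self.load.get)
        self.load[resource] += 1
        return resource

broker = Broker()
for r in ("hpc-1", "hpc-2"):
    broker.register(r)
placements = [broker.submit(f"job-{i}") for i in range(4)]
print(placements)  # jobs alternate across the two resources
```

A real broker would also honor local site policies, as the text notes, by negotiating with whatever resource manager is installed locally rather than incrementing a counter.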
GRID ARCHITECTURE
Our goal in describing our Grid architecture is not to provide a complete enumeration of all required protocols (and services, APIs, and SDKs) but rather to identify requirements for general classes of components. The result is an extensible, open architectural structure within which solutions to key VO requirements can be placed. Our architecture and the subsequent discussion organize components into layers, as shown in Figure 3. Components within each layer share common characteristics but can build on capabilities and behaviors provided by any lower layer.
In specifying the various layers of the Grid architecture, we follow the principles of the hourglass model. The narrow neck of the hourglass defines a small set of core abstractions and protocols (e.g., TCP and HTTP in the Internet), onto which many different high-level behaviors can be mapped (the top of the hourglass), and which themselves can be mapped onto many different underlying technologies (the base of the hourglass). By definition, the number of protocols defined at the neck must be small. In our architecture, the neck of the hourglass consists of Resource and Connectivity protocols, which facilitate the sharing of individual resources. Protocols at these layers are designed so that they can be implemented on top of a diverse range of resource types, defined at the Fabric layer, and can in turn be used to construct a wide range of global services and application-specific behaviors at the Collective layer (so called because they involve the coordinated, collective use of multiple resources).

Figure 3. The layered Grid architecture and its relationship to the Internet protocol architecture. Because the Internet protocol architecture extends from network to application, there is a mapping from Grid layers into Internet layers.
Fabric: Interfaces to Local Control
The Grid Fabric layer provides the resources to which shared access is mediated by Grid protocols: for example, computational resources, storage systems, catalogs, network resources, and sensors. A resource may be a logical entity, such as a distributed file system, computer cluster, or distributed computer pool; in such cases, a resource implementation may involve internal protocols (e.g., the NFS storage access protocol or a cluster resource management system's process management protocol), but these are not the concern of Grid architecture.
Fabric components implement the local, resource-specific operations that occur on specific resources (whether physical or logical) as a result of sharing operations at higher levels. There is thus a tight and subtle interdependence between the functions implemented at the Fabric level, on the one hand, and the sharing operations supported, on the other. Richer Fabric functionality enables more sophisticated sharing operations; at the same time, if we place few demands on Fabric elements, then deployment of Grid infrastructure is simplified. For example, resource-level support for advance reservations makes it possible for higher-level services to aggregate (coschedule) resources in interesting ways that would otherwise be impossible to achieve. However, as in practice few resources support advance reservation out of the box, a requirement for advance reservation increases the cost of incorporating new resources into a Grid.
Experience suggests that at a minimum, resources should implement enquiry mechanisms that permit discovery of their structure, state, and capabilities (e.g., whether they support advance reservation) on the one hand, and resource management mechanisms that provide some control of delivered quality of service, on the other. The following brief and partial list provides a resource-specific characterization of capabilities.
Computational resources: Mechanisms are required for starting programs and for monitoring and controlling the execution of the resulting processes. Management mechanisms that allow control over the resources allocated to processes are useful, as are advance reservation mechanisms. Enquiry functions are needed for determining hardware and software characteristics as well as relevant state information such as current load and queue state in the case of scheduler-managed resources.
Storage resources: Mechanisms are required for putting and getting files. Third-party and high-performance (e.g., striped) transfers are useful. So are mechanisms for reading and writing subsets of a file and/or executing remote data selection or reduction functions. Management mechanisms that allow control over the resources allocated to data transfers (space, disk bandwidth, network bandwidth, CPU) are useful, as are advance reservation mechanisms. Enquiry functions are needed for determining hardware and software characteristics as well as relevant load information such as available space and bandwidth utilization.
Network resources: Management mechanisms that provide control over the resources allocated to network transfers (e.g., prioritization, reservation) can be useful. Enquiry functions should be provided to determine network characteristics and load.
Code repositories: This specialized form of storage resource requires mechanisms for managing versioned source and object code: for example, a version control system such as CVS.
Catalogs: This specialized form of storage resource requires mechanisms for implementing catalog query and update operations: for example, a relational database.
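The Fabric-level mechanisms for computational resources, starting programs and then monitoring and controlling the resulting processes, can be sketched with the standard `subprocess` module. The launched command is illustrative; a real Fabric component would wrap a local resource manager rather than a bare OS process:

```python
# Sketch of Fabric-level process mechanisms for a computational resource:
# start a program, monitor it to completion, and inspect its exit status.
# (Control, e.g. proc.terminate(), uses the same Popen handle.)
import subprocess
import sys

# Start: launch a short program as a managed local process.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello from the fabric')"],
    stdout=subprocess.PIPE,
    text=True,
)

# Monitor: wait for completion and collect output and exit status.
output, _ = proc.communicate(timeout=30)
print(output.strip())
print("exit status:", proc.returncode)
```

Enquiry functions (hardware characteristics, current load, queue state) would sit alongside these process operations; here only the start/monitor pair is shown.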
Connectivity: Communicating Easily and Securely
The Connectivity layer defines core communication and authentication protocols required for Grid-specific network transactions. Communication protocols enable the exchange of data between Fabric layer resources. Authentication protocols build on communication services to provide cryptographically secure mechanisms for verifying the identity of users and resources.
Communication requirements include transport, routing, and naming. While alternatives certainly exist, we assume here that these protocols are drawn from the TCP/IP protocol stack: specifically, the Internet (IP and ICMP), transport (TCP, UDP), and application (DNS, OSPF, RSVP, etc.) layers of the Internet layered protocol architecture. This is not to say that in the future, Grid communications will not demand new protocols that take into account particular types of network dynamics.
With respect to security aspects of the Connectivity layer, we observe that the complexity of the security problem makes it important that any solutions be based on existing standards whenever possible. As with communication, many of the security standards developed within the context of the Internet protocol suite are applicable.
Authentication solutions for VO environments should have the following characteristics:
Single sign on. Users must be able to log on (authenticate) just once and then have access to multiple Grid resources defined in the Fabric layer, without further user intervention.
Delegation. A user must be able to endow a program with the ability to run on that user's behalf, so that the program is able to access the resources on which the user is authorized. The program should (optionally) also be able to conditionally delegate a subset of its rights to another program (sometimes referred to as restricted delegation).
Integration with various local security solutions: Each site or resource provider may employ any of a variety of local security solutions, including Kerberos and Unix security. Grid security solutions must be able to interoperate with these various local solutions. They cannot, realistically, require wholesale replacement of local security solutions but rather must allow mapping into the local environment.
User-based trust relationships: In order for a user to use resources from multiple providers together, the security system must not require each of the resource providers to cooperate or interact with each other in configuring the security environment. For example, if a user has the right to use sites A and B, the user should be able to use sites A and B together without requiring that A's and B's security administrators interact.
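The restricted-delegation requirement above can be illustrated with a toy capability object whose delegated rights can never exceed the delegator's. The rights vocabulary is invented for the example; real Grid systems use cryptographically signed proxy credentials, not plain in-memory objects:

```python
# Sketch of restricted delegation: a user hands a program a capability
# carrying only a subset of the user's rights. Rights strings are
# illustrative; real systems sign and verify these as proxy credentials.

class Capability:
    def __init__(self, owner, rights):
        self.owner = owner
        self.rights = frozenset(rights)

    def delegate(self, subset):
        """Issue a narrower capability; it may never exceed the parent's rights."""
        subset = frozenset(subset)
        if not subset <= self.rights:
            raise PermissionError("cannot delegate rights you do not hold")
        return Capability(self.owner, subset)

    def allows(self, right):
        return right in self.rights

user = Capability("alice", {"read:/data", "write:/data", "submit:jobs"})
worker = user.delegate({"read:/data"})  # program gets a read-only subset
print(worker.allows("read:/data"))      # True
print(worker.allows("write:/data"))     # False
```

The invariant enforced in `delegate` (subset-only) is the essence of restricted delegation: a program acting on the user's behalf can pass on less authority, never more.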

Resource: Sharing Single Resources
The Resource layer builds on Connectivity layer communication and authentication protocols to define protocols (and APIs and SDKs) for the secure negotiation, initiation, monitoring, control, accounting, and payment of sharing operations on individual resources. Resource layer implementations of these protocols call Fabric layer functions to access and control local resources. Resource layer protocols are concerned entirely with individual resources and hence ignore issues of global state and atomic actions across distributed collections; such issues are the concern of the Collective layer discussed next.
Two primary classes of Resource layer protocols can be distinguished:
Information protocols are used to obtain information about the structure and state of a resource, for example, its configuration, current load, and usage policy (e.g., cost).
Management protocols are used to negotiate access to a shared resource, specifying, for example, resource requirements (including advanced reservation and quality of service) and the operation(s) to be performed, such as process creation or data access. Since management protocols are responsible for instantiating sharing relationships, they must serve as a policy application point, ensuring that the requested protocol operations are consistent with the policy under which the resource is to be shared. Issues that must be considered include accounting and payment. A protocol may also support monitoring the status of an operation and controlling (for example, terminating) the operation.
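The two Resource-layer protocol classes can be sketched side by side: an information query that reports a resource's configuration and state read-only, and a management request that acts as a policy application point before granting access. All field names and the policy rule are illustrative:

```python
# Sketch of the two Resource-layer protocol classes. The resource record,
# its policy, and the request shape are illustrative, not a real protocol.

RESOURCE = {
    "config": {"cpus": 64, "memory_gb": 256},
    "load": 0.35,
    "policy": {"max_cpus_per_request": 16},
}

def info_query(resource):
    """Information protocol: structure and state, strictly read-only."""
    return {"config": resource["config"], "load": resource["load"]}

def management_request(resource, cpus):
    """Management protocol: checks local policy before granting access."""
    if cpus > resource["policy"]["max_cpus_per_request"]:
        return {"granted": False, "reason": "exceeds usage policy"}
    return {"granted": True, "cpus": cpus}

print(info_query(RESOURCE)["load"])
print(management_request(RESOURCE, 8))
print(management_request(RESOURCE, 32))
```

Note that the policy check lives with the resource, not the requester, which is what makes the management protocol the "policy application point" described above.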

Collective: Coordinating Multiple Resources
While the Resource layer is focused on interactions with a single resource, the next layer in the architecture contains protocols and services (and APIs and SDKs) that are not associated with any one specific resource but rather are global in nature and capture interactions across collections of resources. For this reason, we refer to the next layer of the architecture as the Collective layer. Because Collective components build on the narrow Resource and Connectivity layer neck in the protocol hourglass, they can implement a wide variety of sharing behaviors without placing new requirements on the resources being shared. For example:
Directory services allow VO participants to discover the existence and/or properties of VO resources. A directory service may allow its users to query for resources by name and/or by attributes such as type, availability, or load. Resource-level GRRP and GRIP protocols are used to construct directories.
Co-allocation, scheduling, and brokering services allow VO participants to request the allocation of one or more resources for a specific purpose and the scheduling of tasks on the appropriate resources. Examples include AppLeS, Condor-G, Nimrod-G, and the DRM broker.
Monitoring and diagnostics services support the monitoring of VO resources for failure, adversarial attack (intrusion detection), overload, and so forth.
Data replication services support the management of VO storage (and perhaps also network and computing) resources to maximize data access performance with respect to metrics such as response time, reliability, and cost.
Grid-enabled programming systems enable familiar programming models to be used in Grid environments, using various Grid services to address resource discovery, security, resource allocation, and other concerns. Examples include Grid-enabled implementations of the Message Passing Interface and manager-worker frameworks.
Workload management systems and collaboration frameworks, also known as problem-solving environments (PSEs), provide for the description, use, and management of multi-step, asynchronous, multi-component workflows.
Software discovery services discover and select the best software implementation and execution platform based on the parameters of the problem being solved. Examples include NetSolve and Ninf.
Community authorization servers enforce community policies governing resource access, generating capabilities that community members can use to access community resources. These servers provide a global policy enforcement service by building on resource information, and resource management protocols (in the Resource layer) and security protocols in the Connectivity layer. Akenti addresses some of these issues.
Community accounting and payment services gather resource usage information for the purpose of accounting, payment, and/or limiting of resource usage by community members.
Collaboratory services support the coordinated exchange of information within potentially large user communities, whether synchronously or asynchronously. Examples are CAVERNsoft, Access Grid, and commodity groupware systems.
These examples illustrate the wide variety of Collective layer protocols and services that are encountered in practice. Notice that while Resource layer protocols must be general in nature and are widely deployed, Collective layer protocols span the spectrum from general purpose to highly application or domain specific, with the latter existing perhaps only within specific VOs.
Collective functions can be implemented as persistent services, with associated protocols, or as SDKs (with associated APIs) designed to be linked with applications. In both cases, their implementation can build on Resource layer (or other Collective layer) protocols and APIs. For example, Figure 4 shows a Collective co-allocation API and SDK (the middle tier) that uses a Resource layer management protocol to manipulate underlying resources. Above this, we define a co-reservation service protocol and implement a co-reservation service that speaks this protocol, calling the co-allocation API to implement co-allocation operations and perhaps providing additional functionality, such as authorization, fault tolerance, and logging. An application might then use the co-reservation service protocol to request end-to-end network reservations.
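The layering just described, a Collective-layer operation built out of Resource-layer calls, can be sketched as an all-or-nothing co-allocation: every per-resource reservation must succeed, and a partial success is rolled back. Both functions are illustrative stand-ins for real protocols:

```python
# Sketch of a Collective-layer co-allocation built on a Resource-layer
# reservation primitive. Pool contents and slot counts are illustrative.

def resource_reserve(resource, available, slots):
    """Resource-layer management operation on one resource."""
    if available.get(resource, 0) >= slots:
        available[resource] -= slots
        return True
    return False

def co_allocate(resources, available, slots):
    """Collective-layer operation: reserve on all resources or none."""
    reserved = []
    for r in resources:
        if resource_reserve(r, available, slots):
            reserved.append(r)
        else:
            for done in reserved:      # roll back partial reservations
                available[done] += slots
            return False
    return True

pool = {"site-a": 4, "site-b": 1}
print(co_allocate(["site-a", "site-b"], pool, 2))  # site-b too small: False
print(pool)  # rollback restored site-a's slots
```

The co-allocator places no new requirements on the individual resources; it only composes the existing per-resource operation, which is exactly the hourglass argument made for the Collective layer.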

Figure 4. Collective and Resource layer protocols, services, APIs, and SDKs can be combined in a variety of ways to deliver functionality to applications.
Collective components may be tailored to the requirements of a specific user community, VO, or application domain, for example, an SDK that implements an application-specific coherency protocol, or a co-reservation service for a specific set of network resources. Other Collective components can be more general-purpose, for example, a replication service that manages an international collection of storage systems for multiple communities, or a directory service designed to enable the discovery of VOs. In general, the larger the target user community, the more important it is that a Collective component's protocol(s) and API(s) be standards based.
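The co-reservation example above can be sketched in miniature. The following is a hypothetical illustration (all class and function names are invented for this sketch, not taken from any Grid toolkit): a co-reservation service calls a Collective-layer co-allocation routine, which in turn drives individual Resource-layer managers with all-or-nothing semantics.

```python
# Illustrative layering: co-reservation service -> co-allocation API ->
# per-resource managers. Names and semantics are assumptions for this sketch.

class ResourceManager:
    """Resource layer: manages one resource via a (simulated) management protocol."""
    def __init__(self, name, capacity):
        self.name = name
        self.free = capacity

    def reserve(self, amount):
        if amount > self.free:
            raise RuntimeError(f"{self.name}: insufficient capacity")
        self.free -= amount
        return (self.name, amount)


def co_allocate(managers, amount):
    """Collective-layer API: reserve `amount` on every resource, or on none."""
    granted = []
    try:
        for m in managers:
            granted.append(m.reserve(amount))
    except RuntimeError:
        for name, amt in granted:  # roll back any partial reservations
            next(m for m in managers if m.name == name).free += amt
        raise
    return granted


class CoReservationService:
    """Higher-level service adding logging (and, in reality, authorization,
    fault tolerance, etc.) on top of the co-allocation API."""
    def __init__(self, managers):
        self.managers = managers
        self.log = []

    def request(self, amount):
        grant = co_allocate(self.managers, amount)
        self.log.append(("granted", amount))
        return grant
```

A client would call only the co-reservation service; the layering mirrors the figure, with each tier building on the protocol or API of the one below it.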
Applications
The final layer in our Grid architecture comprises the user applications that operate within a VO environment. Figure 5 illustrates an application programmer's view of Grid architecture. Applications are constructed in terms of, and by calling upon, services defined at any layer. At each layer, we have well-defined protocols that provide access to some useful service: resource management, data access, resource discovery, and so forth. At each layer, APIs may also be defined whose implementation (ideally provided by third-party SDKs) exchange protocol messages with the appropriate service(s) to perform desired actions.

Figure 5. APIs are implemented by software development kits (SDKs), which in turn use Grid protocols to interact with network services that provide capabilities to the end user. Higher-level SDKs can provide functionality that is not directly mapped to a specific protocol, but may combine protocol operations with calls to additional APIs as well as implement local functionality. Solid lines represent direct calls; dashed lines represent protocol interactions.
We emphasize that what we label applications and show in a single layer in Figure 4 may in practice call upon sophisticated frameworks and libraries (e.g., the Common Component Architecture, SciRun, CORBA, Cactus, workflow systems) and feature much internal structure that would, if captured in our figure, expand it out to many times its current size. These frameworks may themselves define protocols, services, and/or APIs (e.g., the Simple Workflow Access Protocol). However, these issues are beyond the scope of this article, which addresses only the most fundamental protocols and services required in a Grid.

GRID APPLICATIONS
What types of applications will grids be used for? Building on experiences in gigabit testbeds, the I-WAY network, and other experimental systems, we have identified five major application classes for computational grids, which are described briefly in this section. More details about applications and their technical requirements are provided in the referenced chapters.
Distributed Supercomputing
Distributed supercomputing applications use grids to aggregate substantial computational resources in order to tackle problems that cannot be solved on a single system. Depending on the grid on which we are working, these aggregated resources might comprise the majority of the supercomputers in the country or simply all of the workstations within a company. Here are some contemporary examples:
Distributed interactive simulation (DIS) is a technique used for training and planning in the military. Realistic scenarios may involve hundreds of thousands of entities, each with potentially complex behavior patterns. Yet even the largest current supercomputers can handle at most 20,000 entities. In recent work, researchers at the California Institute of Technology have shown how multiple supercomputers can be coupled to achieve record-breaking levels of performance.
The accurate simulation of complex physical processes can require high spatial and temporal resolution in order to resolve fine-scale detail. Coupled supercomputers can be used in such situations to overcome resolution barriers and hence to obtain qualitatively new scientific results. Although high latencies can pose significant obstacles, coupled supercomputers have been used successfully in cosmology, high-resolution ab initio computational chemistry computations, and climate modeling.
Challenging issues from a grid architecture perspective include the need to co-schedule what are often scarce and expensive resources, the scalability of protocols and algorithms to tens or hundreds of thousands of nodes, latency-tolerant algorithms, and achieving and maintaining high levels of performance across heterogeneous systems.
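The resolution-barrier examples above rely on domain decomposition with boundary ("halo") exchange between the coupled machines. The following toy sketch (hypothetical code, not drawn from any of the cited projects) splits a one-dimensional diffusion computation across two "machines"; the halo exchange is the step that crosses the wide-area network and motivates latency-tolerant algorithms.

```python
# Toy coupled-supercomputer run: a 1-D domain is split in half, each half is
# updated locally, and only boundary ("halo") cells are exchanged per step.

def step_half(cells, left_halo, right_halo):
    """One averaging-diffusion step on a sub-domain, given neighbour halos."""
    padded = [left_halo] + cells + [right_halo]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

def coupled_run(domain, steps):
    """Run `steps` iterations with the domain split across two 'machines'."""
    mid = len(domain) // 2
    a, b = domain[:mid], domain[mid:]
    for _ in range(steps):
        # Halo exchange: in a real grid this is a wide-area message, so
        # latency-tolerant codes overlap it with interior computation.
        a_right, b_left = b[0], a[-1]
        a = step_half(a, a[0], a_right)   # far ends use reflective halos
        b = step_half(b, b_left, b[-1])
    return a + b
```

Because halos are exchanged before every step, the decomposed run produces exactly the same values as a single-machine run of the same scheme; the engineering challenge is hiding the cost of that exchange.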
High-Throughput Computing
In high-throughput computing, the grid is used to schedule large numbers of loosely coupled or independent tasks, with the goal of putting unused processor cycles (often from idle workstations) to work. The result may be, as in distributed supercomputing, the focusing of available resources on a single problem, but the quasi-independent nature of the tasks involved leads to very different types of problems and problem-solving methods. Here are some examples:
• Platform Computing Corporation reports that the microprocessor manufacturer Advanced Micro Devices used high-throughput computing techniques to exploit over a thousand computers during the peak design phases of their K6 and K7 microprocessors. These computers are located on the desktops of AMD engineers at a number of AMD sites and were used for design verification only when not in use by engineers.
• The Condor system from the University of Wisconsin is used to manage pools of hundreds of workstations at universities and laboratories around the world. These resources have been used for studies as diverse as molecular simulations of liquid crystals, studies of ground-penetrating radar, and the design of diesel engines.
• More loosely organized efforts have harnessed tens of thousands of computers distributed worldwide to tackle hard cryptographic problems.
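The cycle-scavenging idea behind Condor-style systems can be sketched as follows. This is a hypothetical illustration (the Workstation class, queue policy, and reclaim behavior are invented for this example, not Condor's actual mechanisms): independent tasks are dispatched only to idle workstations, and a task is requeued when its host's owner returns.

```python
# Minimal cycle-scavenging sketch: tasks run only on idle workstations and
# are requeued when the owner reclaims the machine.

from collections import deque

class Workstation:
    def __init__(self, name):
        self.name = name
        self.idle = True      # no owner activity detected
        self.running = None   # guest task currently hosted, if any

def schedule(tasks, stations):
    """Assign queued tasks to idle stations; return dispatches and leftovers."""
    queue = deque(tasks)
    dispatched = []
    for ws in stations:
        if ws.idle and queue:
            ws.running = queue.popleft()
            dispatched.append((ws.running, ws.name))
    return dispatched, list(queue)

def reclaim(ws, queue):
    """Owner came back: evict the guest task and requeue it at the front."""
    if ws.running is not None:
        queue.insert(0, ws.running)
        ws.running = None
    ws.idle = False
    return queue
```

The quasi-independence of the tasks is what makes this policy workable: an evicted task loses only its own progress, not that of the whole computation.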
On-Demand Computing
On-demand applications use grid capabilities to meet short-term requirements for resources that cannot be cost effectively or conveniently located locally. These resources may be computation, software, data repositories, specialized sensors, and so on. In contrast to distributed supercomputing applications, these applications are often driven by cost-performance concerns rather than absolute performance. For example:
• The NEOS and NetSolve network-enhanced numerical solver systems allow users to couple remote software and resources into desktop applications, dispatching to remote servers calculations that are computationally demanding or that require specialized software.
• A computer-enhanced MRI machine and scanning tunneling microscope (STM) developed at the National Center for Supercomputing Applications use supercomputers to achieve real-time image processing. The result is a significant enhancement in the ability to understand what we are seeing and, in the case of the microscope, to steer the instrument.
• A system developed at the Aerospace Corporation for processing of data from meteorological satellites uses dynamically acquired supercomputer resources to deliver the results of a cloud detection algorithm to remote meteorologists in quasi-real time.
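The dispatch pattern behind NEOS- and NetSolve-style systems, in which a desktop application ships a demanding calculation to whichever remote server is available and cheapest, might be sketched as below. The server registry and cost fields are assumptions made for illustration; real systems add discovery, authentication, fault tolerance, and payment.

```python
# On-demand dispatch sketch: try remote solvers in order of advertised cost,
# fall back to a local routine if none is reachable.

def local_solve(data):
    return sorted(data)  # stand-in for an expensive local computation

def dispatch(data, servers):
    """Pick the cheapest available server; fall back to local computation."""
    for server in sorted(servers, key=lambda s: s["cost"]):
        if server["up"]:
            return server["name"], server["solve"](data)
    return "local", local_solve(data)
```

The point of the sketch is the cost-performance driver mentioned above: the client chooses where to compute per call, rather than owning the resource.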
The challenging issues in on-demand applications derive primarily from the dynamic nature of resource requirements and the potentially large populations of users and resources. These issues include resource location, scheduling, code management, configuration, fault tolerance, security, and payment mechanisms.
Data-Intensive Computing
In data-intensive applications, the focus is on synthesizing new information from data that is maintained in geographically distributed repositories, digital libraries, and databases. This synthesis process is often computationally and communication intensive as well.

• Future high-energy physics experiments will generate terabytes of data per day, or around a petabyte per year. The complex queries used to detect "interesting" events may need to access large fractions of this data. The scientific collaborators who will access this data are widely distributed, and hence the data systems in which data is placed are likely to be distributed as well.
• The Digital Sky Survey will, ultimately, make many terabytes of astronomical photographic data available in numerous network-accessible databases. This facility enables new approaches to astronomical research based on distributed analysis, assuming that appropriate computational grid facilities exist.
• Modern meteorological forecasting systems make extensive use of data assimilation to incorporate remote satellite observations. The complete process involves the movement and processing of many gigabytes of data.
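A recurring sub-problem in the examples above is deciding which replica of a distributed dataset to read. The following sketch plans a fetch by picking, for each needed block of data, the lowest-cost replica that holds it; the site names, costs, and data model are hypothetical and chosen only to illustrate the idea.

```python
# Replica-selection sketch for a data grid: each needed block is fetched from
# the cheapest site that holds a copy of it.

def plan_fetch(blocks_needed, replicas):
    """Map each required block to the replica site with the lowest cost."""
    plan = {}
    for block in blocks_needed:
        holders = [(r["cost"], r["site"]) for r in replicas
                   if block in r["blocks"]]
        if not holders:
            raise KeyError(f"block {block!r} not replicated anywhere")
        plan[block] = min(holders)[1]   # tuples compare by cost first
    return plan
```

Real data grids layer catalogues, caching, and scheduled wide-area transfers on top of this basic selection step.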
Challenging issues in data-intensive applications are the scheduling and configuration of complex, high-volume data flows through multiple levels of hierarchy.

Collaborative Computing
Collaborative applications are concerned primarily with enabling and enhancing human-to-human interactions. Such applications are often structured in terms of a virtual shared space. Many collaborative applications are concerned with enabling the shared use of computational resources such as data archives and simulations; in this case, they also have characteristics of the other application classes just described. For example:
• The BoilerMaker system developed at Argonne National Laboratory allows multiple users to collaborate on the design of emission control systems in industrial incinerators. The different users interact with each other and with a simulation of the incinerator.
• The CAVE5D system supports remote, collaborative exploration of large geophysical data sets and the models that generate them, for example, a coupled physical/biological model of the Chesapeake Bay.
• The NICE system developed at the University of Illinois at Chicago allows children to participate in the creation and maintenance of realistic virtual worlds, for entertainment and education.
Challenging aspects of collaborative applications from a grid architecture perspective are the real-time requirements imposed by human perceptual capabilities and the rich variety of interactions that can take place.
We conclude this section with three general observations. First, we note that even in this brief survey we see a tremendous variety of already successful applications. This rich set has been developed despite the significant difficulties faced by programmers developing grid applications in the absence of a mature grid infrastructure. As grids evolve, we expect the range and sophistication of applications to increase dramatically. Second, we observe that almost all of the applications demonstrate a tremendous appetite for computational resources (CPU, memory, disk, etc.) that cannot be met in a timely fashion by expected growth in single-system performance. This emphasizes the importance of grid technologies as a means of sharing computation as well as a data access and communication medium. Third, we see that many of the applications are interactive, or depend on tight synchronization with computational components, and hence depend on the availability of a grid infrastructure able to provide robust performance guarantees.

CONCLUSIONS AND FUTURE TRENDS
There are currently a large number of projects and a diverse range of new and emerging Grid developmental approaches being pursued. These systems range from Grid frameworks to application testbeds, and from collaborative environments to batch submission mechanisms.
It is difficult to predict the future in a field such as information technology where the technological advances are moving very rapidly. Hence, it is not an easy task to forecast what will become the 'dominant' Grid approach. Windows of opportunity for ideas and products seem to open and close in the 'blink of an eye'. However, some trends are evident. One of those is growing interest in the use of Java and Web services for network computing.
The Java programming language successfully addresses several key issues that accelerate the development of Grid environments, such as heterogeneity and security. It also removes the need to install programs remotely; the minimum execution environment is a Java-enabled Web browser. Java, with its related technologies and growing repository of tools and utilities, is having a huge impact on the growth and development of Grid environments. From a relatively slow start, the developments in Grid computing are accelerating fast with the advent of these new and emerging technologies. It is very hard to ignore the presence of the Common Object Request Broker Architecture (CORBA) in the background. We believe that frameworks incorporating CORBA services will be very influential on the design of future Grid environments.
The two other emerging Java technologies for Grid and P2P computing are Jini and JXTA. The Jini architecture exemplifies a network-centric, service-based approach to computer systems. Jini replaces the notions of peripherals, devices, and applications with that of network-available services. Jini helps break down the conventional view of what a computer is, while including new classes of services that work together in a federated architecture. The ability to move code from the server to its client is the core difference between the Jini environment and other distributed systems, such as CORBA and the Distributed Component Object Model (DCOM).
Whatever the technology or computing infrastructure that becomes predominant or most popular, it can be guaranteed that at some stage in the future its star will wane. Historically, in the field of computer research and development, this fact can be repeatedly observed. The lesson from this observation must therefore be drawn that, in the long term, backing only one technology can be an expensive mistake. The framework that provides a Grid environment must be adaptable, malleable, and extensible. As technology and fashions change it is crucial that Grid environments evolve with them.
Smarr observes that Grid computing has serious social consequences and is going to have as revolutionary an effect as railroads did in the American Midwest in the 19th century. Instead of a 30-40 year lead time to see its effects, however, its impact is going to be much faster. Smarr concludes by noting that the effects of Grids are going to change the world so quickly that mankind will struggle to react and change in the face of the challenges and issues they present. Therefore, at some stage in the future, our computing needs will be satisfied in the same pervasive and ubiquitous manner that we use the electricity power grid. The analogies with the generation and delivery of electricity are hard to ignore, and the implications are enormous. In fact, the Grid is analogous to the electricity (power) Grid, and the vision is to offer (almost) dependable, consistent, pervasive, and inexpensive access to resources irrespective of where they physically exist and from where they are accessed.
BIBLIOGRAPHY
1. I. Foster and C. Kesselman (editors). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco, CA, 1999.
2. I. Foster, C. Kesselman, and S. Tuecke. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of High Performance Computing Applications.
3. R. Buyya and M. Baker. Grids and Grid Technologies for Wide-Area Distributed Computing. Software: Practice and Experience.
4. globus.org
5. I. Foster. The Grid: A New Infrastructure for 21st Century Science. Physics Today.

ACKNOWLEDGMENT

I express my sincere thanks to Prof. M. N. Agnisarman Namboothiri (Head of the Department, Computer Science and Engineering, MESCE) and Mr. Sminesh (staff in charge) for their kind cooperation in presenting this seminar.
I also extend my sincere thanks to all other members of the faculty of the Computer Science and Engineering Department and to my friends for their cooperation and encouragement.
ABDUL HASEEB K


CONTENTS
1. INTRODUCTION
• Benefits of Grid Computing
• Levels of Deployment
2. GRID CONSTRUCTION: GENERAL PRINCIPLES
• Design Features
3. GRID ARCHITECTURE
• Fabric: Interfaces to Local Control
• Connectivity: Communicating Easily and Securely
• Resource: Sharing Single Resources
• Collective: Coordinating Multiple Resources
• Applications
4. GRID APPLICATIONS
• Distributed Supercomputing
• High-Throughput Computing
• On-Demand Computing
• Data-Intensive Computing
• Collaborative Computing
5. CONCLUSIONS AND FUTURE TRENDS
6. BIBLIOGRAPHY
#2


GRID COMPUTING
ABSTRACT
Mankind is right in the middle of another evolutionary technological transition, which once more will change the way we do things. And, you guessed right, it has to do with the Internet. It's called "The Grid": the infrastructure for the advanced Web, for computing, collaboration, and communication.
Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources.
This paper aims to present the state-of-the-art concepts of Grid computing. A set of general principles, services, and design criteria that can be followed in Grid construction are given. One Grid application project, Legion, is taken up. We conclude with future trends in this yet-to-be-conquered technology.


INTRODUCTION
The popularity of the Internet as well as the availability of powerful computers and high-speed network technologies as low-cost commodity components is changing the way we use computers today. These technology opportunities have led to the possibility of using distributed computers as a single, unified computing resource, leading to what is popularly known as Grid computing.
The term Grid is chosen as an analogy to a Power Grid that provides consistent, pervasive, dependable, transparent access to electricity irrespective of its source. This new approach to network computing is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer to peer (P2P) computing.
Grids enable the sharing, selection, and aggregation of a wide variety of resources, including supercomputers, storage systems, data sources, and specialized devices, that are geographically distributed and owned by different organizations, for solving large-scale computational and data-intensive problems in science, engineering, and commerce. Grids thus create virtual organizations and enterprises: temporary alliances of enterprises or organizations that come together to share resources, skills, and core competencies in order to better respond to business opportunities or large-scale application processing requirements, and whose cooperation is supported by computer networks.
The concept of Grid computing started as a project to link geographically dispersed supercomputers, but now it has grown far beyond its original intent. The Grid infrastructure can benefit many applications, including collaborative engineering, data exploration, high-throughput computing, and distributed supercomputing.

Grid Computing: A Conceptual View
SERVICES OFFERED BY GRID:
A Grid can be viewed as a seamless, integrated computational and collaborative environment; a high-level view of activities within the Grid is shown in the figure. The users interact with the Grid resource broker to solve problems, which in turn performs resource discovery, scheduling, and the processing of application jobs on the distributed Grid resources. From the end-user point of view, Grids can be used to provide the following types of services.
Computational services: These are concerned with providing secure services for executing application jobs on distributed computational resources individually or collectively. Resource brokers provide the services for collective use of distributed resources. A Grid providing computational services is often called a Computational Grid. Some examples of computational Grids are: NASA IPG, the World Wide Grid, and the NSF TeraGrid.
Data services: These are concerned with providing secure access to distributed datasets and their management. To provide scalable storage and access to datasets, they may be replicated and catalogued, and different datasets may even be stored in different locations to create an illusion of mass storage. The processing of datasets is carried out using computational Grid services, and such a combination is commonly called a Data Grid. Sample applications that need such services for the management, sharing, and processing of large datasets are high-energy physics and accessing distributed chemical databases for drug design.
Application services: These are concerned with application management and providing access to remote software and libraries transparently. The emerging technologies such as Web services are expected to play a leading role in defining application services. They build on computational and data services provided by the Grid. An example system that can be used to develop such services is NetSolve.
Information services: These are concerned with the extraction and presentation of data with meaning by using the services of computational, data, and/or application services. This level handles the low-level details of the way that information is represented, stored, accessed, shared, and maintained. Given its key role in many scientific endeavors, the Web is the obvious point of departure for this level.
Knowledge services: These are concerned with the way that knowledge is acquired, used, retrieved, published, and maintained to assist users in achieving their particular goals and objectives. Knowledge is understood as information applied to achieve a goal, solve a problem, or execute a decision. An example of this is data mining for automatically building new knowledge.
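The broker workflow described at the start of this section, resource discovery followed by scheduling and job processing, can be illustrated in miniature. The matchmaking policy below (pick the least-loaded resource satisfying the job's minimum requirements) is an assumption chosen for illustration, not a description of any particular broker.

```python
# Resource-broker sketch: discovery filters the pool, scheduling picks a
# target, and dispatch is modeled by bumping the target's load.

def discover(resources, needs):
    """Discovery: keep only resources meeting the job's minimum requirements."""
    return [r for r in resources
            if r["cpus"] >= needs["cpus"] and r["mem_gb"] >= needs["mem_gb"]]

def schedule(job, resources):
    """Scheduling: send the job to the least-loaded suitable resource."""
    candidates = discover(resources, job["needs"])
    if not candidates:
        return None                          # nothing suitable discovered
    target = min(candidates, key=lambda r: r["load"])
    target["load"] += 1                      # account for the dispatched job
    return target["name"]
```

From the user's point of view this whole exchange is hidden: the job is submitted to the broker, and the Grid appears as a single resource.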
GRID CONSTRUCTION
GENERAL PRINCIPLES:
This section briefly highlights some of the general principles that underlie the construction of the Grid, in particular, the idealized design features that are required by a Grid to provide users with a seamless computing environment.
Multiple administrative domains and autonomy: Grid resources are geographically distributed across multiple administrative domains and owned by different organizations. The autonomy of resource owners needs to be honored along with their local resource management and usage policies.
Heterogeneity: A Grid involves a multiplicity of resources that are heterogeneous in nature and will encompass a vast range of technologies.
Scalability: A Grid might grow from a few integrated resources to millions. This raises the problem of potential performance degradation as the size of Grids increases. Consequently, applications that require a large number of geographically distributed resources must be designed to be latency and bandwidth tolerant.
Dynamicity or Adaptability: In a Grid, resource failure is the rule rather than the exception. In fact, with so many resources in a Grid, the probability of some resource failing is high. Resource managers or applications must tailor their behavior dynamically and use the available resources and services efficiently and effectively.
The steps necessary to realize a Grid include:
• the integration of individual software and hardware components into a combined networked resource (e.g. a single-system-image cluster);
• the deployment of low-level middleware to provide secure and transparent access to resources;
• the deployment of user-level middleware and tools for application development and the aggregation of distributed resources;
• the development and optimization of distributed applications to take advantage of the available resources and infrastructure.
Grid Architecture and Components
The components that are necessary to form a Grid (shown in the figure) are as follows:
Grid fabric: This consists of all the globally distributed resources that are accessible from anywhere on the Internet. These resources could be computers (such as PCs or Symmetric Multi-Processors) running a variety of operating systems (such as UNIX or Windows), storage devices, databases, and special scientific instruments such as a radio telescope or particular heat sensor.
Core Grid middleware: This offers core services such as remote process management, co-allocation of resources, storage access, information registration and discovery, security, and aspects of Quality of Service (QoS) such as resource reservation and trading.
User-level Grid middleware: This includes application development environments, programming tools and resource brokers for managing resources and scheduling application tasks for execution on global resources.
Grid applications and portals: Grid applications are typically developed using Grid-enabled languages and utilities such as HPC++ or MPI. An example application, such as parameter simulation or a grand-challenge problem, would require computational power, access to remote data sets, and may need to interact with scientific instruments. Grid portals offer Web-enabled application services, where users can submit and collect results for their jobs on remote resources through the Web.
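The four components above can be pictured as a call chain from portal down to fabric. The sketch below is purely illustrative; every function, policy, and resource name is invented, and real middleware adds security, discovery, and QoS at each layer.

```python
# Toy call chain through the Grid component stack described above.

def fabric_run(resource, task):                    # Grid fabric
    """A fabric resource actually executes the work."""
    return f"{task} ran on {resource}"

def core_submit(resource, task):                   # core Grid middleware
    """Core middleware: (pretend) secure remote process management."""
    assert resource in {"pc-unix", "smp-win"}, "unknown resource"
    return fabric_run(resource, task)

def broker_submit(task, resources):                # user-level middleware
    """Resource broker with a trivial 'first resource' scheduling policy."""
    return core_submit(resources[0], task)

def portal_submit(task):                           # applications and portals
    """Web portal entry point: the user never names a machine."""
    return broker_submit(task, ["pc-unix", "smp-win"])
```

Each layer only speaks to the one directly below it, which is what lets any layer be swapped out (a different broker, a different fabric) without disturbing the rest.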
Legion: An Example Grid Project
There are many international Grid projects worldwide, which are hierarchically categorized as integrated Grid systems, core middleware, user-level middleware, and applications/application-driven efforts. We now look briefly at one such project, an object-based metasystem developed at the University of Virginia.
Legion provides the software infrastructure so that a system of heterogeneous, geographically distributed, high-performance machines can interact seamlessly. Legion attempts to provide users, at their workstations, with a single, coherent, virtual machine.
In the Legion system the following apply.
Everything is an object: Objects represent all hardware and software components. Each object is an active process that responds to method invocations from other objects within the system. Legion defines an API for object interaction, but not the programming language or communication protocol.
Classes manage their instances: Every Legion object is defined and managed by its own active class object. Class objects are given system-level capabilities; they can create new instances, schedule them for execution, activate or deactivate an object, as well as provide state information to client objects.
Users can define their own classes: As in other object-oriented systems, users can override or redefine the functionality of a class. This feature allows functionality to be added or removed to meet a user's needs.
Legion core objects support the basic services needed by the metasystem. The Legion system supports the following set of core object types.
Classes and metaclasses: Classes can be considered managers and policy makers. Metaclasses are classes of classes.
Host objects: Host objects are abstractions of processing resources; they may represent a single processor or multiple hosts and processors.
Vault objects: Vault objects represent persistent storage, but only for the purpose of maintaining the state of an object, its Object Persistent Representation (OPR).
Implementation objects and caches: Implementation objects hide the storage details of object implementations and can be thought of as equivalent to executable files in UNIX. Implementation cache objects provide objects with a cache of frequently used data.
Binding agents: A binding agent maps object IDs to physical addresses. Binding agents can cache bindings and organize themselves into hierarchies and software combining trees.
Context objects and context spaces: Context objects map context names to Legion object IDs, allowing users to name objects with arbitrary-length string names. Context spaces consist of directed graphs of context objects that name and organize information.
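The core object types above can be caricatured in a few lines of code. This is only a toy rendering of the Legion concepts (class objects managing instances, binding agents mapping object IDs to addresses, context objects mapping names to IDs); in Legion these are distributed active processes, not local dictionaries.

```python
# Toy rendering of Legion's object model; everything here is illustrative.

import itertools

class ClassObject:
    """'Classes manage their instances': creation, activation, deactivation."""
    _ids = itertools.count(1)       # system-wide object ID generator (toy)

    def __init__(self):
        self.instances = {}         # object id -> currently active?

    def create_instance(self):
        oid = next(self._ids)
        self.instances[oid] = True  # created in the active state
        return oid

    def deactivate(self, oid):
        self.instances[oid] = False

class BindingAgent:
    """Maps object IDs to physical addresses (and may cache bindings)."""
    def __init__(self):
        self.bindings = {}

    def bind(self, oid, address):
        self.bindings[oid] = address

    def resolve(self, oid):
        return self.bindings[oid]

class ContextObject:
    """Maps arbitrary-length string names to Legion object IDs."""
    def __init__(self):
        self.names = {}

    def register(self, name, oid):
        self.names[name] = oid
```

A lookup then chains the pieces: a context turns a name into an object ID, and a binding agent turns the ID into an address, exactly the division of labor the core object list describes.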
CONCLUSION:
The Grid is analogous to the electricity (power) Grid, and the vision is to offer (almost) dependable, consistent, pervasive, and inexpensive access to resources irrespective of where they physically exist and from where they are accessed.
There are currently a large number of projects and a diverse range of new and emerging Grid developmental approaches being pursued. These systems range from Grid frameworks to application testbeds, and from collaborative environments to batch submission mechanisms. It is difficult to predict the future in a field such as information technology where the technological advances are moving very rapidly. Hence, it is not an easy task to forecast what will become the 'dominant' Grid approach. Windows of opportunity for ideas and products seem to open and close in the 'blink of an eye'. However, some trends are evident. One of those is growing interest in the use of Java and Web services for network computing.
BIBLIOGRAPHY
1. I. Foster and C. Kesselman (editors). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco, CA, 1999.
2. cs.mu.oz.au
3. gridbus.org
#3

INTRODUCTION
The demand for computing power continues to grow. Increased network bandwidth, more powerful computers and storage systems, and sophisticated software applications promise solutions to many business and technical computing problems. But harnessing these new abilities means dealing with the challenge of growing workload demands. Organizations face many challenges as they strive to remain competitive. Reduced computing costs, greater throughput, faster time-to-market, and improved quality and innovation are all important. Investments in hardware need to be carefully justified, and organizations must find ways to accomplish more with available resources. Flexibility is key, as enterprises need to handle dynamically changing workloads and quickly provide computing power where it is needed most. Even though the demand for computing resources is great, many existing systems are underutilized. While a few individual servers may be working at capacity, the vast majority of systems are not. As a result, many computing cycles go unused.
Grid computing enables organizations to use their distributed computing resources more efficiently and flexibly, providing more power out of existing systems and helping organizations gain a competitive business advantage. Grids enable the sharing, selection, and aggregation of a wide variety of resources, including supercomputers, storage systems, data sources, and specialized devices, that are geographically distributed and owned by different organizations, for solving large-scale computational and data-intensive problems in science, engineering, and commerce. Thus enterprises or organizations come together to share resources and skills in order to better respond to business opportunities or large-scale application processing requirements, with their cooperation supported by computer networks.
WHAT IS GRID COMPUTING
Grid computing is applying the resources of many computers in a network to a single problem at the same time, usually a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data. Grid computing is concerned with coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations. The sharing that we are concerned with is not primarily file exchange but rather direct access to computers, software, data, and other resources, as is required by a range of collaborative problem-solving and resource-brokering strategies emerging in industry, science, and engineering.
[Figure: data maintained by batch job servers: distributed processing control data; other system data (database and Internet session control, attached objects); a data dictionary (DBMS-independent logical schema, indexing metadata); and user administration and security data (user id, name, group, log file; batch/interactive modes; full and lightweight users; associated configurations; security groups and rules).]
A Grid is a collection of computing resources connected through a network that perform tasks. It appears to users as a large system, providing a single point of access to powerful distributed resources. Grid middleware aggregates these resources and provides transparent, remote, and secure access to computing power wherever and whenever it is needed; that is, grid computing aggregates resources and delivers computing power to every user in the network. Users treat the Grid as a single computational resource and can submit thousands of jobs at a time without being concerned about where they run. Grid computing is currently used in technical computing environments to provide more resources for compute-intensive tasks.
A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. Grid computing requires the use of software that can divide and farm out pieces of a program to as many as several thousand computers. Grid computing can be thought of as distributed and large-scale cluster computing and as a form of network-distributed processing. In short, a grid is a system that coordinates resources that are not subject to centralized control, using standard, open, general-purpose protocols and interfaces to deliver nontrivial qualities of service.
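The "divide and farm out" idea can be illustrated with a small sketch. Here an ordinary Python thread pool stands in for the grid's worker machines, and the function names and chunk size are illustrative assumptions, not any real grid middleware's API:

```python
from concurrent.futures import ThreadPoolExecutor

def process_piece(piece):
    # Stand-in for one unit of work farmed out to a single worker machine.
    return sum(x * x for x in piece)

def run_on_grid(data, n_workers=4, chunk_size=3):
    # Divide the problem into independent pieces...
    pieces = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # ...farm the pieces out to the available workers...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partial_results = list(pool.map(process_piece, pieces))
    # ...and combine the partial results into a single answer.
    return sum(partial_results)

total = run_on_grid(list(range(10)))
```

Real grid software does the same split/compute/combine dance, but across machines and administrative domains rather than threads in one process.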
ORIGIN OF GRID COMPUTING
In the 1980s, the National Science Foundation created the NSFnet: a communications network intended to give scientific researchers easy access to its new supercomputer centers. Very quickly, one smaller network after another linked in, and the result was the Internet. The popularity of the Internet as well as the availability of powerful computers and high-speed network technologies as low-cost commodity components is changing the way we use computers today. These technology opportunities have led to the possibility of using distributed computers as a single, unified computing resource, leading to what is popularly known as Grid computing. The term "the Grid" was coined in the mid 1990s to denote a proposed distributed computing infrastructure for advanced science and engineering.
Considerable progress has since been made on the construction of such an infrastructure, but the term "Grid" has also been conflated, at least in popular perception, to embrace everything from advanced networking to artificial intelligence.
DIFFERENT TYPES OF GRID
No two grids are alike. Several distinct types of grids are commonly used in industry today. Three of the most common are compute grids, data grids and access grids:
Compute grid - distributed compute resources consisting of desktop, server, and High Performance Computing (HPC) systems: computing resources that are managed and collectively made available to meet an organization's computing needs. Computational grids enable the sharing, selection, and aggregation of geographically distributed resources, such as computers, data sources, and scientific instruments, for solving large-scale problems in science, engineering, and commerce.

Access grid - distributed audio-visual equipment, such as cameras, microphones, speakers, and video screens, set up to provide a virtual collective presentation room.
(Figure: 204.121.50 subnet network flowchart for an Access Grid node.)
Data grid - distributed storage devices including disk and tape devices along with the necessary software to migrate data as needed.
LEVELS OF DEPLOYMENT
Grid computing can be divided into three logical levels of deployment: cluster grids, campus grids, and global grids, as shown in the figure.
Cluster Grids are the simplest, consisting of one or more systems interconnected through a network working together to provide a single point of access to users in a single department. Cluster Grids may contain distributed workstations and servers, as well as centralized resources in a datacenter environment. Typically owned and used by a single project or department, Cluster Grids support both high throughput and high performance jobs. Common examples of the Cluster Grid architecture include compute farms, groups of multi-processor HPC systems etc.
Campus Grids: As capacity needs increase, multiple cluster grids can be combined into a campus grid which enables multiple departments within an organization to share computing resources. Organizations can use campus grids to handle a wide variety of tasks. Campus grids typically contain resources from multiple administrative domains, but are located in the same geographic location.
Global grids are a collection of campus grids that cross organizational boundaries to create very large virtual systems. Users have access to compute power that far exceeds the resources available within their own organization. Computing resources may be geographically dispersed, connecting sites around the globe. Designed to support and address the needs of multiple sites and organizations sharing resources, global grids provide the power of distributed resources to users anywhere in the world.
THE GRID ARCHITECTURE
> Three tier system architecture
> Layered architecture
THE THREE TIER SYSTEM ARCHITECTURE
It is the architecture for a typical cluster grid. Cluster grids employ standard three-tier system architecture, as shown in the figure. The architecture includes front-end access nodes, middle-tier management nodes, and back-end compute nodes. Nodes in the access tier are used to submit, control, and monitor jobs; compute-tier nodes are used to execute jobs; and management-tier nodes run the software needed to implement the cluster grid.
Access Tier
The access tier provides access and authentication services to cluster grid users. Alternatively, web-based services can be provided to permit easy, or tightly controlled, access to the facility.
Protocols can also be implemented to allow external services like accounting or analysis programs. Any access method should, of course, be able to integrate with common authentication schemes such as NIS, LDAP, and Kerberos. Furthermore, facilities for mapping external user identities to a local identity can be considered.
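Mapping external user identities to local identities is often implemented as a simple lookup table (Globus, for example, uses a "grid-mapfile" for this purpose). The sketch below is a hypothetical minimal version; the distinguished names and account names are made up for illustration:

```python
# Hypothetical grid-mapfile contents: global (certificate) identity -> local account.
GRID_MAP = {
    "/O=Grid/OU=Physics/CN=Alice Smith": "asmith",
    "/O=Grid/OU=Biology/CN=Bob Jones": "bjones",
}

def map_identity(distinguished_name):
    """Map an authenticated global identity to a local user account."""
    try:
        return GRID_MAP[distinguished_name]
    except KeyError:
        # Authenticated, but no local account has been granted on this site.
        raise PermissionError(f"no local account for {distinguished_name}")
```

In a real deployment the table would be managed by site administrators and the global identity would come from a verified certificate, not a plain string.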
Management Tier
This middle tier includes one or more servers that run the server elements of client-server software such as Distributed Resource Management (DRM) software, hardware diagnosis software, and system performance monitors. Additional duties of servers in this tier may include:
> File server - provide NFS service to other nodes in the Cluster grid.
> License key server - manage software license keys for the cluster grid.
> Software provisioning server - manage operating system and application software versioning and patch application on other nodes in the cluster grid.
The size and number of servers in this tier will vary depending on the type and level of services to be provided. For small implementations with limited functionality, a single server may be chosen to host all management services for ease of administration. Alternatively, these functions may be provided by multiple servers for greater scalability and flexibility.
Compute Tier
This tier supplies the compute power for the cluster grid. Jobs submitted through the access tier are executed on nodes in this tier. Nodes in this tier run the client side of the DRM software, the daemons associated with message-passing environments, and any agents for system health monitoring. The compute tier communicates with the management tier, receiving jobs to run and reporting job completion status and accounting details.
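The flow between the tiers can be sketched as a toy job queue: the access tier submits jobs, the management tier queues them, and compute nodes pull jobs, run them, and report completion. The class and method names are illustrative only, not any real DRM's API:

```python
from collections import deque

class ManagementTier:
    """Toy DRM server: queues submitted jobs and records completions."""
    def __init__(self):
        self.queue = deque()
        self.completed = {}

    def submit(self, job_id, task):     # called from the access tier
        self.queue.append((job_id, task))

    def next_job(self):                 # polled by compute-tier nodes
        return self.queue.popleft() if self.queue else None

    def report(self, job_id, result):   # completion status / accounting
        self.completed[job_id] = result

class ComputeNode:
    """Toy compute-tier node running the client side of the DRM."""
    def __init__(self, drm):
        self.drm = drm

    def work(self):
        # Pull jobs until the queue is drained, reporting each result back.
        while (job := self.drm.next_job()) is not None:
            job_id, task = job
            self.drm.report(job_id, task())

drm = ManagementTier()
drm.submit("job-1", lambda: 2 + 2)
drm.submit("job-2", lambda: "done")
ComputeNode(drm).work()
```

A real DRM adds scheduling policy, priorities, and fault handling, but the submit/dispatch/report cycle is the same.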
THE LAYERED ARCHITECTURE
Our goal in describing the grid architecture is to identify requirements for general classes of components. The grid architecture identifies fundamental system components, specifies the purpose and function of these components, and indicates how these components interact with one another. In defining a grid architecture, interoperability is the central issue to be addressed. In a networked environment, interoperability means common protocols. A protocol definition specifies how distributed system elements interact with one another in order to achieve a specified behavior, and the structure of the information exchanged during this interaction.
The layered grid architecture is a protocol architecture, with protocols defining the basic mechanisms by which users and resources establish, manage, and exploit sharing relationships. A standards-based open architecture facilitates extensibility, interoperability, portability, and code sharing; standard protocols make it easy to define standard services that provide enhanced capabilities. Standard abstractions, APIs, and SDKs can accelerate code development, enable code sharing, and enhance application portability. APIs and SDKs are an adjunct to, not an alternative to, protocols and services: protocols and services come first, and APIs and SDKs second.
Our architecture organizes components into layers, as shown in the figure. Components within each layer share common characteristics but can build on capabilities and behaviors provided by any lower layer. The architecture consists of resource and connectivity protocols, which facilitate the sharing of individual resources.
THE FABRIC LAYER
The grid fabric layer provides the resources to which shared access is mediated by grid protocols. Fabric components implement the local, resource-specific operations that occur on specific resources as a result of sharing operations at higher levels. There is thus a tight interdependence between the functions implemented at the Fabric level and the sharing operations supported. Protocols defined above the Fabric layer can then be used to construct a wide range of global services and application-specific behaviors at the collective layer.
THE CONNECTIVITY LAYER
The connectivity layer defines core communication and authentication protocols required for grid-specific network transactions. Communication protocols enable the exchange of data between fabric layer resources. Authentication protocols provide cryptographically secure mechanisms for verifying the identity of users and resources. Communication requirements include transport, routing, and naming. With respect to the security aspects of the connectivity layer, we observe that the complexity of the security problem makes it important that any solutions be based on existing standards whenever possible. Users must be able to "log on" (authenticate) just once and then have access to multiple grid resources defined in the fabric layer, without further user intervention. The standard Internet protocols are used for communication: IP and ICMP at the internet layer, TCP and UDP at the transport layer, and DNS, OSPF, RSVP, and others at the application layer. Grid Security Infrastructure (GSI) protocols are used for authentication, communication protection, and authorization. GSI builds on and extends the Transport Layer Security (TLS) protocols (29) to address most of the issues listed above.
THE RESOURCE LAYER
The resource layer builds on connectivity layer communication and authentication protocols to define protocols for the secure negotiation, initiation, monitoring, control, accounting, and payment of sharing operations on individual resources. Resource layer implementations of these protocols call Fabric layer functions to access and control local resources. Resource layer protocols are concerned entirely with individual resources and hence ignore issues of global state and atomic actions across distributed collections. Two primary classes of resource layer protocols can be distinguished:
Information protocols are used to obtain information about the structure and state of a resource, such as its configuration, current load, and usage policy.
Management protocols are used to negotiate access to a shared resource.
These protocols must be chosen so as to capture the fundamental mechanisms of sharing across many different resource types.
> A grid resource information protocol and associated information model.
> An associated soft-state resource registration protocol (GRRP), used to register resources with grid index servers.
> The grid resource access and management (GRAM) protocol is used for allocation of computational resources and for monitoring and control of computation on those resources.
> An extended version of the File Transfer Protocol, GridFTP, is a management protocol for data access.
> LDAP is used as a catalog access protocol.
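Soft-state registration, mentioned in the list above, means a resource stays in the directory only while it keeps refreshing its entry; stale entries expire on their own. A minimal sketch of the idea (the class name, TTL, and timestamps are illustrative assumptions):

```python
import time

class SoftStateDirectory:
    """Entries expire unless periodically re-registered (soft state)."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}   # resource name -> time of last registration

    def register(self, resource, now=None):
        # A resource (re-)announces itself; the timestamp is refreshed.
        self.entries[resource] = time.time() if now is None else now

    def live_resources(self, now=None):
        # Only resources that refreshed within the TTL are considered alive.
        now = time.time() if now is None else now
        return [r for r, t in self.entries.items() if now - t <= self.ttl]

d = SoftStateDirectory(ttl_seconds=30)
d.register("compute-node-1", now=0)
d.register("compute-node-2", now=25)
# At t=40, node 1 has not refreshed within the TTL and silently drops out.
alive = d.live_resources(now=40)
```

The attraction of soft state is that failed or withdrawn resources disappear from the directory without any explicit deregistration message.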
THE COLLECTIVE LAYER
The collective layer is global in nature and contains protocols and services that are not associated with any one specific resource but rather capture interactions across collections of resources. Because collective components build on the narrow resource and connectivity layers, they can implement a wide variety of sharing behaviors without placing new requirements on the resources being shared.
Directory services allow participants to discover the existence and/or properties of resources; the resource-level GRRP and GRIP protocols are used to construct directories. Co-allocation, scheduling, and brokering services allow participants to request the allocation of one or more resources for a specific purpose and the scheduling of tasks on the appropriate resources.
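The brokering step can be sketched as a simple matchmaking function: given a directory of resources and a job's requirement, pick the best fit. The free-CPU model and names here are illustrative assumptions, not a real broker's interface:

```python
def broker(resources, job_cpus):
    """Pick the resource with the most free CPUs that can fit the job.

    `resources` maps resource name -> free CPU count (a toy resource model).
    Returns None when no resource can satisfy the request.
    """
    candidates = {r: free for r, free in resources.items() if free >= job_cpus}
    if not candidates:
        return None
    # Among feasible resources, prefer the least loaded (most free CPUs).
    return max(candidates, key=candidates.get)

choice = broker({"clusterA": 16, "clusterB": 64, "clusterC": 8}, job_cpus=32)
```

Production brokers weigh many more factors (queue lengths, data locality, cost, policy), but the select-from-a-directory pattern is the same.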
Monitoring and diagnostics services support the monitoring of resources for failure, adversarial attack, overload, and so forth.
Data replication services support the management of storage resources to maximize data access performance with respect to metrics such as response time, reliability, and cost.
Grid-enabled programming systems enable familiar programming models to be used in grid environments, using various grid services to address resource discovery, security, resource allocation, and other concerns.
Workload management systems and collaboration frameworks provide for the description, use, and management of multi-step, asynchronous, multi-component workflows.
Software discovery services discover and select the best software implementation and execution platform based on the parameters of the problem being solved.
Community authorization servers enforce community policies governing resource access, generating capabilities that community members can use to access community resources.
Community accounting and payment services gather resource usage information for the purpose of accounting, payment, and/or limiting of resource usage by community members.
Collaboration services support the coordinated exchange of information within potentially large user communities, whether synchronously or asynchronously.

THE APPLICATION LAYER
The application layer comprises the user applications that operate within a virtual organization environment. Applications are constructed in terms of services defined at any layer. At each layer, we have well-defined protocols that provide access to some useful service: resource management, data access, resource discovery, and so forth. At each layer, APIs may also be defined whose implementations (ideally provided by third-party SDKs) exchange protocol messages with the appropriate service(s) to perform desired actions.
The layered grid architecture is related to the Internet protocol architecture: because the Internet protocol architecture extends from network to application, there is a mapping from grid layers into Internet layers.
GRID & WEB
The grid is a next-generation Internet. "The grid" is not an alternative to "the Internet": it is rather a set of additional protocols and services that build on Internet protocols and services to support the creation and use of computation- and data-enriched environments; it is a layer of software and services that sits on top of operating systems and links different systems together, allowing them to share resources. Any resource that is "on the grid" is also, by definition, "on the Net".
The Web is not (yet) a grid: its open, general-purpose protocols support access to distributed resources but not the coordinated use of those resources to deliver interesting qualities of service. Grid computing has been proclaimed as the successor to the Web. Current Internet technologies address communication and information exchange among computers but do not provide integrated approaches to the coordinated use of resources at multiple sites for computation. By adding the ability to extensively share computing power, applications, and storage to the Web's current ability to share text and multimedia files, problems that require a lot of computing resources can be resolved, devices can work past their own limits, and collaboration can become more intense. Let's investigate each of these more closely. The Web has provided a good test bed for grid computing, both through its successes and its shortcomings. For grid computing to prosper, it will need to solve problems of standards, property rights, access and authorization, and modularization and dispatching.
> Standards - there are no clear standards for elements such as security and quality of service.
> Property rights - most software used today is proprietary, and a significant amount of code being written for grid computing is owned by specific businesses. Beyond fair and legal return for the property, proprietary software may stunt the growth and development of grid computing or even determine its future direction.
> Access and authorization - Are all resources available to everyone? Most people would not mind if others used spare cycles on their systems in exchange for similar consideration, but content and personal intellectual property (such as term papers and proprietary bidding software) might be more problematic. In addition, security and property rights must be protected.
> Dispatching and modularization - How are resources shared? What is the priority? Where are assignments dispatched? How does one deal with latency? These are all nontrivial concerns, and the answers are not clear. A more difficult problem is modularizing code to distribute processing.
The path to the future for grid computing will vary depending upon how these problems are solved. Internet traffic could grow eight times more than forecast over the next decade because of commercial adoption of grid computing.
BENEFITS OF GRID COMPUTING
Grid computing is a model for allowing companies to use a large number of computing resources on demand, no matter where they are located. Grid computing can provide many benefits not available with traditional computing models:
> Better utilization of resources - grid computing uses distributed resources more efficiently and delivers more usable computing power. This can decrease time-to-market, allow for innovation, or enable additional testing and simulation for improved product quality. By employing existing resources, grid computing helps protect IT investments, containing costs while providing more capacity.
> Increased user productivity - by providing transparent access to resources, work can be completed more quickly. Users gain additional productivity as they can focus on design and development rather than wasting valuable time hunting for resources and manually scheduling and managing large numbers of jobs.
> Scalability - Grids can grow seamlessly over time, allowing many thousands of processors to be integrated into one cluster. Components can be updated independently and additional resources can be added as needed, reducing large one-time expenses.
> Flexibility - Grid computing provides computing power where it is needed most, helping to better meet dynamically changing workloads. Grids can contain heterogeneous compute nodes, allowing resources to be added and removed as needs dictate.
> Uniformity - All resources and data should appear to come from the same source even when they don't. Beckhardt refers back to his electrical utility analogy, saying, "When you plug into the wall, you don't know if the power is coming from PG & E or Con Edison."
> Transparency - Researchers should be able to manipulate all of the data available on the grid regardless of the source. In other words, data in different formats and file types should be integrated into a virtual database.
> Reliability - The grid should almost always be available. Think fault tolerance and redundancy. Achieving this requires robust storage, multiple power sources,
and sophisticated networking.
> Pervasiveness - Resources connected to the grid should be available to as many people as possible - a tremendous challenge given the range of platforms and operating systems employed by intended BioGrid users.
> Security - If the databases contain intellectual property, they must be protected. Beckhardt says that a tendency among early grid efforts to neglect data security "has to be resolved before (grid computing) becomes a commercial reality".
APPLICATIONS
The concept of grid computing started as a project to link geographically dispersed supercomputers, but it has now grown far beyond its original intent. A grid platform could be used for many different types of applications. The applications are categorized into four main classes: collaborative engineering, data exploration, high-throughput computing, and distributed supercomputing.
Distributed supercomputing
Distributed supercomputing applications have large computational requirements that can be met only by simultaneous execution across multiple supercomputers. Distributed computing is a science that solves a large problem by giving small parts of the problem to many computers to solve and then combining the solutions for the parts into a solution for the problem.
Recent distributed computing projects have been designed to use the computers of hundreds of thousands of volunteers all over the world, via the Internet. These projects are so large, and require so much computing power to solve, that they would be impossible for any one computer or person to solve in a reasonable amount of time.
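The split-solve-combine pattern described above can be shown concretely with a toy problem: counting primes below a limit by partitioning the search range into independent work units. Sequential loops stand in for the volunteers' computers; the function names are illustrative only:

```python
def count_primes_in_range(start, stop):
    """One volunteer's work unit: count primes in [start, stop)."""
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(start, stop) if is_prime(n))

def solve_distributed(limit, n_workers):
    # Split the big problem into independent sub-ranges...
    step = limit // n_workers
    bounds = [(i * step, limit if i == n_workers - 1 else (i + 1) * step)
              for i in range(n_workers)]
    # ...solve each part (in reality, on a different volunteer's machine)...
    partials = [count_primes_in_range(a, b) for a, b in bounds]
    # ...and combine the partial answers into the final result.
    return sum(partials)
```

Because the sub-ranges are independent, the work units can run in any order, on any machine, and at any time, which is exactly what makes volunteer projects feasible.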
Teleimmersive applications
Teleimmersive applications combine simulation, virtual reality, and collaborative environments to provide a shared, virtual design space. Teleimmersive applications can be extremely demanding, requiring large amounts of computation and stringent and diverse network support.
Smart instruments
Computational grids can enhance the power of scientific instruments by providing access to data archives and on-line processing capabilities.
Data intensive applications
Data mining is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. This encompasses a number of different technical approaches, such as clustering, data summarization, learning classification rules, finding dependency networks, analyzing changes, and detecting anomalies. Data analysis tends to work from the data up, and the best techniques are those developed with an orientation towards large volumes of data, making use of as much of the collected data as possible to arrive at reliable conclusions and decisions. The analysis process starts with a set of data and uses a methodology to develop an optimal representation of the structure of the data, during which time knowledge is acquired. Once knowledge has been acquired, it can be extended to larger sets of data, working on the assumption that the larger data set has a structure similar to the sample data.
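That sample-then-extend workflow can be sketched with a tiny one-dimensional clustering example: fit centroids on a small sample, then assign points from a larger data set to the learned clusters. Plain Python is used here as a sketch; a real system would use a library and many dimensions, and the data values are invented for illustration:

```python
def kmeans_1d(sample, k, iters=20):
    """Learn k centroids from a small sample (1-D k-means)."""
    # Spread the initial centroids across the sorted sample.
    centroids = sorted(sample)[::max(1, len(sample) // k)][:k]
    for _ in range(iters):
        buckets = [[] for _ in centroids]
        for x in sample:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(x - centroids[i]))
            buckets[nearest].append(x)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(b) / len(b) if b else c
                     for b, c in zip(buckets, centroids)]
    return centroids

def assign(data, centroids):
    """Extend the learned structure to a larger data set."""
    return [min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            for x in data]

sample = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centroids = kmeans_1d(sample, k=2)
labels = assign([0.9, 1.1, 9.9, 10.2], centroids)
```

The expensive learning step runs on the sample; the cheap assignment step then scales to the full data set, mirroring the assumption that the larger data shares the sample's structure.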
Collaborative design
When collaboration is conducted in a distributed and indirect way, information such as documented results of design decisions and ideas can be shared easily among participants of a working team.
In design behavior, conflicts and argument are regular. Hence designers need to dynamically exchange their opinions both formally and informally using multi media integrated into the system. Collaborative software can help to manage and record information of every process of the product design. Computer based design environment integrates the various knowledge and experience together. Every team member is able to contribute ideas, and the team can jointly explore design concepts early in the project.
CONCLUSION
Grid computing means applying the resources of many computers in a network to a single problem at the same time, usually a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data. Grid computing uses software to divide and farm out pieces of a program to as many as several thousand computers. A number of corporations, professional groups, and university consortia have developed frameworks and software for managing grid computing projects. Grid computing provides clustering of remotely distributed computing resources. The principal focus of grid computing to date has been on maximizing the use of available processor resources for compute-intensive applications, and it complements related technologies such as storage virtualization and server virtualization.
CONTENTS
1. Introduction
2. What is grid computing
3. Origin of grid computing
4. Different types of grid
5. Levels of deployment
6. The Grid Architecture
7. The Three Tier System Architecture
8. The Layered Architecture
9. Grid and Web
10. Benefits of Grid Computing
11. Applications
12. Conclusion
#4
Abstract
Grid computing, emerging as a new paradigm for next-generation computing, enables the sharing, selection, and aggregation of geographically distributed heterogeneous resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, and their availability, usage, and cost policies vary depending on the particular user, time, priorities, and goals. A computational economy enables the regulation of supply and demand for resources.
It provides an incentive for resource owners to participate in the Grid, and motivates users to trade off between deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economic-based systems for wide-area parallel and distributed computing by developing scheduling strategies, algorithms, and systems based on users' quality-of-service requirements. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep (task and data parallel) applications.
This paper focuses on introduction, grid definition and its evolution. It covers about grid characteristics, types of grids and an example describing a community grid model. It gives an overview of grid tools, various components, advantages followed by conclusion and bibliography.
Abstract
Data warehousing provides architectures and tools for business executives to systematically organize, understand, and use their data to make strategic decisions. Data warehouses are arguably among the best resources a modern company owns. As enterprises operate day in, day out, the data warehouse may be updated with a myriad of business process and transactional information: orders, invoices, customers, shipments, and other data together form the corporate operations archive.
As the volume of data in the warehouse continues to grow, so does the time it takes to mine (extract) the required data; individual queries, as well as data loading, can consume enormous amounts of processing power and time, impeding other data warehouse activity. Customers experience slow response times while the information technology (IT) budget shrinks: this is the data warehouse dilemma.
Ideal solutions, which perform all the work speedily and without cost, are obviously impractical to consider. The near-ideal solution would (1) help reduce load-process time, and (2) optimize available resources for the analysis. To achieve these two tasks, one would normally need to invest in buying additional compute resources.

There is a solution, however, enabling the ability to gain compute resources without purchasing additional hardware: Grid computing.
Grid Computing provides a novel approach to harnessing distributed resources, including applications, computing platforms or databases and file systems. Applying Grid computing can drive significant benefits to the business by improving information access and responsiveness.
The Grid-enabled application layer dispatches jobs in parallel to multiple compute nodes; this parallelization of previously serial tasks across multiple CPUs is where Grid gets its power. Grids can benefit from sharing existing resources and adding dedicated resources such as clusters to improve throughput.
Finally, a Grid computing solution enables the ability to gain compute resources without purchasing additional hardware.
#5
PAPER PRESENTATION ON GRID COMPUTING

Presented by:
1) KSHEERSAGAR GEETA 2) NALAWADE SWATI
Shivnagar Vidya Prasarak Mandal's College Of Engineering
Malegaon (Bk.)

ABSTRACT
Grid computing can mean different things to different individuals. The grand vision is often presented as an analogy to power grids, where users (or electrical appliances) get access to electricity through wall sockets with no care or consideration for where or how the electricity is actually generated. In this view of grid computing, computing becomes pervasive, and individual users (or client applications) gain access to computing resources (processors, storage, data, applications, and so on) as needed, with little or no knowledge of where those resources are located or what the underlying technologies, hardware, operating system, and so on are.

Therefore, grid computing can be seen as a journey along a path of integrating various technologies and solutions that move us closer to the final goal. Its key values are in the underlying distributed computing infrastructure technologies that are evolving in support of cross-organizational application and resource sharing; in a word, virtualization: virtualization across technologies, platforms, and organizations. This kind of virtualization is only achievable through the use of open standards. Open standards help ensure that applications can transparently take advantage of whatever appropriate resources can be made available to them.
CONTENTS:
1) Background
2) What is it?
3) Who is doing it?
4) How does it work?
5) Why is it significant?
6) Where is it going?
7) What are the downsides?
8) What are the applications?
9) Conclusion
10) References
Background
Increased network bandwidth, more powerful computers, and the acceptance of the Internet have driven the on-going demand for new and better ways to compute. Commercial enterprises, academic institutions, and research organizations continue to take advantage of these advancements, and constantly seek new technologies and practices that enable them to seek new ways to conduct business. However, many challenges remain. Increasing pressure on development and research costs, faster time-to-market, greater throughput, and improved quality and innovation are always foremost in the minds of administrators - while computational needs are outpacing the ability of organizations to deploy sufficient resources to meet growing workload demands.
On top of these challenges is the need to handle dynamically changing workloads. The truth is, flexibility is key. In a world with rapidly changing markets, both research institutions and enterprises need to quickly provide compute power where it is needed most. Indeed, if systems could be dynamically created when they are needed, teams could harness these resources to increase innovation and better achieve their objectives.
What is it

Computing grids are conceptually not unlike electrical grids. In an electrical grid, wall outlets allow us to link to an infrastructure of resources that generate, distribute, and bill for electricity. When you connect to the electrical grid, you don't need to know where the power plant is or how the current gets to you. Grid computing uses middleware to coordinate disparate IT resources across a network, allowing them to function as a virtual whole. The goal of a computing grid, like that of the electrical grid, is to provide users with access to the resources they need, when they need them.


Grids address two distinct but related goals: providing remote access to IT assets, and aggregating processing power.
The most obvious resource included in a grid is a processor, but grids also encompass sensors, data-storage systems, applications, and other resources. One of the first commonly known grid initiatives was the SETI@home project, which solicited several million volunteers to download a screensaver that used idle processor capacity to analyze data in the search for extraterrestrial life. In a more recent example, the Telescience Project provides remote access to an extremely powerful electron microscope at the National Center for Microscopy and Imaging Research in San Diego. Users of the grid can remotely operate the microscope, allowing new levels of access to the instrument and its capabilities.
Who is doing it

Many grids are appearing in the sciences, in fields such as chemistry, physics, and genetics, and cryptologists and mathematicians have also begun working with grid computing. Grid technology has the potential to significantly impact other areas of study with heavy computational requirements, such as urban planning. Another important area for the technology is animation, which requires massive amounts of computational power and is a common tool in a growing number of disciplines. By making resources available to students, these communities are able to effectively model authentic disciplinary practices.
How does it work

Grids use a layer of middleware to communicate with and manipulate heterogeneous hardware and data sets. In some fields (astronomy, for example), hardware cannot reasonably be moved and is prohibitively expensive to replicate on other sites. In other instances, databases vital to research projects cannot be duplicated and transferred to other sites. Grids overcome these logistical obstacles and open the tools of research to distant faculty and students.

A grid might coordinate scientific instruments in one country with a database in another and processors in a third. From a user's perspective, these resources function as a single system; differences in platform and location become invisible. On a typical college or university campus, many computers sit idle much of the time.
A grid can provide significant processing power for users with extraordinary needs. Animation software, for instance, which is used by students in the arts, architecture, and other departments, eats up vast amounts of processor capacity.
An industrial design class might use resource-intensive software to render highly detailed three-dimensional images. In both cases, a campus grid slashes the amount of time it takes students to work with
these applications. All of this happens not from additional capacity but through the efficient use of existing power.
Grid computing operates on these basic technology principles:
• Standardization: IT departments have enjoyed much greater interoperability and reduced systems management overhead by standardizing operating systems, server and storage hardware, middleware components, and network components in their procurement activities. This helps to reduce operational complexity in the data center by simplifying application deployment, configuration, and integration.
• Virtualization: virtualization abstracts underlying IT resources, enabling much greater flexibility in how they are used. Virtualized IT resources mean that applications are not tied to specific server, storage, and network components; applications are able to use any virtualized IT resource. Virtualization is accomplished through a sophisticated software layer that hides the underlying complexity of IT resources and presents a simplified, coherent interface to be used by applications or other IT resources.
• On-demand Provisioning: IT resources must be easily provisioned, meaning allocated, configured, and maintained by grid management tools. As different parts of the system require additional computing power, such as when many new users are added, IT professionals need the ability to quickly and accurately establish user accounts and security privileges and allocate storage and computing capacity. In grid computing, powerful provisioning and resource management software determines how to meet the specific needs of users while optimizing performance of the system as a whole.
• Automation: virtualization and provisioning can only be accomplished with large-scale automation of IT operations such as system installation, patching, server cloning, workload management, user account creation, and so on. In years past, IT staff created some of this automation through custom programs and scripts, but many have discovered that this does not scale effectively. Out-of-the-box management automation from system providers such as Oracle can significantly boost productivity of system administrators.
• Real-time and Predictive Monitoring: with the growing scale and complexity of data center implementations, IT departments can no longer afford to work reactively to potential problems as they arise. IT professionals need increasingly sophisticated tools to monitor a vast number of systems in real time and predict problems before they occur. Grid computing relies on policy-based monitoring and management of quality-of-service thresholds and top-down applications management. This enables IT staff to quickly identify the root cause of a problem or potential problem, from the lowest-level hardware issues through the database, middleware, and user interface tiers.
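The monitoring principle above can be illustrated with a minimal sketch: a function compares reported metrics against policy thresholds and flags breaches. The metric names and threshold values are hypothetical examples, and real grid monitoring suites are far more sophisticated.

```python
# Minimal policy-based monitoring sketch. Metric names and thresholds
# are hypothetical, not taken from any real product.
POLICIES = {
    "cpu_load": 0.85,     # alert when CPU utilization exceeds 85%
    "disk_used": 0.90,    # alert when storage utilization exceeds 90%
    "queue_depth": 100,   # alert when the job queue exceeds 100 entries
}

def check_thresholds(metrics, policies=POLICIES):
    """Return the names of metrics that have crossed their policy threshold."""
    return [name for name, value in metrics.items()
            if name in policies and value > policies[name]]

print(check_thresholds({"cpu_load": 0.40, "disk_used": 0.55, "queue_depth": 12}))
# -> []
print(check_thresholds({"cpu_load": 0.95, "disk_used": 0.97, "queue_depth": 12}))
# -> ['cpu_load', 'disk_used']
```

A production monitor would additionally trend metrics over time to predict breaches before they happen, as the text describes.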
Why is it significant
Grids make research projects possible that formerly were impractical or unfeasible due to the physical location of vital resources.
Using a grid, researchers in Great Britain, for example, can conduct research that relies on databases across Europe, instrumentation in Japan, and computational power in the United States. Making resources available in this way exposes students to the tools of the profession, facilitating new possibilities for research and instruction, particularly at the undergraduate level.
Although speeds and capacities of processors continue to increase, resource-intensive applications are proliferating as well. At many institutions, certain campus users face ongoing shortages of computational power, even as large numbers of computers are underused. With grids, programs previously hindered by constraints on computing power become possible.
Grid computing appears to be a promising trend for three reasons:
1) its ability to make more cost-effective use of a given amount of computer resources;
2) its capacity to solve problems that can't be approached without an enormous amount of computing power; and
3) its suggestion that the resources of many computers can be cooperatively, and perhaps synergistically, harnessed and managed in collaboration toward a common objective. In some grid computing systems, the computers may collaborate rather than being directed by one managing computer. One likely area for the use of grid computing will be pervasive computing applications: those in which computers pervade our environment without our necessary awareness.
What are the downsides

Being able to access distant IT assets, and have them function seamlessly with tools on different platforms, can be a boon to researchers, but it presents real security concerns to organizations responsible for those resources. An institution that makes its IT assets available to researchers or students on other campuses and in other countries must be confident that its involvement does not expose those assets to unnecessary risks. Similarly, directors of research projects will be reluctant to take advantage of the opportunities of a grid without assurances that the integrity of the project, its data, and its participants will be protected.
Another challenge facing grids is the complexity in building middleware structures that can knit together collections of resources to work as a unit across network connections that often span oceans and continents. Scheduling the availability of IT resources connected to a grid can also present new challenges to organizations that manage those resources. Increasing standardization of protocols addresses some of the difficulty in creating smoothly functioning grids, but, by their nature, grids that can provide unprecedented access to facilities and tools involve a high level of complexity.
Where is it going
Because the number of functioning grids is relatively small, it may take time for the higher education community to capitalize on the opportunities that grids can provide and to assess the feasibility of such projects.
As the number and capacity of high-speed networks increase, however, particularly those catering to the research community and higher education, new opportunities will arise to combine IT assets in ways that expose students to the tools and applications relevant to their studies and to dramatically reduce the amount of time required to process data-intensive jobs. Further, as grids become more widespread and easier to use, increasing numbers and kinds of IT resources will be included on grids. We may also start to see more grid tie-ins for desktop applications. While there are obvious advantages to solving a complex genetic problem using grid computing, being able to harness spare computing cycles to manipulate an image in Photoshop or create a virtual world in a simulation may be some of the first implementations of grids.
What are the applications
Higher education stands to reap significant benefits from grid computing by creating environments that expose students to the tools of the trade in a wide range of disciplines. Rather than using mock or historical data from an observatory in South America, for example, a grid could let students on other continents actually use those facilities and collect their own data. Learning experiences become far richer, providing opportunities that otherwise would be impossible or would require travel. The access that grid computing offers to particular resources can allow institutions to deepen, and in some cases broaden, the scope of their educational programs.
Grid computing encourages partnerships among higher education institutions and research centers. Because they bring together unique tools in novel groupings, grids have the potential to incorporate technology into disciplines with traditionally lower involvement with IT, including the humanities, social sciences, and the arts. Grids can leverage previous investments in hardware and infrastructure to provide processing power and other technology capabilities to campus constituents who need them. This reallocation of institutional resources is especially beneficial for applications with high demands for processing and storage, such as modeling, animations, digital video production, or biomedical studies.
The applications of grid computing are:
1) Information technology:
Improves asset optimization and responds quickly to varying demands.
2) Business value:
Improves operating efficiency, reduces capital expenses, and accelerates business processes.
3) Learning and teaching:
Higher education stands to reap significant benefits from grid computing by creating environments that expose students to the tools of the trade in a wide range of disciplines.

Thus, grid computing in 2003 introduced a state-of-the-art methodology and a set of new database and middleware capabilities that have helped evolve the way IT departments operate.
CONCLUSION
The Grid -- the IT infrastructure of the future -- promises to transform computation, communication, and collaboration. Over time, these will be seen in the context of grids -- academic grids, enterprise grids, research grids, entertainment grids, community grids, and so on. Grids will become service-driven with lightweight clients accessing computing resources over the Internet. Datacenters will be safe, reliable, and available from anywhere in the world. Applications will be part of a wide spectrum of network-delivered services that include compute cycles, data processing tools, accounting and monitoring, and more.
• Grid computing and related technologies will only be adopted by commercial users if they are confident that their data and privacy can be adequately protected and that the Grid will be at least as scalable, robust, and reliable as their own in-house IT systems. Thus, new Internet technologies and standards such as IPv6 take on even greater importance. Needless to say, users of the Grid want easy, affordable, ubiquitous, broadband access to the Internet.
• Similar to the public policy issues raised by the development of electronic commerce and electronic government, Grids raise a number of public policy issues: data privacy, information and cyber security, liability, antitrust, intellectual property, access, taxes, tariffs, as well as usage for education, government, and regional development.
• WITSA continues to address public policy issues likely to affect the future development and deployment of the Grid. WITSA works with governments and international organizations to ensure that appropriate government policies address legitimate concerns of users in such a way as to facilitate rather than hinder technological developments of the Grid.
References
1) educause.edu/eli
2) http://adarshpatilnewsite/research.htm
3) Grid Computing: A Practical Guide to Technology and Applications
4) http://sungrid
5) http://ibmredbook
Reply
#6
Abstract

Mankind is right in the middle of another evolutionary technological transition which will once more change the way we do things. And, you guessed right, it has to do with the Internet. It's called "The Grid", which means the infrastructure for the Advanced Web: for computing, collaboration, and communication.

Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources.

This paper aims to present the state-of-the-art concepts of Grid computing. A set of general principles, services, benefits, and an architecture that can be followed in Grid construction are given. One Grid application project, Legion, is taken up. We conclude with future trends in this yet-to-be-conquered technology.
Reply
#7
Thanks, I really feel burden-free now.
What types of projects are developed in the grid computing field?
Which part (area) is best to select in grid computing for further work?
Reply
#8
[attachment=5127]
Grid Computing seminar report

By
Anjali P
(Y2m014)
S4 MCA
Department of Computer Science and Engineering
National Institute of Technology
Calicut – 673601



ABSTRACT

Network speed, storage capacity, and processor speed are increasing every year, but at the same time a large portion of the computing capacity is left unused. So we go for grid computing. Grid computing is an emerging technology, where we can unite a pool of servers, PCs, storage systems, and networks into one large system to deliver nontrivial qualities of service. For an end user or application it looks like one big virtual computing system. Grid computing is a network of computation. Grid technology allows organizations to use numerous computers to solve problems by sharing computing resources. These resources could be heterogeneous or distributed across the globe, so we need open standards and protocols. The architecture for grid computing is defined in the Open Grid Services Architecture (OGSA), developed through the Global Grid Forum. OGSA defines what grid services are and the overall structure and services to be provided in grid environments. The key components in grid computing are the portal or user interface, security, information service, scheduler, data management, and job and resource management. Industries using grid computing now include automotive and aerospace, financial services, life sciences, government organizations such as military departments, and advanced data- and compute-intensive research.
Reply
#9

[attachment=6558]
GRID COMPUTING


SEMINAR ON GRID COMPUTING
Grid Computing is a technique in which the idle systems in the network and their “wasted” CPU cycles can be efficiently used, by uniting pools of servers, storage systems, and networks into a single large virtual system for dynamic resource sharing at runtime.
- High-performance computer clusters.
- Share application, data, and computing resources.


IMPORTANCE OF GRID COMPUTING
Flexible, Secure, Coordinated resource sharing.
Virtualization of distributed computing resources.
Give worldwide access to a network of distributed resources.


GRID REQUIREMENTS
Security
Resource Management
Data Management
Information Services
Fault Detection
Portability


TYPES OF GRID
Computational Grid
-computing power
Scavenging Grid
-desktop machines
Data Grid
-data access across multiple organizations


ARCHITECTURAL OVERVIEW
- A grid’s computers can be thousands of miles apart and connected with Internet networking technologies.
- Grids can share processors and drive space.


Fabric : Provides resources to which shared access is mediated by grid protocols.
Connectivity : Provides authentication solutions.
Resources : Builds on Connectivity layer communication and authentication protocols to enable sharing of individual resources.
Collective : Coordinates multiple resources.
Application : Constructed by calling upon services defined at any layer.

GRID COMPONENTS
In a world-wide Grid environment, capabilities that the infrastructure needs to support include:
Remote storage
Publication of datasets
Security
Uniform access to remote resources
Publication of services and access cost
Composition of distributed applications
Discovery of suitable datasets
Discovery of suitable computational resources
Mapping and Scheduling of jobs
Submission, monitoring, steering of jobs execution
Movement of code
Enforcement of quality of service
Metering and accounting

GRID LAYERS
Grid Fabric layer
Core Grid middleware
User-level Grid middleware
Grid application and protocols
OPERATIONAL FLOW FROM USER’S PERSPECTIVE
- Installing core Grid middleware
- Resource brokering and application deployment services

COMPONENT INTERACTION
- Distributed application
- Grid resource broker
- Grid information service
- Grid market directory
- Broker identifies the list of computational resources
- Executes the job and returns results
- Metering system passes the resource information to the accounting system
- Accounting system reports resource share allocation to the user
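The interaction flow above can be sketched in miniature. All class and method names below are illustrative assumptions, not a real middleware API: an information service answers the broker's resource query, the broker picks a resource and runs the job, and a shared usage log stands in for the metering and accounting systems.

```python
# Illustrative component interaction: information service -> broker ->
# execution -> metering/accounting. Names are hypothetical.
class InformationService:
    def __init__(self, resources):
        self.resources = resources                 # {name: free CPUs}

    def query(self, cpus_needed):
        return [name for name, free in self.resources.items()
                if free >= cpus_needed]

class Broker:
    def __init__(self, info_service, usage_log):
        self.info = info_service
        self.usage_log = usage_log                  # shared with accounting

    def submit(self, job_name, cpus_needed):
        candidates = self.info.query(cpus_needed)
        if not candidates:
            return None                             # no suitable resource
        chosen = candidates[0]                      # naive pick; real brokers optimize
        self.usage_log.append((job_name, chosen, cpus_needed))  # metering record
        return chosen

usage = []                                          # the accounting system's view
broker = Broker(InformationService({"clusterA": 8, "clusterB": 2}), usage)
print(broker.submit("render-frames", 4))            # clusterA
print(usage)                                        # [('render-frames', 'clusterA', 4)]
```

In a real grid the broker would also consult a market directory for pricing and return results to the user, as the list above describes.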

PROBLEM AND PROMISES
PROBLEMS
- Coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations
- Improving distributed management
- Improving the availability of data
- Providing researchers with a uniform user friendly environment
PROMISES
- Grid utilizes the idle time
- Its ability to make more cost-effective use of resources
- To solve problems that can’t be approached without an enormous amount of computing power.
Reply
#10
[attachment=6621]
Grid Computing



INTRODUCTION
The popularity of the Internet as well as the availability of powerful
computers and high-speed network technologies as low-cost commodity
components is changing the way we use computers today. These technology
opportunities have led to the possibility of using distributed computers as a
single, unified computing resource, leading to what is popularly known as Grid
computing. The term Grid is chosen as an analogy to a power Grid that provides
consistent, pervasive, dependable, transparent access to electricity irrespective
of its source. A detailed analysis of this analogy can be found in. This new
approach to network computing is known by several names, such as
metacomputing, scalable computing, global computing, Internet computing, and
more recently peer-to- peer (P2P) computing.
Reply
#11


Using the GridSim Toolkit for Enabling Grid Computing Education



Manzur Murshed and Rajkumar Buyya
School of Computer Science and Software Engineering
Gippsland School of Computing and Information Technology
Monash University, Gippsland Campus
Churchill, VIC 3842, Australia



Abstract
Numerous research groups in universities, research labs, and industries around the world are now working on Computational Grids, or simply Grids, that enable the aggregation of distributed resources for solving large-scale data-intensive problems in science, engineering, and commerce. Several institutions and universities have started research and teaching programs on Grid computing as part of their parallel and distributed computing curriculum. Researchers and students interested in resource management and scheduling on the Grid need a testbed infrastructure for implementing, testing, and evaluating their ideas. Students often do not have access to a Grid testbed, and even if they do, the testbed size is often small, which limits their ability to test ideas for scalable performance and large-scale evaluation. It is even harder to explore large-scale application and resource scenarios involving multiple users in a repeatable and comparable manner due to the dynamic nature of Grid environments. To address these limitations, we propose the use of simulation techniques for performance evaluation and advocate the use of a Java-based discrete event simulation toolkit, called GridSim. The toolkit provides facilities for modeling and simulating Grid resources (both time- and space-shared high performance computers) and network connectivity with different capabilities and configurations. We have used the GridSim toolkit to simulate a Nimrod-G-like Grid resource broker that supports deadline- and budget-constrained cost and time minimization scheduling algorithms.
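As a toy illustration of the kind of deadline- and budget-constrained selection the abstract describes, the sketch below picks the cheapest resource that can finish a job in time and within budget. The resource fields (`speed`, `price`) and the greedy selection rule are simplifying assumptions, not GridSim's or Nimrod-G's actual algorithms.

```python
# Toy deadline/budget-constrained selection (illustrative only).
# `speed` = work units per time unit; `price` = cost per time unit.
def pick_resource(resources, job_len, deadline, budget):
    """Cheapest resource finishing job_len work before the deadline
    and within budget, or None if no resource qualifies."""
    def runtime(r):
        return job_len / r["speed"]
    feasible = [r for r in resources
                if runtime(r) <= deadline and runtime(r) * r["price"] <= budget]
    return min(feasible, key=lambda r: runtime(r) * r["price"], default=None)

resources = [
    {"name": "fast-costly", "speed": 10.0, "price": 6.0},  # 10 time units, cost 60
    {"name": "slow-cheap",  "speed": 2.0,  "price": 1.0},  # 50 time units, cost 50
]
best = pick_resource(resources, job_len=100, deadline=60, budget=55)
print(best["name"])   # slow-cheap: the fast node meets the deadline but busts the budget
```

Tightening the deadline while loosening the budget flips the choice to the fast resource, which is exactly the deadline/budget trade-off the broker schedules around.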


For more:

http://buyyapapers/gridsimedu.pdf
Reply
#12
[attachment=7480]
Grid Computing


 Presentation agenda
 Introduction.
 Background.
 Definition.
 Why is it?
 How it works?
 Applications
 Entry to Grid
 Adv. & Disadv.
 Conclusion.
 Introduction to Grid Computing
 The term Grid comes from an analogy to the
Electric Grid.
– Pervasive access to power.
– Similarly, Grid will provide pervasive, consistent, and inexpensive access to advanced computational resources.
 Grid computing is all about achieving greater performance and throughput by pooling resources on a local, national, or international level.
 Scalable Computing
 Definition
 What is Grid computing?
Two or more computers working together to improve:
– performance
– scalability
 Compute Grids = Parallel Execution
 Data Grids = parallelize data storage
 Grid computing = Compute Grids + Data Grids
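The "Compute Grids = Parallel Execution" idea can be mimicked on a single machine: where grid middleware fans work out to remote resources, the sketch below (a local analogue for illustration, not grid middleware) fans a computation out to local worker threads and combines the partial results.

```python
# Local analogue of parallel execution on a compute grid: split the
# work into chunks, process chunks concurrently, combine the results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(x * x for x in chunk)        # each worker handles one chunk

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(x * x for x in data))    # True: parallel result matches serial
```

A data grid plays the complementary role, partitioning the `data` itself across sites rather than the computation.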
 Cousins of Grid Computing
 Parallel Computing
 Distributed Computing
 Peer-to-Peer Computing
 Many others: Cluster Computing, Network Computing, Client/Server Computing, Internet Computing, etc...
 Why Grids ?
 Solving grand challenge applications using computer modeling, simulation and analysis
 What is Grid ?
 A paradigm/infrastructure that enables the sharing, selection, & aggregation of geographically distributed resources:
 Computers – PCs, workstations, clusters, supercomputers,
laptops, notebooks, mobile devices, PDA, etc;
 Software – e.g., ASPs renting expensive special purpose applications on demand;
 Catalogued data and databases – e.g. transparent access to
human genome database;
 Special devices/instruments – e.g., radio telescope – SETI@Home searching for life in galaxy.
 People/collaborators.
[depending on their availability, capability, cost, and user QoS requirements]
for solving large-scale problems/applications.
 Are Grids a Solution?
 What does the Grid do for you?
 You submit your work
 And the Grid
– Finds convenient places for it to be run
– Organises efficient access to your data
 Caching, migration, replication
– Deals with authentication to the different sites that you will be using
– Interfaces to local site resource allocation mechanisms, policies
– Runs your jobs, Monitors progress, Recovers from problems,Tells you when your work is complete
 What does the Grid do for you?
 If there is scope for parallelism, it can also decompose your work into convenient execution units based on the available resources, data distribution
 Main components
 How it works
 The grid computing concept isn't a new one.
 It's a special kind of distributed computing.
 In distributed computing, different computers within the same network share one or more resources. In the ideal grid computing system, every resource is shared, turning a computer network into a powerful supercomputer.
 WORKING….
 All the available resources (work stations, servers, software, storage, etc.) as well as a set of tools that could be compared to an operating system, make up the computing grid.
 WORKING….
 At the core is a resource broker, which handles resource supply and demand according to technical and economic criteria.
 A scheduler is responsible for distributing resources to the various machines.
 Security and access are in turn managed by the Grid Security Infrastructure, which handles the identification of each resource solicitor as well as access authorization up to a certain level to guarantee confidentiality.
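A scheduler of the kind described above can be reduced to a few lines: assign each job to the currently least-loaded machine. This greedy rule, and the job and machine names, are hypothetical simplifications of what production grid schedulers do.

```python
# Minimal "least-loaded" scheduler sketch (hypothetical job costs and
# machine names; real grid schedulers weigh many more factors).
def schedule(jobs, machines):
    """Assign each (job, cost) pair to the machine with the lowest load."""
    load = {m: 0 for m in machines}
    placement = {}
    for job, cost in jobs:
        target = min(load, key=load.get)    # least-loaded machine so far
        load[target] += cost
        placement[job] = target
    return placement

plan = schedule([("render", 5), ("encode", 3), ("compile", 4)],
                ["nodeA", "nodeB"])
print(plan)   # {'render': 'nodeA', 'encode': 'nodeB', 'compile': 'nodeB'}
```

In a real grid, the broker and Grid Security Infrastructure would first filter the machine list down to resources the requester is authorized to use.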
 Types of grid
 Computational grid: A computational grid is focused on setting aside resources specifically for computing power. In this type of grid, most of the machines are high-performance servers
 Scavenging grid: A scavenging grid is most commonly used with large numbers of desktop machines. Machines are scavenged for available CPU cycles and other resources.
 Data grid: A data grid is responsible for housing and providing access to data across multiple organizations.
 Users are not concerned with where this data is located.
 Layered Grid Architecture
 FABRIC LAYER:
INTERFACES TO LOCAL CONTROL
 The Grid Fabric layer provides the resources to which shared access is mediated by Grid protocols.
 CONNECTIVITY LAYER:
COMMUNICATING EASILY AND SECURELY
 The Connectivity layer defines core communication and authentication protocols required for Grid-specific network transactions.
 Communication protocols enable the exchange of data between Fabric layer resources.
 RESOURCE LAYER:
SHARING SINGLE RESOURCE
 The Resource layer builds on Connectivity layer communication and authentication protocols to define protocols (and APIs and SDKs) for the secure negotiation, initiation, monitoring, control, accounting, and payment of sharing operations on individual resources.
 COLLECTIVE:
COORDINATING MULTIPLE RESOURCES
 The Collective layer contains protocols and services (and APIs and SDKs) that are not associated with any one specific resource but rather are global in nature and capture interactions across collections of resources.
 Biomedical applications
 Earth sciences
 Earth Observations by Satellite
– Ozone profiles
 Solid Earth Physics
– Fast Determination of mechanisms
of important earthquakes
 Hydrology
– Management of water resources
in Mediterranean area (SWIMED)
 Geology
– Geocluster: R&D initiative of the
Compagnie Générale de Géophysique
 A large variety of applications is the key !!!
 GARUDA
 The Department of Information Technology (DIT), Govt. of India, has funded CDAC to deploy a computational grid named GARUDA as a Proof of Concept project.
 It will connect 45 institutes in 17 cities in the country at 10/100 Mbps bandwidth.
 Other Grids in India
 ADVANTAGES:
 • Can solve larger, more complex problems in a shorter time
• Easier to collaborate with other organizations
• Make better use of existing hardware
 DISADVANTAGES:
 Grid software and standards are still evolving

• Non-interactive job submission
 CONCLUSION:
 Grid computing provides a framework and deployment platform that enables resource sharing, accessing, aggregation, and management in a distributed computing environment based on system performance, users' quality of service, and emerging open standards such as Web services. This is the era of Service Computing.
Reply
#13
Hope they will do something on behalf of the government to improve this field too, as a lot of funding is required in this area of concern.
Reply
#14


[attachment=7779]

ABSTRACT
Grid computing, emerging as a new paradigm for next-generation computing, enables the sharing, selection, and aggregation of geographically distributed heterogeneous resources for solving large-scale problems in science, engineering, and commerce. The resources in the grid are heterogeneous and geographically distributed. Availability, usage, and cost policies vary depending on the particular user, time, priorities, and goals. It enables the regulation of supply and demand for resources; provides an incentive for resource owners who participate in the grid; and motivates the users to trade off between deadline, budget, and the required level of quality of service. A grid is a collection of machines, sometimes referred to as “nodes”, “resources”, “members”, “donors”, “clients”, “hosts”, or “engines”, among many other such terms. They all contribute any combination of resources to the grid as a whole. Some resources may be used by all users of the grid while others may have specific restrictions. IT budgets are declining and data continues to grow at an exponential rate. SAS applications with high volumes of data can take many hours, or possibly days or even weeks, to complete. In some cases, a job is so big that it cannot be completed at all, even given today’s processor speeds. Whether you call it grid computing or not, you want to be able to complete your SAS applications in a reasonable amount of time without making a huge investment in new hardware.

OVERVIEW
 What is grid?
 Grid infrastructure requirements.
 Types of grid
 Grid components and services
 Applications
 Grid computing standards
 Security
 Future trends
 Future direction for SAS & Grid computing
 Case studies
 Conclusion

WHAT IS A GRID?
Advances in computer technology, including ever more powerful hardware and increasingly sophisticated software, have made it possible to apply computers to solving a wide range of complex problems in the fields of science, engineering, and business. There are still any number of problems that are beyond the capabilities of the current generation of supercomputers, however. Furthermore, the nature of these problems often requires access to resources not often found on a single computer. Grid computing provides one type of solution to these issues.
The original purpose behind Grid computing was to link together supercomputers spread across wide distances, but the aims have since moved beyond this scope. The term Grid was coined as an analogy with the power grid, supplying consistent, dependable, and transparent access to an electrical supply. Grid computing is intended to provide an equally consistent, dependable, and transparent collection of computing resources.
A Grid comprises a network of resources, each of which operates autonomously under local control, but which collaborate and communicate with each other. In this respect, a Grid differs from other architectures, such as a cluster, where distributed resources are typically owned and managed by a centralized resource management and scheduling system (all users of a cluster connect through a centralized system that allocates resources to tasks).
Grids can be constructed using entire clusters as nodes in the Grid, together with other localized low-level middleware systems. Grids can additionally make use of other distributed paradigms; the Globus OGSI (Open Grid Services Infrastructure) is based on Web services.

Grid Infrastructure Requirements
A key precept of the Grid paradigm is that Grids should be transparent and seamless: users, applications, and services should be able to view the Grid as a single virtual computer. Grid architectures are based on resource brokers, resolvers, and other pieces of Grid middleware that perform resource discovery, scheduling, and processing of jobs.
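The resource discovery and scheduling performed by a broker can be sketched in miniature: match a job's requirements against the capabilities each resource advertises, then pick the least-loaded match. The field names and selection policy below are illustrative, not part of any real Grid middleware API.

```python
# Toy resource broker: discovery filters resources by advertised
# capability; scheduling picks the least-loaded candidate.

def discover(resources, requirements):
    """Return resources whose advertised capabilities satisfy the job."""
    return [r for r in resources
            if r["cpus"] >= requirements["cpus"]
            and r["mem_gb"] >= requirements["mem_gb"]]

def schedule(resources, requirements):
    """Pick the least-loaded matching resource, or None if none match."""
    candidates = discover(resources, requirements)
    return min(candidates, key=lambda r: r["load"]) if candidates else None

resources = [
    {"name": "nodeA", "cpus": 4,  "mem_gb": 8,  "load": 0.7},
    {"name": "nodeB", "cpus": 16, "mem_gb": 64, "load": 0.2},
    {"name": "nodeC", "cpus": 8,  "mem_gb": 16, "load": 0.1},
]
job = {"cpus": 8, "mem_gb": 16}
print(schedule(resources, job)["name"])   # nodeC
```

A real broker would also weigh network locality, queue lengths, and site policy; this sketch shows only the matchmaking core.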
In order to maintain the seamless nature of a Grid, any architecture must consider a number of issues, including the following:
 The need to respect the local autonomy of the various administrative domains that comprise the Grid. The systems linked together will be managed by local administrators who must be allowed to implement their own security policies and protect their own resources as they see fit.
 The different computing resources will inevitably span a variety of heterogeneous hardware.
 An appreciation of the dynamic nature of the Grid. Computers may join or leave the Grid at any time. The architecture implemented by the Grid must be scalable, supporting anything from a small number of nodes to thousands of computers, without imposing an overhead that degrades performance.
 The importance of resilience. In any network, the chance of a single node failing increases as more and more nodes are added to the system. In a network involving many thousands of computers, it is likely that at least some computers will be offline at any given time. The Grid must be able to adapt dynamically, maintaining an up-to-date catalog of available resources.
 A Grid must be non-intrusive to applications, services, and users not making use of it. There should be no observable degradation in service to local users accessing a computer that is also part of a Grid. This goal can be accomplished by careful scheduling of Grid tasks, and by ensuring that those tasks execute at a suitably low priority.
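The up-to-date catalog of available resources mentioned above is typically maintained with heartbeats: nodes report in periodically, and any node that falls silent past a timeout is dropped. A minimal sketch (all names illustrative):

```python
# Dynamic resource catalog: nodes send heartbeats; nodes whose last
# heartbeat is older than the timeout are pruned from the catalog.
import time

class ResourceCatalog:
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}          # node name -> last heartbeat time

    def heartbeat(self, node, now=None):
        self.last_seen[node] = time.monotonic() if now is None else now

    def available(self, now=None):
        now = time.monotonic() if now is None else now
        # Prune nodes that have silently left the Grid.
        self.last_seen = {n: t for n, t in self.last_seen.items()
                          if now - t <= self.timeout}
        return sorted(self.last_seen)

cat = ResourceCatalog(timeout=5.0)
cat.heartbeat("nodeA", now=0.0)
cat.heartbeat("nodeB", now=3.0)
print(cat.available(now=6.0))   # nodeA has timed out -> ['nodeB']
```

The `now` parameter exists only to make the sketch deterministic; in practice the monotonic clock is used directly.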


TYPES OF GRID
Many grid implementations are oriented towards supplying specific types of resources. Grids can be categorized according to these resources. The most common types of grid are
 Computational grids: These provide resources for executing tasks, using spare CPU cycles on networked computers. Grid tasks are often scheduled to run as background tasks, performed when no higher-priority local jobs are executing.
 Data grids: These provide secure access to, and management of, large distributed data sets. A data grid typically implements replication and catalogue services, giving the illusion that the entire data set is held on a single piece of data storage. The data is usually processed using a computational grid.
 Application grids: These extend the notions of computational and data grids to provide transparent access to remote libraries and applications. In many instances, they can be implemented using web services acting as facades for remote services, in conjunction with UDDI (Universal Description, Discovery, and Integration), providing location transparency.
Other types of grid exist; knowledge grids, for example, provide services that use information to help solve particular problems using specific algorithms.
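The replication and catalogue services of a data grid can be illustrated with a tiny replica catalogue: one logical file name maps to several physical copies, and clients resolve the logical name without caring where replicas live. Class and URL names here are made up for illustration.

```python
# Minimal replica catalogue in the spirit of a data grid: a logical
# file name maps to a list of physical replicas.

class ReplicaCatalog:
    def __init__(self):
        self.replicas = {}           # logical name -> list of physical URLs

    def register(self, logical, physical):
        self.replicas.setdefault(logical, []).append(physical)

    def lookup(self, logical):
        """Return a replica of the logical file (here: simply the first)."""
        copies = self.replicas.get(logical)
        if not copies:
            raise KeyError(f"no replica of {logical}")
        return copies[0]

cat = ReplicaCatalog()
cat.register("lfn:survey.dat", "gsiftp://siteA/data/survey.dat")
cat.register("lfn:survey.dat", "gsiftp://siteB/store/survey.dat")
print(cat.lookup("lfn:survey.dat"))
```

A production catalogue would choose replicas by proximity or load rather than taking the first, but the mapping from one logical name to many physical locations is the essential idea.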

Grid Components and Services
A grid must be designed to provide services that hide the underlying differences between the computers in the network and present a single, unified view of the entire scheme:
 Communications: A grid can comprise a variety of network technologies of varying quality, and it can implement many different protocols. The communications infrastructure provided by a grid must be robust enough to handle and resolve communications failures between nodes, and it must support protocols that can transmit many diverse types of data in a reliable manner.
 Authentication and Authorization: In any networked environment, security is a complex issue. With grids, that complexity is particularly acute. Grid security must interoperate with local security systems.

 Naming Services and Location Transparency: Resources must be identifiable and locatable. A single uniform name space that spans the entire grid is essential. A grid-wide directory service such as the Grid Index Information Service (GIIS) can combine views from multiple local catalogues, usually based on standard protocols such as LDAP (Lightweight Directory Access Protocol).

 Distributed File System: Distributed applications executing on a Grid need access to data held in files that may be spread across a large number of computers. A distributed file system provides a single view of the data storage available throughout the Grid and makes the physical location of files transparent to applications accessing those files.

 Resource Management: Different network applications can have varying network flows, incorporating periods of high and low latency. A Grid must provide a sufficient quality of service to cater to these differing rates and to ensure resource availability whenever possible.

 Fault Tolerance: It is vital that Grids provide tools for monitoring, maintaining, and reconfiguring resources. These tools can be used to implement transparent failover in the event that a particular resource becomes unavailable.
The facilities of a Grid should be easily accessible to users and administrators. It is common, therefore, to provide graphical interfaces that allow users to submit jobs and monitor tasks as they are executed by the Grid. The Internet supplies an ideal framework for providing access to remote services, due to the connectivity available and the portable nature of the interfaces that can be generated.
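The transparent failover described under Fault Tolerance can be reduced to a simple pattern: try a task on one resource, and if that resource is unreachable, resubmit it to the next without involving the user. A minimal sketch, with hypothetical resource functions standing in for remote nodes:

```python
# Transparent failover: each "resource" is modeled as a callable that
# either returns a result or raises ConnectionError when offline.

def run_with_failover(task, resources):
    """Try each resource in turn; return the first successful result."""
    errors = []
    for resource in resources:
        try:
            return resource(task)
        except ConnectionError as exc:     # resource offline
            errors.append(exc)
            continue                       # fail over to the next one
    raise RuntimeError(f"all resources failed: {errors}")

def dead_node(task):
    raise ConnectionError("node unreachable")

def live_node(task):
    return task * 2

print(run_with_failover(21, [dead_node, live_node]))   # 42
```

Real middleware would combine this with the monitoring catalog so that known-dead resources are skipped entirely rather than retried.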

APPLICATIONS
Applications developed to take advantage of a Grid can be built using Grid-enabled tools and technologies. A Grid should provide the interfaces, libraries, utilities, and programming APIs to support the development effort required. Common tools and libraries for building Grid applications include High Performance C++ (HPC++) and the Message Passing Interface (MPI).
 Applications consist of user programs
 They can be constructed using services defined at any layer
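MPI, mentioned above, is built around point-to-point send and receive operations between processes. As a self-contained stand-in (real Grid codes would use MPI itself, e.g. via mpi4py), the same send/recv pattern can be sketched with Python's multiprocessing pipes:

```python
# A minimal point-to-point message-passing sketch in the style of
# MPI send/recv, using multiprocessing as a self-contained stand-in.
from multiprocessing import Pipe, Process

def worker(conn):
    data = conn.recv()                    # blocking receive, like MPI_Recv
    conn.send(sum(x * x for x in data))   # return result, like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send([1, 2, 3, 4])             # distribute work to the worker
    print(parent.recv())                  # 1 + 4 + 9 + 16 = 30
    p.join()
```

The structure (spawn workers, send work, receive partial results) is the same whether the transport is a local pipe or an MPI runtime spanning a Grid.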

GRID COMPUTING STANDARDS
The Globus Toolkit has emerged as the de facto standard for grid middleware. The Globus Alliance conducts research and development to create fundamental technologies behind the Grid, which lets people share computing power, databases, and other on-line tools securely across corporate, institutional, and geographic boundaries without sacrificing local autonomy. Globus provides protocols to handle grid resource management. These are:
 Resource Management: Grid Resource Allocation Manager (GRAM)
 Information Services: Monitoring and Discovery Service (MDS)
 Data Movement and Management: Global Access to Secondary Storage (GASS)
 GridFTP




Reply
#17



Dheeraj Bhardwaj
Department of Computer Science and Engineering
Indian Institute of Technology, Delhi


Outline

The technology landscape
Grid computing
The Globus Toolkit
Applications and technologies
Data-intensive; distributed computing; collaborative; remote access to facilities
Grid infrastructure
Open Grid Services Architecture
Global Grid Forum
Summary and conclusions

Living in an Exponential World: (2) Storage

Storage density doubles every 12 months
Dramatic growth in online data (1 petabyte = 1,000 terabytes = 1,000,000 gigabytes)
2000 ~0.5 petabyte
2005 ~10 petabytes
2010 ~100 petabytes
2015 ~1000 petabytes?
Transforming entire disciplines in physical and, increasingly, biological sciences; humanities next?

Data Intensive Physical Sciences

High energy & nuclear physics
Including new experiments at CERN
Gravity wave searches
LIGO, GEO, VIRGO
Time-dependent 3-D systems (simulation, data)
Earth Observation, climate modeling
Geophysics, earthquake modeling
Fluids, aerodynamic design
Pollutant dispersal scenarios
Astronomy: Digital sky surveys

Ongoing Astronomical Mega-Surveys

Large number of new surveys
Multi-TB in size, 100M objects or larger
In databases
Individual archives planned and under way
Multi-wavelength view of the sky
> 13 wavelength coverage within 5 years
Impressive early discoveries
Finding exotic objects by unusual colors
L,T dwarfs, high redshift quasars
Finding objects by time variability
Gravitational micro-lensing

Coming Floods of Astronomy Data

The planned Large Synoptic Survey Telescope will produce over 10 petabytes per year by 2008!
All-sky survey every few days, so will have fine-grain time series for the first time

Data Intensive Biology and Medicine

Medical data
X-Ray, mammography data, etc. (many petabytes)
Digitizing patient records (ditto)
X-ray crystallography
Molecular genomics and related disciplines
Human Genome, other genome databases
Proteomics (protein structure, activities, …)
Protein interactions, drug delivery
Virtual Population Laboratory (proposed)
Simulate likely spread of disease outbreaks
Brain scans (3-D, time dependent)

Evolution of Business

Pre-Internet
Central corporate data processing facility
Business processes not compute-oriented
Post-Internet
Enterprise computing is highly distributed, heterogeneous, inter-enterprise (B2B)
Outsourcing becomes feasible => service providers of various sorts
Business processes increasingly computing- and data-rich

The Grid

“Resource sharing & coordinated problem solving in dynamic, multi-institutional virtual organizations”

The Grid Opportunity: eScience and eBusiness
Physicists worldwide pool resources for peta-op analyses of petabytes of data
Civil engineers collaborate to design, execute, & analyze shake table experiments
An insurance company mines data from partner hospitals for fraud detection
An application service provider offloads excess load to a compute cycle provider
An enterprise configures internal & external resources to support eBusiness workload

Challenging Technical Requirements
Dynamic formation and management of virtual organizations
Online negotiation of access to services: who, what, why, when, how
Establishment of applications and systems able to deliver multiple qualities of service
Autonomic management of infrastructure elements
Open Grid Services Architecture



Reply
#18

Grid Computing
Scenario

For years, Dr. Rayburn has been looking for tools to help his architecture students move beyond paper sketches and scaled-down models. He knows that as working architects, they will be using computer simulations that require not just design skill but proficiency with increasingly complex software and hardware. Unfortunately, his department cannot afford to purchase and support a computing system with the necessary processing capacity to run such advanced applications.
Over the summer, the university's IT staff, working with the computer science department, set up a computer grid running on the campus network. The grid connects nearly all university-owned computers, including those in labs, the library, as well as faculty and staff offices. The software that runs the grid gives local users priority for those machines, but when they are idle, their processors can be used over the grid. Using the power of the campus grid, Dr. Rayburn's students can now use sophisticated architectural design software that previously was unavailable because of its processing requirements. With the software, students can design buildings and other structures as well as the areas surrounding them, and create three-dimensional, interactive animations of their designs. As presentations, the animations allow viewers to "fly" over and around the scenes the students generate, zooming in and out and moving in any direction they want to go. The university's grid supplies enough unused computing power to process the animations fast enough for it all to function smoothly.
After several weeks of using the software, two of Dr. Rayburn's students persuade faculty in the meteorology department to connect a very large climatic database to the grid. The database includes data about the exact positioning of the sun and moon at any latitude on the globe during daily, monthly, and yearly cycles, as well as historical data on weather conditions for most parts of the world. With the database available on the grid, the students can incorporate seasonal changes into their animations. They can render a building at a particular latitude, at a specific time of the year or spanning weeks or months. Dr. Rayburn sees that with the new capabilities, his students are able to create better designs, ones that make more creative use of natural light, even as seasons change, and that demonstrate students' deliberation about how their structures interact with the environment.
What is it?
Computing grids are conceptually not unlike electrical grids. In an electrical grid, wall outlets allow us to link to an infrastructure of resources that generate, distribute, and bill for electricity. When you connect to the electrical grid, you don't need to know where the power plant is or how the current gets to you. Grid computing uses middleware to coordinate disparate IT resources across a network, allowing them to function as a virtual whole. The goal of a computing grid, like that of the electrical grid, is to provide users with access to the resources they need, when they need them.
Grids address two distinct but related goals: providing remote access to IT assets, and aggregating processing power. The most obvious resource included in a grid is a processor, but grids also encompass sensors, data-storage systems, applications, and other resources. One of the first commonly known grid initiatives was the SETI@home project, which solicited several million volunteers to download a screensaver that used idle processor capacity to analyze data in the search for extraterrestrial life. In a more recent example, the Telescience Project provides remote access to an extremely powerful electron microscope at the National Center for Microscopy and Imaging Research in San Diego. Users of the grid can remotely operate the microscope, allowing new levels of access to the instrument and its capabilities.
Who’s doing it?
Many grids are appearing in the sciences, in fields such as chemistry, physics, and genetics, and cryptologists and mathematicians have also begun working with grid computing. Grid technology has the potential to significantly impact other areas of study with heavy computational requirements, such as urban planning. Another important area for the technology is animation, which requires massive amounts of computational power and is a common tool in a growing number of disciplines. By making resources available to students, these communities are able to effectively model authentic disciplinary practices.
How does it work?
Grids use a layer of middleware to communicate with and manipulate heterogeneous hardware and data sets. In some fields (astronomy, for example) hardware cannot reasonably be moved and is prohibitively expensive to replicate on other sites. In other
Reply
#19
Grid Computing
Terminology

 Authentication:
– Establishing who you are
 Authorization:
– Establishing what you are allowed to do
 Assurance/accreditation
– Validating authority of a service provider
 Accounting and auditing
– Tracking, limiting and charging for resources
 Messages
– Message integrity
– Message confidentiality
 Non-repudiation
– Proof that a message was sent or received, which cannot later be denied
 Digital signature
– Cryptographic assurance of a message's origin and integrity
 Certificate authority
– A body which issues and manages security credentials
 Delegation
– Authority to act as someone else
TLS/SSL
 TLS (Transport Layer Security) is the successor to SSL (Secure Sockets Layer).
 Secure Sockets Layer is a protocol that transmits your communications over the Internet in an encrypted form. SSL ensures that the information is sent, unchanged, only to the server you intended to send it to.
 Lies above TCP/IP layer and below HTTP layer.
 Developed by Netscape for transmitting private documents via the Internet. SSL works by using a private key to encrypt data that's transferred over the SSL connection. Both Netscape Navigator and Internet Explorer support SSL, and many Web sites use the protocol to obtain confidential user information, such as credit card numbers. By convention, URLs that require an SSL connection start with https: instead of http:.
http://wp.netscapeeng/ssl3/
http://ietfhtml.charters/tls-charter.html
 Requires a direct transport layer between endpoints
Public Key Encryption
 Entity generates two keys, one is designated as the public key, one is the private key.
 The private key must be kept private!
 Public key is given out (eg in an X.509 certificate)
 If one key is used to encrypt a message, the other key must be used to decrypt it.
 Possession of private key (and ability to encrypt/decrypt challenge messages) proves ownership.
 The encryption method is public knowledge, so encryption alone does not provide data integrity or authentication of data origin
 Slower than other methods (not so good for bulk transfer or lots of small items)
 Based on belief that it is not possible to determine the decryption mechanism from the encryption mechanism.
 More secure than username/password (requires both a passphrase and possession of the private key)
 Security relies on identity establishment.
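The central property above, that whatever one key encrypts only the other key can decrypt, can be demonstrated with textbook RSA using tiny primes. This is purely illustrative and far too small to be secure:

```python
# Textbook RSA with tiny primes, to illustrate the public/private
# key pair property. Never use key sizes like this in practice.
p, q = 61, 53
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

msg = 65
cipher = pow(msg, e, n)      # encrypt with the public key
plain = pow(cipher, d, n)    # decrypt with the private key
print(plain == msg)          # True

# Signing is the same operation with the roles of the keys swapped:
sig = pow(msg, d, n)         # transform with the private key
print(pow(sig, e, n) == msg) # anyone can verify with the public key
```

This also shows why possession of the private key proves ownership: only its holder can produce a value that the public key successfully reverses.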
Reply
#20
PRESENTED BY:
SAI SANDEEP TIIRLANGI
NAGURBABU CHINNAM

ABSTRACT
Today we live in the Internet world, and everyone expects fast access to the Internet. But with multiple simultaneous downloads, the system may hang or its performance may degrade, forcing the entire process to restart from the beginning. This is a serious problem that needs the attention of researchers.
We have taken up this problem for our research, and in this paper we provide a layout for implementing our proposed Grid Model, which can access the Internet very fast. Using our Grid, any number of files can be downloaded very quickly, depending on the number of systems employed in the Grid. We have used the concept of Grid Computing for this purpose.
The Grid we formulate uses the standard Globus Architecture, the de facto standard architecture currently used worldwide for developing Grids. We have also proposed an algorithm for laying out our Grid Model, which we consider a blueprint for further implementation. When practically implemented, our Grid lets the user experience lightning-fast downloads of multiple files over the Internet.
Key words:
Grid Security Interface (GSI), Global Access to Secondary Storage (GASS), Monitoring and Discovery Service (MDS), Globus Resource Allocation Manager (GRAM).
CPU cycles can be efficiently used by uniting pools of servers, storage systems and networks into a single large virtual system for resource sharing dynamically at runtime. These systems can be distributed across the globe; they're heterogeneous (some PCs, some servers, maybe mainframes and supercomputers); somewhat autonomous (a Grid can potentially access resources in different organizations).
Although Grid computing is firmly ensconced in the realm of academic and research activities, more and more companies are starting to turn to it for solving hard-nosed, real-world problems.
WHAT IS GRID?
“Resource sharing & coordinated problem solving in dynamic, multi-institutional virtual organizations”.
IMPORTANCE OF GRID COMPUTING
Grid computing is emerging as a viable technology that businesses can use to wring more profits and productivity out of IT resources, and it is going to be up to developers and administrators to understand Grid computing and put it to work. It is really more about bringing a problem to the computer (or Grid) and getting a solution to that problem. Grid computing is flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources. Grid computing enables the virtualization of distributed computing resources such as processing, network bandwidth, and storage capacity to create a single system image, granting users and applications seamless access to vast IT capabilities. Just as an Internet user views a unified instance of content via the World Wide Web, a Grid user essentially sees a single, large, virtual computer.
Grid computing will give worldwide access to a network of distributed resources - CPU cycles, storage capacity, devices for input and output, services, whole applications, and more abstract elements like licenses and certificates.
For example, to solve a compute-intensive problem, the problem is split into multiple tasks that are distributed over local and remote systems, and the individual results are consolidated at the end. Viewed from another perspective, these systems are connected to one big computing Grid. The individual nodes can have different architectures, operating systems, and software versions. Some of the target systems can be clusters of nodes themselves or high performance servers.
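The split-distribute-consolidate pattern just described can be sketched with a local process pool standing in for remote Grid nodes (the function names and the sum-of-squares task are illustrative):

```python
# Split a compute-intensive problem into tasks, distribute them to
# workers, and consolidate the partial results at the end.
from concurrent.futures import ProcessPoolExecutor

def subtask(chunk):
    """Work done on one node: here, a partial sum of squares."""
    return sum(x * x for x in chunk)

def grid_sum_of_squares(data, n_tasks=4):
    # Split the problem into independent tasks...
    chunks = [data[i::n_tasks] for i in range(n_tasks)]
    with ProcessPoolExecutor() as pool:
        partials = pool.map(subtask, chunks)   # ...distribute them...
    return sum(partials)                       # ...and consolidate.

if __name__ == "__main__":
    print(grid_sum_of_squares(list(range(100))))
```

On a real Grid the chunks would travel to heterogeneous nodes over the network, but the decomposition and the final consolidation step are exactly the same.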
WHY GRIDS AND WHY NOW?
• A biochemist exploits 10,000 computers to screen 100,000 compounds in an hour
• 1,000 physicists worldwide pool resources for petaop analyses of petabytes of data
• Civil engineers collaborate to design, execute, & analyze shake table experiments
• Climate scientists visualize, annotate, & analyze terabyte simulation datasets
• An emergency response team couples real time data, weather model, population data
• A multidisciplinary analysis in aerospace couples code and data in four companies
• A home user invokes architectural design functions at an application service provider
• Scientists working for a multinational soap company design a new product
• A community group pools members’ PCs to analyze alternative designs for a local road
Why Now?
The following are the reasons why now we are concentrating on Grids:
• Moore’s law improvements in computing produce highly functional end systems
• The Internet and burgeoning wired and wireless networks provide universal connectivity
• Changing modes of working and problem solving emphasize teamwork, computation
• Network exponentials produce dramatic changes in geometry and geography
The network exponentials are as follows:
Network vs. computer performance
• Computer speed doubles every 18 months
• Network speed doubles every 9 months
• Difference = order of magnitude per 5 years
Reply
#21
1. ABSTRACT:
The last decade has seen a substantial increase in computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often very data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer or Grid computing. Grid computing provides key infrastructure for distributed problem solving in dynamic virtual environments. It has been adopted by many scientific projects, and industrial interest is rising rapidly. However, Grids are still the domain of a few highly trained programmers with expertise in networking, high-performance computing, and operating systems.
The early efforts in Grid computing started as a project to page link supercomputing sites, but have now grown far beyond their original intent. In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started and aim to exploit the Web as an infrastructure for running distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology to create a pervasive and ubiquitous Grid-based infrastructure.
2. INTRODUCTION:
Parallel supercomputers continue to increase in power and in their ability to solve very large and complex problems in computational science. For many users, however, there are a number of practical limitations associated with these machines, including their high cost, the difficulty of obtaining access to them, and the difficulty of writing or procuring software tools that execute on them. In recent years, there has been a good deal of interest in alternative computing platforms known as computational grids, which are made up of large collections of geographically dispersed CPUs, storage, and visualization devices linked by local networks and the Internet. Of particular interest to the optimization community are computational grids that are made up of workstations, PCs and PC clusters, and supercomputer nodes, and which may be owned by a number of different individuals and institutions. Grids grant access to computer cycles that would not otherwise be used by the owners of the machines of which they are composed, without interfering with the computing activities of the machine owners. A key contribution of Grid computing is the potential for seamless aggregation of and interaction among computing, data, and information resources, which is enabling a new generation of scientific and engineering applications that are self-optimizing and dynamic data driven. However, achieving this goal requires a service-oriented Grid infrastructure that leverages standardized protocols and services to access hardware, software, and information resources. Usually, grids provide sophisticated interfaces to distributed resource management as well as application execution and monitoring in wide and local area networks. These networks may connect thousands of computers by high-speed links of up to 40 gigabits/sec. The computing resources include nodes made of thousands of processors, and terabytes of storage media.
Grid resources can be used to solve grand challenge problems in areas such as biophysics, chemistry, biology, scientific instrumentation, drug design, high energy physics, data mining, financial analysis, nuclear simulations, material science, chemical engineering, environmental studies, climate modeling, weather prediction, molecular biology, neuroscience/brain activity analysis, structural analysis, mechanical CAD/CAM, and astrophysics.
3. DISTRIBUTED COMPUTING:
Distributed Computing is an environment in which a group of independent and geographically dispersed computer systems take part in solving a complex problem, each by solving a part of the problem and then combining the results from all computers. It utilizes a network of many computers, each accomplishing a portion of an overall task, to achieve a computational result much more quickly than with a single computer. Distributed Computing normally refers to the managing or pooling of hundreds or thousands of computer systems, which individually are limited in their memory and processing power. These are loosely coupled systems working in coordination toward a common goal.
4. THE BASICS OF GRID COMPUTING:
The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electrical power grid.
Grid computing is a form of distributed computing that involves coordinating and sharing computing, application, data, storage or network resources across dynamic and geographically dispersed organizations. However the vision of a large scale resource sharing is not yet a reality in many areas as Grid computing is an evolving area of computing, while standards and technology are still being developed to enable this new paradigm.
Grid computing offers a model for solving massive computational problems by making use of the unused resources (CPU cycles and/or disk storage) of large numbers of disparate, often desktop, computers treated as a virtual cluster embedded in a distributed telecommunications infrastructure. It is an emerging computing model that provides the ability to perform higher-throughput computing by taking advantage of many networked computers to model a virtual computer architecture that is able to distribute process execution across a parallel infrastructure. Grids use the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems. They provide the ability to perform computations on large data sets by breaking them down into many smaller ones, or to perform many more computations at once than would be possible on a single computer, by modeling a parallel division of labour between processes. Many grids use the idle time on many thousands of computers throughout the world. Such arrangements permit handling of data that would otherwise require the power of expensive supercomputers or would have been impossible to analyze otherwise.
Grid computing has the design goal of solving problems too big for any single supercomputer, while retaining the flexibility to work on multiple smaller problems. Thus grid computing provides a multi-user environment. Its secondary aims are: better exploitation of the available computing power, and catering for the intermittent demands of large computational exercises. Grid Computing can be seen as a super set of distributed computing.
Functionally one can classify grids into several types:
• Computational Grids: which focus primarily on computationally intensive operations.
• Data Grids: which control the sharing or management of large amount of distributed data.
• Equipment Grids: which have a primary piece of equipment e.g., a telescope, and where the surrounding grid is used to control the equipment remotely and to analyze the data produced.
Many projects using grid computing cover tasks such as protein folding, research into drugs for cancer, mathematical problems, and climate models. Most of these projects work by running as a screensaver on users' personal computers, which process small pieces of the overall data while the computer is either completely idle or lightly used. These programs generally run in the background or as a screensaver when the user does not need the entire computing power of the PC. Many such projects have made progress in fields that would otherwise have required prohibitive investment or a delay in results.
Reply
#22
Presented BY:
BISMITA BARIK

Grid computing
”The Computational Grid” is analogous to the electricity (power) grid, and the vision is to offer (almost) dependable, consistent, pervasive, and inexpensive access to high-end resources, irrespective of where they physically exist and from where they are accessed.
HISTORY
The term grid computing originated in the 1990s as a metaphor for making computer power as easy to access as the electric power grid.
The ideas of the Grid were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, widely regarded as the "fathers of the Grid".
Scalable HPC: Breaking Administrative Barriers
Why is it used?
USING A GRID

A Typical Grid Computing Environment
ARCHITECTURE

A Layered View of a Grid
Computers, supercomputers, storage devices, instruments …
FEATURES……………
Grid computing is driven by five big areas:
Secure access: Trust between resource providers and users is essential, especially when they don't know each other. Sharing resources conflicts with security policies in many individual computer centers, and on individual PCs, so getting grid security right is crucial.
Resource sharing: Global sharing is the very essence of grid computing.
Resource use: Efficient, balanced use of computing resources is essential.
The death of distance: Distance should make no difference: you should be able to access to computer resources from wherever you are.
Open standards: Interoperability between different grids is a big goal, and is driven forward by the adoption of open standards for grid development, making it possible for everyone can contribute constructively to grid development. Standardization also encourages industry to invest in developing commercial grid services and infrastructure.
How does a Grid Service work?
Client uses a Grid service interface
A grid service instance is created from a Factory with the help of a Registry
The grid service instances run with appropriate resources automatically allocated
New instances can be allocated and destroyed dynamically to improve performance
Example: A web serving environment could dynamically allocate extra instances to provide consistent user response time
Simple Invocation Example
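The factory/registry interaction outlined above can be sketched as follows. All class and method names here are illustrative stand-ins, not the API of any real grid toolkit:

```python
class Registry:
    """Maps service names to the factories that can create them."""
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def lookup(self, name):
        return self._factories[name]

class Factory:
    """Creates (and destroys) grid service instances on demand."""
    def __init__(self, service_cls):
        self._service_cls = service_cls
        self.instances = []

    def create(self):
        inst = self._service_cls()  # resources would be allocated here
        self.instances.append(inst)
        return inst

    def destroy(self, inst):
        self.instances.remove(inst)

class WebService:
    """Toy service standing in for a real grid service implementation."""
    def handle(self, request):
        return f"served {request}"

# Client side: find the factory via the registry, create an instance, invoke it.
registry = Registry()
registry.register("web", Factory(WebService))

factory = registry.lookup("web")
instance = factory.create()
print(instance.handle("page.html"))  # prints: served page.html

# Under load, extra instances can be created; when load drops, destroyed again.
extra = factory.create()
factory.destroy(extra)
```

This mirrors the web-serving example above: the factory spins up extra instances during peaks to keep response time consistent, and tears them down afterwards.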
Grid: Towards Internet Computing for (Coordinated) Resource Sharing
Drug Design: Data Intensive Computing on Grid
It involves screening millions of chemical compounds (molecules) in the Chemical DataBase (CDB) to identify those having potential to serve as drug candidates.
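As a rough illustration of why this workload suits a grid, the sketch below splits a (toy) compound list across workers, each standing in for a grid node; the scoring function is a placeholder, not real molecular docking:

```python
from concurrent.futures import ThreadPoolExecutor

def score_compound(compound):
    """Placeholder scoring; real screening would dock the compound
    against the target protein and return a binding score."""
    return (compound, sum(ord(ch) for ch in compound) % 100)

def screen(compounds, top_n=3, workers=4):
    """Score compounds in parallel (each worker models one grid node)
    and return the highest-scoring drug candidates."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = list(pool.map(score_compound, compounds))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

candidates = screen(["aspirin", "ibuprofen", "caffeine", "taxol"], top_n=2)
print(candidates)  # the two best-scoring (toy) candidates
```

Because each compound is scored independently, millions of compounds in the CDB can be partitioned into sub-jobs and farmed out across the grid with no communication between nodes.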
Main components

REAL GRIDS
Typical current grid
Virtual organisations negotiate with sites to agree access to resources
Grid middleware runs on each shared resource to provide:
• Data services
• Computation services
• Single sign-on
Distributed services (both people and middleware) enable the grid
Grid initiatives
#23
[attachment=10407]
Abstract
Grid computing, emerging as a new paradigm for next-generation computing, enables the sharing, selection, and aggregation of geographically distributed heterogeneous resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous and geographically distributed, and their availability, usage, and cost policies vary depending on the particular user, time, priorities, and goals. Grid computing enables the regulation of supply and demand for resources.
It provides an incentive for resource owners to participate in the Grid, and motivates users to trade off between deadline, budget, and the required level of quality of service. The work demonstrates the capability of economic-based systems for wide-area parallel and distributed computing by developing scheduling strategies, algorithms, and systems based on users' quality-of-service requirements, and shows their effectiveness through scheduling experiments on the World-Wide Grid, solving parameter-sweep (task- and data-parallel) applications.
This paper covers an introduction and a definition of the grid, grid characteristics, types of grids, and an example describing a community grid model. It then gives an overview of grid tools, various components, and advantages, followed by a conclusion.
1. INTRODUCTION:
The Grid unites servers and storage into a single system that acts as one computer: all applications tap into all available computing power, hardware resources are fully utilized, and spikes in demand are met with ease.
2. THE GRID:
The Grid is the computing and data management infrastructure that will provide the electronic underpinning for a global society in business, government, research, science, and entertainment. It integrates networking, communication, computation, and information to provide a virtual platform for computation and data management, in the same way that the Internet integrates resources to form a virtual platform for information. Grid infrastructure will provide the ability to dynamically link resources together as an ensemble to support the execution of large-scale, resource-intensive, distributed applications.
Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed "autonomous" resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements.
What Can a Grid Do?
Exploiting underutilized resources:
In most organizations, there are large amounts of underutilized computing resources. Most desktop machines are busy less than 5 percent of the time. In some organizations, even the server machines can often be relatively idle. Grid computing provides a framework for exploiting the underutilized resources and thus has the possibility of substantially increasing the efficiency of resource usage. Another function of the grid is to better balance resource utilization. An organization may have occasional unexpected peaks of activity that demand more resources. If the applications are grid-enabled, they can be moved to underutilized machines during such peaks. In fact, some grid implementations can migrate partially completed jobs. In general, a grid can provide a consistent way to balance the loads on a wider federation of resources. This applies to CPU, storage, and many other kinds of resources that may be available on a grid.
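The load-balancing idea in the paragraph above can be illustrated with a toy scheduler that always places the next job on the least-utilized machine. The machine names and utilization figures below are invented for the example; a real grid scheduler also weighs data locality, job priorities, and policies:

```python
def least_loaded(machines):
    """Pick the machine with the lowest current utilization (0.0 to 1.0)."""
    return min(machines, key=machines.get)

def schedule(jobs, machines, cost_per_job=0.1):
    """Place each job on the least-loaded machine, updating loads as we go."""
    placement = {}
    for job in jobs:
        target = least_loaded(machines)
        placement[job] = target
        machines[target] += cost_per_job  # account for the newly placed job
    return placement

machines = {"desktop-17": 0.05, "server-2": 0.60, "lab-pc-3": 0.10}
print(schedule(["job-a", "job-b", "job-c"], machines))
# job-a and job-c land on desktop-17, job-b on lab-pc-3; the busy server-2 gets nothing
```

Even this greedy sketch shows how jobs naturally flow toward the idle desktop machines, which is exactly the "exploiting underutilized resources" effect described above.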
Parallel CPU capacity
The potential for massive parallel CPU capacity is one of the most attractive features of a grid. In addition to pure scientific needs, such computing power is driving a new evolution in industries such as the bio-medical field, financial modeling, oil exploration, motion picture animation, and many others.
The common attribute among such uses is that the applications have been written to use algorithms that can be partitioned into independently running parts. A CPU-intensive grid application can be thought of as many smaller "sub-jobs," each executing on a different machine in the grid. The less these sub-jobs need to communicate with each other, the more scalable the application becomes. A perfectly scalable application will, for example, finish 10 times faster if it uses 10 times the number of processors. Barriers often prevent perfect scalability. The first barrier depends on the algorithms used for splitting the application among many CPUs: if the algorithm can only be split into a limited number of independently running parts, that limit forms a scalability barrier. The second barrier appears if the parts are not completely independent; the resulting contention can limit scalability. For example, if all of the sub-jobs need to read and write one common file or database, the access limits of that file or database become the limiting factor in the application's scalability.
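One standard way to quantify these scalability barriers (not named in the text above, but widely used) is Amdahl's law: if a fraction s of the work cannot be split into independent parts, then the speedup on n processors is 1 / (s + (1 - s) / n), no matter how large n grows.

```python
def speedup(serial_fraction, n_processors):
    """Amdahl's law: best-case speedup when a fraction of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

print(round(speedup(0.0, 10), 2))    # perfectly scalable: 10.0
print(round(speedup(0.1, 10), 2))    # 10% serial work: only ~5.26
print(round(speedup(0.1, 1000), 2))  # adding far more CPUs barely helps: ~9.91
```

This is why the text's "perfectly scalable" 10x case requires that the sub-jobs be fully independent: even a 10% serial portion (e.g. contention on one shared database) caps the speedup at 10x regardless of how many grid machines are added.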
#24
You all make my M.Tech research work so much easier. Thank you all from the bottom of my heart!

Really admirable work done by you all.
#25
Presented by:
Priyanka Sainik

Grid computing is a term referring to the combination of computer resources from multiple administrative domains to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. What distinguishes grid computing from conventional high performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Although a grid can be dedicated to a specialized application, it is more common that a single grid will be used for a variety of different purposes. Grids are often constructed with the aid of general-purpose grid software libraries known as middleware.
Grid size can vary by a considerable amount. Grids are a form of distributed computing whereby a “super virtual computer” is composed of many networked loosely coupled computers acting together to perform very large tasks. Furthermore, “distributed” or “grid” computing, in general, is a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.
