Abstract
Cloud computing offers utility-oriented IT services to users worldwide. It enables the hosting of applications from consumer, scientific, and business domains. However, the data centers hosting cloud computing applications consume huge amounts of energy, contributing to high operational costs and large carbon footprints. With energy shortages and global climate change among our leading concerns today, the power consumption of data centers has become a key issue. We therefore need green cloud computing solutions that can not only save energy but also reduce operational costs. A vision for energy-efficient management of cloud computing environments is presented here, along with a green scheduling algorithm that works by powering down servers when they are not in use.
Introduction
In 1969, Leonard Kleinrock, one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET) which seeded the Internet, said: "As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country." This vision of computing utilities based on a service provisioning model anticipated the massive transformation of the entire computing industry in the 21st century, whereby computing services will be readily available on demand, like other utility services available in today's society. Similarly, users (consumers) need to pay providers only when they access the computing services. In addition, consumers no longer need to invest heavily or encounter difficulties in building and maintaining complex IT infrastructure.
In such a model, users access services based on their requirements without regard to where the services are hosted. This model has been referred to as utility computing, or recently as Cloud computing. The latter term denotes the infrastructure as a "Cloud" from which businesses and users can access applications as services from anywhere in the world on demand. Hence, Cloud computing can be classified as a new paradigm for the dynamic provisioning of computing services supported by state-of-the-art data centers that usually employ Virtual Machine (VM) technologies for consolidation and environment isolation purposes. Many computing service providers including Google, Microsoft, Yahoo, and IBM are rapidly deploying data centers in various locations around the world to deliver Cloud computing services.
Cloud computing delivers infrastructure, platform, and software (applications) as services, which are made available to consumers as subscription-based services under the pay-as-you-go model. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) respectively. A recent Berkeley report stated “Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service”.
Clouds aim to drive the design of the next generation data centers by architecting them as networks of virtual services (hardware, database, user-interface, application logic) so that users can access and deploy applications from anywhere in the world on demand at competitive costs depending on their QoS (Quality of Service) requirements.
Need for Cloud Computing
The need for cloud computing can be explained with the help of an example. The following graph shows the number of users who log on to the Australian Open web page.
The spikes correspond to the month of January, during which the tournament is played. The site remains almost dormant during the rest of the year. It would be wasteful to maintain servers that can cater to the peak demand, as they won't be needed for the rest of the year. The concept of cloud computing comes to the rescue here: during the peak period, cloud providers such as Google, Yahoo, and Microsoft can be approached to provide the necessary server capacity.
In this case, infrastructure is provided as a service (IaaS) through cloud computing. Likewise, cloud providers can be approached for obtaining software or a platform as a service. Developers with innovative ideas for new Internet services no longer require large capital outlays in hardware to deploy their service, or human expense to operate it. Cloud computing offers significant benefits to IT companies by freeing them from the low-level task of setting up basic hardware and software infrastructures, thus enabling them to focus on innovation and creating business value for their services.
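To make the saving concrete, the toy calculation below compares owning enough servers for the January peak all year against renting the extra capacity from an IaaS provider only for the peak month. All figures (the server counts and the $0.10 per server-hour price) are illustrative assumptions, not values taken from the text.

# Minimal sketch (hypothetical numbers): compare owning peak capacity all year
# versus renting extra IaaS capacity only during the one peak month.

HOURS_PER_MONTH = 730

def yearly_cost_owned(peak_servers, cost_per_server_hour):
    """Own peak capacity and keep it powered for all 12 months."""
    return peak_servers * cost_per_server_hour * HOURS_PER_MONTH * 12

def yearly_cost_elastic(base_servers, peak_servers, cost_per_server_hour):
    """Run a small base load all year and rent the extra servers for one peak month."""
    base = base_servers * cost_per_server_hour * HOURS_PER_MONTH * 12
    burst = (peak_servers - base_servers) * cost_per_server_hour * HOURS_PER_MONTH
    return base + burst

# Assumed figures: 200 servers needed in January, 10 the rest of the year,
# $0.10 per server-hour (all illustrative).
print(yearly_cost_owned(200, 0.10))        # ~ $175,200 per year
print(yearly_cost_elastic(10, 200, 0.10))  # ~  $22,630 per year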
Green Computing
Green computing is defined as "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—such as monitors, printers, storage devices, and networking and communications systems—efficiently and effectively with minimal or no impact on the environment." The goals of green computing are similar to those of green chemistry: reduce the use of hazardous materials, maximise energy efficiency during the product's lifetime, and promote the recyclability or biodegradability of defunct products and factory waste. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.
There are several approaches to green computing, namely:
• Product longevity
• Algorithmic efficiency
• Resource allocation
• Virtualisation
• Power management
Need for green computing in clouds
Modern data centers, operating under the Cloud computing model, host a variety of applications ranging from those that run for a few seconds (e.g. serving requests of web applications such as e-commerce and social network portals with transient workloads) to those that run for longer periods of time (e.g. simulations or large data set processing) on shared hardware platforms. The need to manage multiple applications in a data center creates the challenge of on-demand resource provisioning and allocation in response to time-varying workloads. Normally, data center resources are statically allocated to applications, based on peak load characteristics, in order to maintain isolation and provide performance guarantees. Until recently, high performance has been the sole concern in data center deployments, and this demand has been fulfilled without paying much attention to energy consumption. The average data center consumes as much energy as 25,000 households [20]. As energy costs increase while availability dwindles, there is a need to shift the focus of data center resource management from pure performance to energy efficiency while maintaining high service-level performance. According to certain reports, the total energy bill for data centers in 2010 was estimated at $11.5 billion, and energy costs in a typical data center double every five years.
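A small sketch of what the "doubles every five years" figure implies, taking the $11.5 billion estimate for 2010 as the starting point; the projection horizon and the assumption that the trend continues unchanged are illustrative only.

# Illustration of the "energy costs double every five years" claim from the text.
# Starting point: the $11.5 billion estimate for 2010; the 20-year horizon is arbitrary.
def projected_energy_cost(base_cost, years, doubling_period=5):
    return base_cost * 2 ** (years / doubling_period)

for years in (0, 5, 10, 15, 20):
    print(2010 + years, round(projected_energy_cost(11.5, years), 1), "billion USD")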
Data centers are not only expensive to maintain but also unfriendly to the environment. Data centers now produce more carbon emissions than both Argentina and the Netherlands. High energy costs and huge carbon footprints are incurred due to the massive amounts of electricity needed to power and cool the numerous servers hosted in these data centers. Cloud service providers need to adopt measures to ensure that their profit margin is not dramatically reduced by high energy costs. For instance, Google, Microsoft, and Yahoo are building large data centers in barren desert land surrounding the Columbia River, USA to exploit cheap and reliable hydroelectric power. There is also increasing pressure from governments worldwide to reduce carbon footprints, which have a significant impact on climate change. For example, the Japanese government has established the Japan Data Center Council to address the soaring energy consumption of data centers. Leading computing service providers have also recently formed a global consortium known as The Green Grid to promote energy efficiency for data centers and minimise their environmental impact.
Lowering the energy usage of data centers is a challenging and complex issue because computing applications and data are growing so quickly that increasingly larger servers and disks are needed to process them fast enough within the required time period. Green Cloud computing is envisioned to achieve not only efficient processing and utilisation of computing infrastructure, but also to minimise energy consumption. This is essential for ensuring that the future growth of Cloud computing is sustainable. Otherwise, Cloud computing with increasingly pervasive front-end client devices interacting with back-end data centers will cause an enormous escalation of energy usage. To address this problem, data center resources need to be managed in an energy-efficient manner to drive Green Cloud computing. In particular, Cloud resources need to be allocated not only to satisfy QoS requirements specified by users via Service Level Agreements (SLA), but also to reduce energy usage.
Architecture of a green cloud computing platform
Figure 2 shows the high-level architecture for supporting energy-efficient service allocation in Green Cloud computing infrastructure. There are basically four main entities involved:
a) Consumers/Brokers: Cloud consumers or their brokers submit service requests from anywhere in the world to the Cloud. It is important to notice that there can be a difference between Cloud consumers and users of deployed services. For instance, a consumer can be a company deploying a Web application, which presents varying workload according to the number of users accessing it.
b) Green Resource Allocator: Acts as the interface between the Cloud infrastructure and consumers. It requires the interaction of the following components to support energy-efficient resource management:
• Green Negotiator: Negotiates with the consumers/brokers to finalize the SLA with specified prices and penalties (for violations of the SLA) between the Cloud provider and consumer depending on the consumer’s QoS requirements and energy saving schemes. In the case of Web applications, for instance, a QoS metric can be that 95% of requests are served in less than 3 seconds.
• Service Analyser: Interprets and analyses the service requirements of a submitted request before deciding whether to accept or reject it. Hence, it needs the latest load and energy information from VM Manager and Energy Monitor respectively.
• Consumer Profiler: Gathers specific characteristics of consumers so that important consumers can be granted special privileges and prioritised over other consumers.
• Pricing: Decides how service requests are charged to manage the supply and demand of computing resources and facilitate in prioritising service allocations effectively.
• Energy Monitor: Observes and determines which physical machines to power on/off.
• Service Scheduler: Assigns requests to VMs and determines resource entitlements for allocated VMs. It also decides when VMs are to be added or removed to meet demand.
• VM Manager: Keeps track of the availability of VMs and their resource entitlements. It is also in charge of migrating VMs across physical machines.
• Accounting: Maintains the actual usage of resources by requests to compute usage costs. Historical usage information can also be used to improve service allocation decisions.
c) VMs: Multiple VMs can be dynamically started and stopped on a single physical machine to meet accepted requests, hence providing maximum flexibility to configure various partitions of resources on the same physical machine to different specific requirements of service requests. Multiple VMs can also concurrently run applications based on different operating system environments on a single physical machine. In addition, by dynamically migrating VMs across physical machines, workloads can be consolidated and unused resources can be put into a low-power state, turned off, or configured to operate at low-performance levels (e.g., using DVFS) in order to save energy.
d) Physical Machines: The underlying physical computing servers provide hardware infrastructure for creating virtualised resources to meet service demands.
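The sketch below is one hypothetical way these entities could interact in code: a request is admitted only if it fits on an already running machine, otherwise a powered-off machine is woken up, and requests that fit nowhere are rejected. All class names, fields, and the admission rule are invented for illustration and are not taken from an actual implementation of this platform.

# Hypothetical sketch of the entities in Figure 2; names and interfaces are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class PhysicalMachine:
    name: str
    capacity: int             # resource units (e.g. CPU shares)
    used: int = 0
    powered_on: bool = True

@dataclass
class ServiceRequest:
    consumer: str
    demand: int               # resource units requested
    max_response_time: float  # QoS target from the SLA, e.g. 3.0 seconds

class GreenResourceAllocator:
    """Admits requests onto already-running machines first, waking up an
    additional machine only when unavoidable (Energy Monitor's view)."""

    def __init__(self, machines):
        self.machines = machines

    def allocate(self, request: ServiceRequest):
        # Prefer machines that are already powered on.
        candidates = sorted(self.machines, key=lambda m: not m.powered_on)
        for m in candidates:
            if m.capacity - m.used >= request.demand:
                m.powered_on = True
                m.used += request.demand
                return m          # Service Scheduler: start a VM here
        return None               # Service Analyser: reject the request

hosts = [PhysicalMachine("host-1", 100), PhysicalMachine("host-2", 100, powered_on=False)]
allocator = GreenResourceAllocator(hosts)
print(allocator.allocate(ServiceRequest("web-app", 40, 3.0)).name)  # host-1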
Making cloud computing more green
Mainly three approaches have been tried to make cloud computing environments more environmentally friendly. These approaches have been tried out in data centers under experimental conditions, and their practical application is still under study. The methods are:
Dynamic Voltage and Frequency Scaling (DVFS): Every electronic circuit has an operating clock associated with it. The operating frequency of this clock is adjusted, and the supply voltage is regulated along with it, so that power consumption matches the current load. This method therefore depends heavily on the hardware and is not controllable according to varying needs. The power savings are also low compared to the other approaches, as is the ratio of power savings to cost incurred.
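The usual way to reason about DVFS is through the dynamic power model of CMOS circuits, P ≈ C·V²·f: lowering the clock frequency allows the supply voltage to be lowered too, so power falls faster than performance. The constants in the sketch below are illustrative, not measurements of real hardware.

# Common dynamic-power model for CMOS circuits: P = C * V^2 * f.
# Constants below are illustrative; real values depend on the hardware.
def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage ** 2 * frequency

full   = dynamic_power(2e-8, 1.2, 2.6e9)  # full speed: 1.2 V at 2.6 GHz
scaled = dynamic_power(2e-8, 0.9, 1.4e9)  # scaled down: 0.9 V at 1.4 GHz
print(f"power at full speed: {full:.1f} W, scaled: {scaled:.1f} W "
      f"({100 * scaled / full:.0f}% of full)")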
Resource allocation or virtual machine migration techniques: In a cloud computing environment, every physical machine hosts a number of virtual machines upon which the applications run. These virtual machines can be transferred across hosts according to varying needs and available resources. The VM migration method focuses on transferring VMs in such a way that the increase in power consumption is least: the most power-efficient nodes are selected and the VMs are transferred to them. This method is dealt with in more detail later.
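A minimal sketch of this idea follows: each VM is placed on the host whose estimated power draw would increase the least. It assumes a simple linear power model with idle power at 70% of peak (consistent with the figure quoted in the next paragraph); the host data is invented for illustration.

# Minimal sketch of power-aware VM placement: put each VM on the host whose
# estimated power draw increases the least. Linear power model assumed, with
# idle power at 70% of peak.

IDLE_FRACTION = 0.7

def host_power(peak_watts, utilisation):
    """Estimated power at a given utilisation (0.0 - 1.0)."""
    return peak_watts * (IDLE_FRACTION + (1 - IDLE_FRACTION) * utilisation)

def place_vm(vm_load, hosts):
    """hosts: list of dicts with 'name', 'peak_watts', 'capacity', 'used'."""
    best, best_increase = None, float("inf")
    for h in hosts:
        if h["used"] + vm_load > h["capacity"]:
            continue  # VM does not fit on this host
        before = host_power(h["peak_watts"], h["used"] / h["capacity"])
        after = host_power(h["peak_watts"], (h["used"] + vm_load) / h["capacity"])
        if after - before < best_increase:
            best, best_increase = h, after - before
    if best is not None:
        best["used"] += vm_load
    return best

hosts = [
    {"name": "A", "peak_watts": 250, "capacity": 100, "used": 60},
    {"name": "B", "peak_watts": 400, "capacity": 100, "used": 10},
]
print(place_vm(20, hosts)["name"])  # "A": the smaller power increase for this VM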
Algorithmic approaches: It has been experimentally determined that an idle server consumes about 70% of the power used by a fully utilised server (see figure 3).
Using a neural network predictor, the green scheduling algorithm first estimates the dynamic workload that the servers will need to handle. Unnecessary servers are then turned off in order to minimise the number of running servers, thus minimising the energy use at the points of consumption and providing benefits to all other levels. Several servers are also added back when needed to help assure the service-level agreement. The bottom line is to protect the environment and to reduce the total cost of ownership while ensuring quality of service.
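A minimal sketch of this scheduling loop is given below. A simple moving-average predictor stands in for the neural network mentioned above, and the per-server capacity, headroom, and workload history are assumptions made for illustration.

# Minimal sketch of the green scheduling loop: predict the workload, keep just
# enough servers on (plus headroom for the SLA), and power the rest down.
import math

REQUESTS_PER_SERVER = 100   # assumed per-server capacity
SLA_HEADROOM = 1            # extra servers kept on to absorb prediction error
TOTAL_SERVERS = 20

def predict_workload(history, window=3):
    """Toy predictor: average of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def servers_needed(predicted_requests):
    return min(TOTAL_SERVERS,
               math.ceil(predicted_requests / REQUESTS_PER_SERVER) + SLA_HEADROOM)

history = [420, 480, 510]   # observed requests per interval (illustrative)
prediction = predict_workload(history)
active = servers_needed(prediction)
print(f"predicted load: {prediction:.0f} req/s -> keep {active} servers on, "
      f"power off {TOTAL_SERVERS - active}")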