10-03-2012, 04:40 PM
Studying and Evaluating Cloud Computing Services
INTRODUCTION
Cloud computing presents tangible benefits to businesses. These benefits can be both immediate and wide-ranging, from reductions in total cost of ownership to location independence. However, along with these potential benefits comes a new set of concerns: security, privacy, availability, performance and integrity. Suitable testing must be at the core of any Cloud solution to ensure the delivery of a safe, integrated solution that meets the needs of the business it is to serve.
The purpose of this Seminar is to understand the benefits and concerns of a Cloud Computing solution and how suitable testing can aid in realizing the full potential of your investment.
1.1 SaaS: Software-as-a-Service
Typically, Software as a Service (SaaS) is considered a type of cloud computing. Software is hosted centrally rather than on local machines, and is presented to the user on an ‘on demand’ basis, usually by means of virtualization. Central control of the application is retained, allowing for reductions in licensing, implementation and ongoing maintenance costs. The delivery route in this instance is the ‘Cloud’, this being the general term for the Internet. The term ‘Cloud’ is used to describe networks and infrastructure which are not visible to the user: a potentially huge network black box.
1.2 PaaS: Platform-as-a-Service
Another common example of Cloud Computing is Platform as a Service (PaaS). PaaS can be considered the next step in the SaaS model, where what is delivered on demand is not simply a specific item of software but the user’s entire platform, thus allowing centralized control of the usage of each machine on the PaaS network. Again, the delivery route in this model is the ‘Cloud.’
1.3 IaaS: Infrastructure-as-a-Service
Virtualization technologies have also facilitated the realization of new models such as Cloud Computing and IaaS. The main idea is to supply users with on-demand access to computing or storage resources and charge fees for their usage. In these models, users pay only for the resources they utilize. A key provider of this type of on-demand infrastructure is Amazon Inc. with its Elastic Compute Cloud (EC2). EC2 allows users to deploy VMs on Amazon's infrastructure, which is composed of several data centers located around the world. To use Amazon's infrastructure, users deploy instances of pre-submitted VM images or upload their own VM images to EC2. The EC2 service utilizes the Amazon Simple Storage Service (S3), which aims at providing users with a globally accessible storage system. S3 stores the users' VM images and, like EC2, applies fees based on the size of the data and the storage time [2].
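The pay-per-use billing described above can be sketched as a small calculation. The rates below are hypothetical placeholders for illustration only, not Amazon's actual prices; the point is that the fee is a function of resources consumed (instance-hours for compute, size multiplied by storage time for S3-style storage).

```python
# Hypothetical rates, for illustration only -- not real Amazon prices.
EC2_RATE_PER_VM_HOUR = 0.10    # dollars per VM instance-hour
S3_RATE_PER_GB_MONTH = 0.03    # dollars per GB stored per month

def compute_cost(instances: int, hours: float) -> float:
    """Usage fee for running `instances` VMs for `hours` each."""
    return instances * hours * EC2_RATE_PER_VM_HOUR

def storage_cost(size_gb: float, months: float) -> float:
    """Storage fee based on data size and storage time, as with S3."""
    return size_gb * months * S3_RATE_PER_GB_MONTH

# Example: 4 VMs for 10 hours each, plus a 5 GB VM image stored for 2 months.
total = compute_cost(4, 10) + storage_cost(5, 2)
```

Because the user is billed only for what is consumed, idle capacity costs nothing; this is the "pay only for the resources they utilize" property in its simplest form.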
LITERATURE SURVEY
The Cloud is a term that borrows from telephony. Up until the ’90s, data circuits (including those that carried Internet traffic) were hard-wired between destinations. In the ’90s, long-haul telephone companies began offering Virtual Private Network (VPN) service for data communications. The telephone companies were able to offer these VPN-based services with the same guaranteed bandwidth as fixed circuits at a lower cost because they retained the ability to switch traffic to balance utilization as they saw fit, thus using their overall network bandwidth more effectively. As a result of this arrangement, it was impossible to determine in advance precisely what path traffic would take. The term "telecom cloud" was used to describe this type of networking. Cloud Computing is very similar. Cloud computing relies heavily on virtual machines (VMs) that are spawned on demand to meet the user’s needs. Because these virtual instances are spawned on demand, it is impossible to determine how many such VMs will be running at any given time. As these VMs can be spawned on any given computer as conditions demand, they are location-non-specific as well, much like a cloud network. A common depiction in network diagrams is a cloud outline.
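The on-demand, location-non-specific behaviour described above can be illustrated with a toy scheduler. The host names and placement policy are invented for this sketch; real providers use far more elaborate placement logic, but the essential property is the same: callers ask for a VM and cannot predict in advance which host it will land on or how many VMs will exist at any moment.

```python
import itertools

class CloudScheduler:
    """Toy model: VMs are spawned on demand and placed on whichever
    host has spare capacity, so their count and location are not fixed."""

    def __init__(self, hosts):
        self.capacity = dict(hosts)      # host name -> free VM slots
        self.placements = {}             # VM id -> host it landed on
        self._ids = itertools.count(1)   # monotonically increasing VM ids

    def spawn(self):
        # Pick the host with the most free slots; callers never choose.
        host = max(self.capacity, key=self.capacity.get)
        if self.capacity[host] == 0:
            raise RuntimeError("no capacity left in the cloud")
        self.capacity[host] -= 1
        vm_id = next(self._ids)
        self.placements[vm_id] = host
        return vm_id

# Demand arrives: three VMs are spawned across two hypothetical hosts.
sched = CloudScheduler({"host-a": 2, "host-b": 1})
vms = [sched.spawn() for _ in range(3)]
```

From the user's perspective only the VM id matters; the mapping in `placements` is hidden inside the "cloud", exactly as the traffic paths were hidden inside the telecom cloud.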
3.2 Implementation of File System
The file system is the key component of the system to support massive data storage and management. The designed TFS is a scalable, distributed file system; each TFS cluster consists of a single master and multiple chunk servers and can be accessed by multiple clients.
3.3 TFS Architecture
In TFS, files are divided into variable-size chunks. Each chunk is identified by an immutable and globally unique 64-bit chunk handle assigned by the master at the time of chunk creation. Chunk servers store the chunks on their local disks and read/write chunk data specified by a chunk handle and byte range. For data reliability, each chunk is replicated on multiple chunk servers. By default, three replicas are maintained in the system, though users can designate different replication levels for different files. The master maintains the metadata of the file system, which includes the namespace and access control information.
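The master's bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not the actual TFS implementation: chunk-server names and the naive placement policy (first N servers) are assumptions, but it shows the master assigning immutable 64-bit chunk handles at creation time and recording which chunk servers hold each replica, with three replicas by default.

```python
import itertools

DEFAULT_REPLICATION = 3   # default replica count per chunk, per the text

class Master:
    """Minimal sketch of the master's metadata: namespace plus a map
    from each 64-bit chunk handle to its replica locations."""

    def __init__(self, chunk_servers):
        self.chunk_servers = list(chunk_servers)
        self._handles = itertools.count(1)   # handles are never reused
        self.namespace = {}    # file path -> ordered list of chunk handles
        self.locations = {}    # chunk handle -> chunk servers holding replicas

    def create_chunk(self, path, replication=DEFAULT_REPLICATION):
        # Assign an immutable, globally unique handle, kept within 64 bits.
        handle = next(self._handles) & 0xFFFFFFFFFFFFFFFF
        # Naive placement for the sketch: the first `replication` servers.
        replicas = self.chunk_servers[:replication]
        self.namespace.setdefault(path, []).append(handle)
        self.locations[handle] = replicas
        return handle

master = Master(["cs1", "cs2", "cs3", "cs4"])
h = master.create_chunk("/data/file1")
```

A client would then contact the servers listed in `locations[h]` directly, addressing reads and writes by chunk handle and byte range; the master stays out of the data path.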