Dynamic Data Reallocation in Hybrid Disk Arrays
A Seminar Report
by
Simy Antony
M105116
Department of Computer Science & Engineering
College of Engineering Trivandrum
Kerala - 695016
2010-11



Abstract
Current disk arrays consist purely of hard disk drives, which normally provide huge storage capacities with low cost and high throughput for data-intensive applications. Nevertheless, they have some inherent disadvantages, such as long access latencies and energy inefficiency, due to their built-in mechanical mechanisms. Flash-memory-based solid state disks, on the other hand, although currently more expensive and limited in write cycles, offer much faster random read accesses and are much more robust and energy efficient. To combine the complementary merits of hard disks and flash disks, this paper proposes a hybrid disk array architecture named hybrid disk storage (HIT) for data-intensive applications. Next, a dynamic data redistribution strategy called performance, energy, and reliability balanced (PEARL), which can periodically redistribute data between flash disks and hard disks to adapt to changing data access patterns, is developed on top of the HIT architecture. Comprehensive simulations using real-life traces demonstrate that, compared with existing data placement techniques, PEARL exhibits strength in both performance and energy consumption without impairing flash disk reliability.
Keywords: Allocation strategies, reliability, secondary storage.
1 Introduction
The data placement problem, or file assignment problem (FAP), is the problem of allocating data (e.g., a set of files) onto multiple disks prior to serving I/O requests so that some cost functions or performance metrics can be optimized; it has been extensively investigated in past years. While common cost functions are communication costs, storage costs, and queuing costs, popular performance metrics include mean response time and overall system throughput. Existing algorithms for the data placement problem can generally be divided into two categories: static and dynamic. Static data placement algorithms require prior knowledge of workload characteristics, including the service time of each I/O request and the access rate of each file. In addition, they demand that all files be allocated at the same time. Dynamic data placement algorithms, on the other hand, do not need any prior statistics of workload features. Also, they can allocate dynamically created files. The principal idea of dynamic data placement algorithms is to use historic information about arrived files and their recorded access characteristics to make a good allocation for each arriving file so that load balancing among multiple disks can be maintained. Since they do not possess global information about data and their access statistics, their performance is understandably inferior to that of their static brethren.
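
To make the dynamic approach concrete, the following Python sketch assigns each arriving file to the disk with the least accumulated load, using recorded access rates as the load estimate. It is a minimal illustration of the greedy load-balancing idea described above, not the algorithm of any particular paper; all names are illustrative.

# Minimal sketch of dynamic (online) file placement: each arriving
# file goes to the disk with the lowest accumulated "heat" (the sum
# of the observed access rates of the files already placed there).

def place_file(disk_heat, file_access_rate):
    """Return the index of the least-loaded disk and update its heat."""
    target = min(range(len(disk_heat)), key=lambda d: disk_heat[d])
    disk_heat[target] += file_access_rate
    return target

heat = [0.0, 0.0, 0.0]             # three disks, all initially idle
for rate in [5.0, 2.0, 4.0, 1.0]:  # access rates of files as they arrive
    print(place_file(heat, rate))  # prints 0, 1, 2, 1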
Current disk arrays consist purely of energy-inefficient hard disk drives. Hence, a novel storage architecture that not only offers high performance but also saves energy is needed. Originally, flash memory was devised as a storage element for consumer electronic devices such as digital cameras, cellular telephones, and personal digital assistants. Current flash-memory-assisted hard disk storage systems are mainly proposed for mobile platforms like personal laptops or embedded systems. The major purpose of using flash memory there is to save energy, which is critical in a mobile computing environment or for an embedded system. Essentially, these mixed flash memory and hard disk storage systems only take flash memory as an extra layer of cache buffer. Very recently, flash-memory-based solid state disks (hereafter, flash disks) started to replace traditional hard disk drives in server-class storage systems in enterprises such as Google and Baidu.
Flash disks have the following apparent advantages, which make them ideal storage devices for enterprise applications. First, they inherently consume much less energy than hard disks with their mechanical mechanisms. Second, due to their solid-state design, they are free of mechanical movements and thus have enhanced reliability. Third, they offer much faster random access by eliminating seek time delays and rotational latencies. Still, on-market flash disks also have some obvious disadvantages, such as small capacity, slow random write speeds, and limited erasure cycles. Therefore, it is wise to integrate small-capacity flash disks with high-capacity hard disk drives to form a hybrid disk array so that enterprise applications can benefit from their complementary merits. To this end, this paper proposes a novel hybrid disk storage (HIT) architecture for next-generation server-class disk arrays. As for performance and reliability, a new data management scheme is needed to judiciously utilize the complementary merits of flash disks and hard disks so that storage systems for data-intensive applications like online transaction processing (OLTP) can achieve a high level of performance and reliability.
This paper depicts the dynamic data redistribution problem in the context of the new HIT architecture. An innovative dynamic data redistribution strategy called performance, energy, and reliability balanced (PEARL), which periodically redistributes a set of data based on their access characteristics and the distinct features of hard disks and flash disks, is developed on top of the HIT architecture. Considering both data access characteristics and the features of the different types of disks (flash or hard), PEARL intelligently redistributes data to its right place (a flash disk or a hard disk), where the requests targeting it can be served efficiently.

Figure 1: The HIT architecture
2 RELATED WORK
2.1 Dynamic Data Redistribution
Compared with numerous static data placement algorithms, only very few investigations of the dynamic data allocation and redistribution (or reallocation) problem have been accomplished. Arnan et al. observed that when several workloads on the same disk are concurrently active, the disk head has to seek back and forth between their respective locations. Obviously, frequent disk head seeking significantly increases the response times of I/O requests. To alleviate this problem in multidisk systems, they proposed a data reallocation approach, which separates interfering workloads by moving one or several of the workloads to other disks where they will cause less interference. The data reallocation activities are guided by a conservative interference estimation model named the independent reference model (IRM). Their algorithm concentrates on reducing seek times without taking dynamically changing access patterns into account.
Scheuermann et al. proposed an array of heuristic algorithms for dynamic data redistribution
by taking access pattern changes into consideration. The basic idea of their algorithms is to
minimize the queuing delays by distributing the load across the disks as evenly as possible and
by selectively redistributing the load dynamically through a technique called disk-cooling.
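
The following is a minimal Python sketch of the disk-cooling idea: when load is skewed, the hottest fragment on the hottest disk migrates to the coolest disk. Scheuermann et al.'s actual heuristics are more elaborate; the data layout and bookkeeping here are simplifications of our own.

# Disk-cooling sketch: migrate the hottest fragment from the hottest
# disk to the coolest disk. Each disk is a dict of fragment -> heat.

def disk_cooling_step(disks):
    load = [sum(frags.values()) for frags in disks]
    hot, cool = load.index(max(load)), load.index(min(load))
    if hot == cool or not disks[hot]:
        return None                             # nothing to cool
    frag = max(disks[hot], key=disks[hot].get)  # hottest fragment
    disks[cool][frag] = disks[hot].pop(frag)    # migrate it
    return (frag, hot, cool)

disks = [{"a": 9.0, "b": 3.0}, {"c": 1.0}]
print(disk_cooling_step(disks))  # ('a', 0, 1): fragment a moves to disk 1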
2.2 Flash Disk: A Buddy of Hard Disk
NAND flash memory is traditionally used as an extra cache buffer. For example, a hybrid hard disk model, which embeds flash memory chips into a hard disk to make a hybrid disk, was proposed by Microsoft. It takes flash memory chips as on-board memory buffers. Another typical example is SmartSaver, a disk energy-saving scheme for mobile computers proposed by Chen et al. This scheme uses a flash drive as a standby buffer for caching and prefetching data to serve requests without disturbing disks. With the advances of flash memory technology, flash disks are being integrated into personal computers and even enterprise-level applications. Kim et al. developed an energy-efficient file placement technique named pattern-based PDC (PB-PDC), which adapts the existing Popular Data Concentration (PDC) algorithm by separating read and write I/O requests. PB-PDC places all read-only data on a flash drive while it puts all the rest on a hard disk. The PB-PDC technique only considered one flash drive with a single hard disk in a mobile laptop computing environment. In addition, it did not take changing workload patterns into account.
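
In code, PB-PDC's placement rule reduces to a simple predicate; the sketch below (Python, our own illustration) captures the read/write separation just described.

# PB-PDC placement rule as described above: read-only data goes to the
# flash drive, anything that is ever written stays on the hard disk.

def pb_pdc_target(reads, writes):
    return "flash" if writes == 0 and reads > 0 else "hard"

print(pb_pdc_target(reads=120, writes=0))  # flash
print(pb_pdc_target(reads=80, writes=5))   # hard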
2.3 Basic Idea
The basic idea of the HIT architecture is to construct enterprise-level disk arrays using both small-capacity nonvolatile solid-state flash disks and traditional server-class hard disk drives. Each flash disk is coupled with its corresponding buddy hard disk to form a flash-hard disk pair. Meanwhile, all hard disks form a hard disk array organized in some RAID structure. The number of flash disks in the flash disk array is equal to the number of hard disks in the hard disk array.

Compared with existing data placement techniques that also employ both hard disks and flash devices, PEARL has some desirable features. First, it can be applied in a variety of applications such as Web servers and OLTP. Second, it targets a disk array-based server-class storage system rather than a single flash device. PEARL first divides the hard disk array into multiple data zones (hereafter, zones). Each zone is a contiguous data area that contains the same number of blocks, with each block being 512 bytes. PEARL then monitors the access pattern of each zone. Data can be dynamically created or deleted. In addition, the access pattern of each zone can vary over time. Initially, all data, including newly created data, are distributed across the hard disk array in a RAID-X manner (e.g., X could be 0 or 5; see Fig. 1).
At the end of each epoch, after obtaining statistics on each zone's access pattern, PEARL first separates all zones into three categories: write-excessive, read-exclusive, and read-write. If the frequency of a zone's write accesses exceeds the flash disk write cycle threshold value, it is defined as a write-excessive zone and stays on the hard disk array. All zones that do not belong to the write-excessive category are further divided into two groups: read-exclusive and read-write. Zones with both read and write accesses are in the read-write group, whereas zones with read-only accesses go into the read-exclusive group. Next, PEARL selects from the read-exclusive and read-write groups a set of zones that is appropriate for allocation on the flash disk array, based on each zone's popularity and its performance energy trade-off parameter. It then reallocates these zones onto the flash disks. When the data access pattern changes, PEARL redistributes zones between the flash disk array and the hard disk array accordingly.
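
A minimal Python sketch of this epoch-end classification is given below. The write cycle threshold value is a placeholder of our choosing; the report does not state a concrete number.

# PEARL's epoch-end zone classification as described above.
# WRITE_THRESHOLD stands in for the flash disk write cycle threshold;
# its value is hypothetical, chosen only for illustration.

WRITE_THRESHOLD = 100.0  # writes per epoch (placeholder value)

def classify_zone(reads, writes):
    if writes > WRITE_THRESHOLD:
        return "write-excessive"  # stays on the hard disk array
    if writes == 0:
        return "read-exclusive"   # candidate for the flash disk array
    return "read-write"           # decided by the performance energy trade-off

print(classify_zone(reads=50, writes=500))  # write-excessive
print(classify_zone(reads=300, writes=0))   # read-exclusive
print(classify_zone(reads=200, writes=20))  # read-write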
3 HIT ARCHITECTURE
There are two types of flash memory: NAND flash memory and NOR flash memory. Also, there are two options when one implements a flash-memory-based storage system: emulating a flash disk as a block device like a hard disk, or designing a brand new native file system directly over flash disks.
In order to integrate a flash disk into an existing storage system, two important layers of software modules that sit between the file system and the flash disk are indispensable: the memory technology device (MTD) driver and the flash translation layer (FTL) driver. Lower level functions of a storage medium such as read, write, and erase are provided by the MTD driver. Supported by the underlying lower level functions offered by the MTD driver, the FTL driver implements higher level algorithms like wear leveling, garbage collection, and physical/logical address translation. With the assistance of the FTL driver, the file system can access the flash disk as if it were a hard disk, without being aware of the existence of the flash disk.
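
The following toy Python sketch illustrates this division of labor: the file system issues reads and writes by logical block address, and the FTL remaps every write to a fresh physical page (an out-of-place update), hiding flash erase semantics. Real FTLs also perform wear leveling and garbage collection, both omitted here; the class and its interface are our own invention for illustration.

# Toy FTL: logical-to-physical address translation over a flash page
# store (which stands in for the MTD layer). Updates are out-of-place.

class ToyFTL:
    def __init__(self, num_pages):
        self.l2p = {}                  # logical address -> physical page
        self.free = list(range(num_pages))
        self.flash = {}                # physical page -> data

    def write(self, logical, data):
        phys = self.free.pop(0)        # always program a fresh page
        self.flash[phys] = data
        old = self.l2p.get(logical)
        self.l2p[logical] = phys
        if old is not None:
            self.free.append(old)      # old page reclaimable (GC omitted)

    def read(self, logical):
        return self.flash[self.l2p[logical]]

ftl = ToyFTL(num_pages=8)
ftl.write(0, b"hello")
ftl.write(0, b"world")  # the update lands on a new physical page
print(ftl.read(0))      # b'world'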
Each flash disk cooperates with a hard disk through a dedicated high-bandwidth connection to compose a flash-hard disk pair. The rationale behind the one-to-one disk pair configuration is threefold. First, the equal number of the two types of disks makes balancing load between the hard disk array and the flash disk array easier. Second, it simplifies data redistribution between the two disk arrays. Last but not least, it enhances the storage system's fault tolerance and reliability by reducing disk reconstruction time when a hard disk or a flash disk fails. For example, when a hard disk fails, its partner flash disk can largely help the recovery of the failed hard disk.
In addition, both hard disks and flash disks are directly attached to the system bus. All hard disks are organized in a RAID structure like RAID-0. The hard disk array plus its associated flash disks constructs a hybrid disk array. Since all flash disks are emulated as hard disks, from the hybrid disk array controller's point of view there exist two groups of same-type disks in the hybrid disk array. Within the hybrid disk array controller, data management modules like PEARL are implemented to manage data across the hybrid disk array and the controller caches. The hybrid disk array controller is connected to the storage server host through the host channel. Multiple hybrid disk arrays, each with its own disk array controller, can be connected to the storage server processor simultaneously. PEARL consists of five modules: Data Placement Initializer (DPI), Redistribution Table Generator (RTG), Data reOrganizer (DRO), Access Monitor (AM), and Disk Space Manager (DSM).
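
The report does not specify the modules' interfaces, but their division of labor suggests an epoch cycle along the following lines. This is a hypothetical Python sketch; every name and signature in it is our assumption.

# One PEARL epoch through the named modules. am, rtg, dro, and dsm are
# placeholders for objects whose interfaces the report leaves open.

def pearl_epoch(am, rtg, dro, dsm):
    stats = am.collect()            # Access Monitor: per-zone read/write counts
    drt = rtg.generate(stats, dsm)  # Redistribution Table Generator builds the DRT
    dro.apply(drt)                  # Data reOrganizer migrates zones per the DRT
    # The Disk Space Manager (dsm) tracks free space on both arrays, and
    # the Data Placement Initializer (DPI) runs only once, at system start.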
4 THE PEARL STRATEGY
4.1 Algorithm Description
The PEARL strategy judiciously yet dynamically designates each zone as either flash-favorite or hard-favorite based on its I/O access characteristics. Each zone is then allocated onto its favorite disk array so that the complementary merits of flash disks and hard disks can be mostly utilized while their respective disadvantages can be avoided.
PEARL exploits two critical I/O workload characteristics: data access locality and data access type. For example, it is well known that 10 percent of the files accessed on a Web server account for 90 percent of the server requests and 90 percent of the bytes transferred. The implication of workload locality is that overall system performance can be noticeably improved if the I/O requests on the small percentage of popular data can be served more efficiently. Data access locality thus suggests concentrating on the redistribution of the minority of popular data.
The second important I/O workload characteristic is data access type, namely write-excessive, read-exclusive, and read-write. It is understood that read-exclusive files are suitable for flash disks, as they do not consume any erasure cycles. Further, accessing these read-exclusive files on a flash disk can significantly save energy and gain potential performance enhancement, since seek time and rotational latency no longer apply. Similarly, write-excessive files are more appropriate for hard disks, where the erasure cycle limitation does not apply.
The most difficult task for a data redistribution strategy is to decide where popular read-write zones should go. Unlike existing conservative algorithms such as PB-PDC, which immediately puts all read-write files onto hard disks to avoid any write cycles on flash disks, PEARL adopts a more open attitude and makes a smart decision based on a good trade-off between performance and energy saving.
Fig. 2 shows an illustrative example of data redistribution between hard disks and flash disks using the PEARL strategy. Assume that there are only two hard disks and two flash disks in the hybrid disk array. Further, assume for illustration purposes that the capacity of a flash disk is half that of a hard disk. Initially, data were allocated across the hard disk array in some RAID structure and the flash disks are empty. PEARL divides the hard disk array into four zones, and the size of each zone is 12 blocks. Each block is 512 bytes and has its own Logical Block Address (LBA). While the hard disk array was logically divided into four equal-size zones, the flash disk array was partitioned into two same-size zones as well.

Immediately after time instance 0, the AM starts to monitor the popularity (in terms of number of accesses) of each zone on the hard disk array. Assume that Zone 2 and Zone 3 are the hottest zones during the first epoch and that both are flash-favorite. Thus, at the end of the first epoch, the RTG module generates a data reorganization table (DRT) and hands it to the DRO, which, in turn, transfers the data of Zone 2 and Zone 3 from the hard disk array to the flash disk array. Therefore, all subsequent accesses targeting the data of the two migrated zones are directed to the flash disk array during the second epoch. Similarly, at the end of the second epoch, the AM discovers that the hottest two zones have changed from (Zone 2, Zone 3) to (Zone 2, Zone 4). As a consequence, the DRO first transfers the data in Zone 3 back to the hard disk array, and then transfers the data in Zone 4 to the flash disk array. By periodically refreshing the popular flash-favorite data in the flash disk array, PEARL dynamically separates data into the two disk arrays so that requests on different types of data can be served more efficiently by distinct types of disks.

Figure 2: An illustrative example of data redistribution by PEARL during the first two epochs
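
The Fig. 2 scenario can be replayed with a few lines of Python: at each epoch end, the flash array keeps the two hottest flash-favorite zones and evicts the rest back to the hard disk array. The popularity numbers below are invented to reproduce the example.

# Epoch-end redistribution in the style of Fig. 2: keep the 'capacity'
# hottest zones on flash, evict the others back to the hard disk array.

def redistribute(on_flash, popularity, capacity=2):
    wanted = set(sorted(popularity, key=popularity.get, reverse=True)[:capacity])
    evict = on_flash - wanted   # zones to move back to the hard disks
    load = wanted - on_flash    # zones to move onto the flash disks
    return wanted, evict, load

flash = set()
# End of epoch 1: Zone 2 and Zone 3 are hottest.
flash, evict, load = redistribute(flash, {"Z1": 3, "Z2": 9, "Z3": 8, "Z4": 1})
print(evict, load)  # set() {'Z2', 'Z3'}   (set print order may vary)
# End of epoch 2: the hottest pair shifts to Zone 2 and Zone 4.
flash, evict, load = redistribute(flash, {"Z1": 2, "Z2": 9, "Z3": 1, "Z4": 8})
print(evict, load)  # {'Z3'} {'Z4'}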
4.2 System Models
The set of zones is represented as $Z = \{z_1, \ldots, z_i, \ldots, z_m\}$. The flash disk array is modeled as $FD = \{fd_1, \ldots, fd_j, \ldots, fd_n\}$, whereas the hard disk array is denoted by $HD = \{hd_1, \ldots, hd_j, \ldots, hd_n\}$; $n$ is the number of disks in each array. Let $bl$ denote the size of a block in Mbytes; it is assumed to be constant in the system. A zone $z_i \in Z$ is modeled as a set of rational parameters, i.e., $z_i = (s_i, r_i, w_i)$, where $s_i$ is the zone's size in number of blocks, $r_i$ is the zone's read access rate (1/second), and $w_i$ is the zone's write access rate (1/second). Each hard disk's transfer rate is $t_h$ (Mbyte/second), and its active and idle energy consumption rates are $p_h$ (Watts) and $i_h$ (Watts), respectively. Similarly, each flash disk is modeled as $fd_j = (r_f, w_f, p_f, i_f)$, where $r_f$ is its read rate (Mbyte/second), $w_f$ is its write rate (Mbyte/second), $p_f$ is its read/write power consumption rate (Watts), and $i_f$ is its idle energy consumption rate (Watts). Although flash disks consume different amounts of energy for different operations and data write patterns, a constant active energy consumption rate $p_f$ is adopted here because it can be viewed as an average case. $SK$ denotes the average seek time of a hard disk and $RT$ the average rotational latency of a hard disk. The time span of one epoch is denoted by $T_e$ (seconds). Therefore, the mean service time of a block of data in zone $z_i$ served by a hard disk is

$$mst_i^h = SK + RT + bl/t_h.$$

If the block is served by a flash disk, the mean service time is

$$mst_i^f = \frac{(r_i T_e / s_i)(bl/r_f) + (w_i T_e / s_i)(bl/w_f)}{(r_i + w_i) T_e / s_i} = \frac{bl\,(r_i/r_f + w_i/w_f)}{r_i + w_i},$$

where $r_i T_e$ and $w_i T_e$ are the numbers of reads and writes in the zone in one epoch, respectively. Hence, the performance gain $pg_i$, in terms of the mean service time reduction ratio of zone $z_i$, is

$$pg_i = \frac{mst_i^h}{mst_i^f} = \frac{(SK + RT + bl/t_h)(r_i + w_i)}{bl\,(r_i/r_f + w_i/w_f)}.$$

For each read-write zone, PEARL needs to decide where to store it. It therefore calculates the zone's energy gain $eg_i$ in one epoch, where $ec_i^h$ is the energy consumption of a block in zone $z_i$ in one epoch if it is stored in the hard disk array, and $ec_i^f$ is the energy consumption of that block in one epoch if it is in the flash disk array:

$$eg_i = \frac{ec_i^h}{ec_i^f} = \frac{mst_i^h \cdot p_h \cdot (r_i T_e + w_i T_e)/s_i}{mst_i^f \cdot p_f \cdot (r_i T_e + w_i T_e)/s_i} = \frac{mst_i^h \cdot p_h}{mst_i^f \cdot p_f}.$$

Figure 3: The PEARL strategy

The performance energy ratio of zone $z_i$ is defined as $per_i = (eg_i \ldots$
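
The service time and gain formulas above translate directly into code. In the Python sketch below, all device parameters are placeholder values chosen for illustration, not measured specifications; only $pg_i$ and $eg_i$ are computed.

# mst, pg, and eg exactly as defined above. T_e and s_i cancel out of
# both ratios, so they do not appear. Parameter values are placeholders.

SK, RT = 0.0045, 0.0030  # average seek time and rotational latency (s)
bl = 0.0005              # block size in Mbytes (512 bytes)
t_h = 60.0               # hard disk transfer rate (Mbyte/s)
r_f, w_f = 120.0, 40.0   # flash disk read and write rates (Mbyte/s)
p_h, p_f = 12.0, 2.0     # active power of hard and flash disk (W)

def mst_hard():
    return SK + RT + bl / t_h

def mst_flash(r_i, w_i):
    # weighted by the zone's read/write mix
    return bl * (r_i / r_f + w_i / w_f) / (r_i + w_i)

def perf_gain(r_i, w_i):    # pg_i
    return mst_hard() / mst_flash(r_i, w_i)

def energy_gain(r_i, w_i):  # eg_i
    return (mst_hard() * p_h) / (mst_flash(r_i, w_i) * p_f)

# A read-mostly zone: 200 reads/s and 10 writes/s.
print(perf_gain(200.0, 10.0))    # mean service time reduction ratio pg_i
print(energy_gain(200.0, 10.0))  # energy reduction ratio eg_i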