HADOOP
#1

HEY I NEED COMPLETE REPORT ON THE TOPIC "HADOOP".
#2
Apache Hadoop is a Java software framework that supports data-intensive distributed applications. The Hadoop project develops open-source software for reliable, scalable, distributed computing, and it enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google's MapReduce and Google File System (GFS) papers. The Hadoop Distributed File System (HDFS) stores large files across multiple machines; it achieves reliability by replicating the data across multiple hosts and hence does not require RAID storage on the hosts.


For more information, please visit:
http://hadoop.apache.org
http://studentbank.in/report-distributed...-framework
http://en.wikipedia.org/wiki/Hadoop
#3
Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. With it, applications can work on thousands of nodes and petabytes of data. Google's MapReduce and Google File System (GFS) papers inspired the development of Hadoop. It is a top-level Apache project with contributors from all over the world, and it is developed using the Java programming language.
Hadoop History
- Dec 2004 – Google paper published
- July 2005 – Nutch uses new MapReduce implementation
- Jan 2006 – Doug Cutting joins Yahoo!
- Feb 2006 – Hadoop becomes a new Lucene subproject
- Apr 2007 – Yahoo! running Hadoop on 1000-node cluster
- Jan 2008 – An Apache Top Level Project
- Feb 2008 – Yahoo! production search index with Hadoop
- July 2008 – First reports of a 4000-node cluster

Architecture
Hadoop is designed to efficiently process large volumes of information by connecting many commodity computers together to work in parallel. It groups together many computers producing a single cost-effective compute cluster.
The supported filesystems include (a sketch of how a filesystem is selected by its URI scheme follows this list):
- HDFS: Hadoop's own rack-aware filesystem.
- CloudStore, which is also rack-aware.
- The Amazon S3 filesystem.
- The FTP filesystem, where the data is stored on FTP servers.
- HTTP and HTTPS file systems, which are read-only.
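Hadoop picks the concrete filesystem implementation from the scheme of the URI it is given (hdfs://, file://, s3://, ftp://, and so on), so the same client code works against any of the filesystems listed above. Below is a minimal sketch using the org.apache.hadoop.fs.FileSystem API; the namenode host, port, and file path are made-up placeholders, not values taken from this report.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsCat {
    public static void main(String[] args) throws Exception {
        // The URI scheme selects the filesystem implementation: hdfs://, file://, s3://, ftp://, ...
        String uri = args.length > 0 ? args[0] : "hdfs://namenode:9000/user/demo/sample.txt";

        Configuration conf = new Configuration();                 // picks up core-site.xml from the classpath
        FileSystem fs = FileSystem.get(URI.create(uri), conf);    // returns the filesystem matching the scheme

        // Read the file through the generic FileSystem interface and print it line by line.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(new Path(uri))))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

Running the same class with a file:// or s3:// URI exercises a different filesystem without any code change, which is the point of the pluggable filesystem layer.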

Hadoop Distributed File System

HDFS stores large files across multiple machines and replicates the data across multiple hosts. By default, data is stored on three nodes: two on the same rack and one on a different rack.
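As a rough sketch of how that replication factor is controlled (an illustration, not something taken from the report itself): the cluster-wide default comes from the dfs.replication property, and individual files can be adjusted through the FileSystem API. The file path below is a made-up placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.replication", "3");                  // default number of replicas for newly created files

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/user/demo/big-file.dat");   // placeholder path

        fs.setReplication(file, (short) 4);                // ask HDFS to keep an extra copy of this file

        FileStatus status = fs.getFileStatus(file);
        System.out.println(file + " is stored with replication factor " + status.getReplication());
    }
}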

Job Tracker and Task Tracker: the MapReduce engine
The MapReduce engine sits above the file systems. It consists of:
- Job Tracker, to which client applications submit MapReduce jobs.
- Task Tracker: the Job Tracker distributes work to available Task Tracker nodes in the cluster, striving to keep the work close to the data; the Job Tracker knows which node contains the data and which other machines are nearby. (A minimal example job is sketched below.)
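To make the Job Tracker / Task Tracker model concrete, here is the classic word-count job written against the org.apache.hadoop.mapreduce API. It is a sketch of the programming model, not code taken from this report: the client submits the job, map tasks are scheduled on Task Trackers close to the input blocks, and reduce tasks sum the per-word counts.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: for every word in a line of input, emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum all the 1s emitted for the same word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");        // Job.getInstance(conf, "word count") on newer releases
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);    // local pre-aggregation on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not exist yet)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The job would typically be packaged into a jar and submitted with something like "hadoop jar wordcount.jar WordCount <input dir> <output dir>"; the jar name and directories here are placeholders.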

For further details, refer to these links:
http://en.wikipedia.org/wiki/Hadoop

PDF versions are available at these links:
http://www.scribd.com/doc/12021062/Hadoop
http://www.scribd.com/doc/7136281/Hadoop-Primer
http://www.scribd.com/doc/21962428/Hadoop-Infrastucture
http://www.scribd.com/doc/23227066/Apache-Hadoop
#4
[attachment=5460]
HADOOP

INTRODUCTION

Computing, in its purest form, has changed hands multiple times. Near the beginning, mainframes were predicted to be the future of computing; indeed, mainframes and large-scale machines were built and used, and in some circumstances are used similarly today. The trend, however, turned from bigger and more expensive to smaller and more affordable commodity PCs and servers.
Most of our data is stored on local networks with servers that may be clustered and sharing storage. This approach has had time to develop into a stable architecture and provides decent redundancy when deployed correctly. A newer, emerging technology, cloud computing, has shown up demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust Amazon S3 cloud storage model, it is clear that cloud computing has arrived with much to be gleaned from it.


Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.
Need for large data processing

We live in the data age. It's not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the "digital universe" at 0.18 zettabytes in 2006 and forecast a tenfold growth to 1.8 zettabytes by 2011.
Some of the areas where large-scale data processing is needed include:

• The New York Stock Exchange generates about one terabyte of new trade data per day.

• Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.

• Ancestry.com, the genealogy site, stores around 2.5 petabytes of data.

• The Internet Archive stores around 2 petabytes of data, and is growing at a rate of 20 terabytes per month.

• The Large Hadron Collider near Geneva, Switzerland, will produce about 15 petabytes of data per year.


The problem is that while the storage capacities of hard drives have increased massively over the years, access speeds—the rate at which data can be read from drives—have not kept up. One typical drive from 1990 could store 1370 MB of data and had a transfer speed of 4.4 MB/s, so we could read all the data from a full drive in around five minutes. Almost 20 years later, one-terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk. This is a long time to read all the data on a single drive—and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. Imagine if we had 100 drives, each holding one hundredth of the data: working in parallel, we could read the data in under two minutes. This shows the significance of distributed computing.
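The back-of-the-envelope numbers above are easy to verify; the small sketch below just redoes the arithmetic for the 1990 drive, a modern 1 TB drive, and 100 drives reading in parallel.

public class ReadTimeEstimate {
    public static void main(String[] args) {
        // 1990 drive: 1370 MB read at 4.4 MB/s
        double oldDriveMinutes = 1370.0 / 4.4 / 60.0;                   // about 5 minutes

        // Modern drive: 1 TB (about 1,000,000 MB) read at 100 MB/s
        double newDriveHours = 1_000_000.0 / 100.0 / 3600.0;            // about 2.8 hours

        // 100 drives, each holding 1/100 of the terabyte, read in parallel at 100 MB/s each
        double parallelMinutes = (1_000_000.0 / 100.0) / 100.0 / 60.0;  // under 2 minutes

        System.out.printf("1990 drive: %.1f minutes%n", oldDriveMinutes);
        System.out.printf("1 TB drive: %.1f hours%n", newDriveHours);
        System.out.printf("100 drives in parallel: %.1f minutes%n", parallelMinutes);
    }
}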

Challenges in distributed computing --- meeting Hadoop

Various challenges are faced while developing a distributed application. The first problem to solve is hardware failure: as soon as we start using many pieces of hardware, the chance that one will fail is fairly high. A common way of avoiding data loss is through replication: redundant copies of the data are kept by the system so that in the event of failure, there is another copy available. This is how RAID works, for instance, although Hadoop’s filesystem, the Hadoop Distributed Filesystem(HDFS), takes a slightly different approach.
The second problem is that most analysis tasks need to be able to combine the data in some way; data read from one disk may need to be combined with the data from any of the other 99 disks. Various distributed systems allow data to be combined from multiple sources, but doing this correctly is notoriously challenging. MapReduce provides a programming model that abstracts the problem from disk reads and writes, transforming it into a computation over sets of keys and values.
This, in a nutshell, is what Hadoop provides: a reliable shared storage and analysis system. The storage is provided by HDFS, and analysis by MapReduce. There are other parts to Hadoop, but these capabilities are its kernel.
Hadoop is the popular open-source implementation of MapReduce, a powerful tool designed for the deep analysis and transformation of very large data sets. Hadoop enables you to explore complex data, using custom analyses tailored to your information and questions. Hadoop is the system that allows unstructured data to be distributed across hundreds or thousands of machines forming shared-nothing clusters, and the execution of Map/Reduce routines to run on the data in that cluster. Hadoop has its own filesystem, which replicates data to multiple nodes to ensure that if one node holding data goes down, there are at least two other nodes from which to retrieve that piece of information. This protects data availability from node failure, something which is critical when there are many nodes in a cluster (akin to RAID at the server level).
For more information about this article, please follow the link:
http://www.google.com/url?sa=t&source=web&cd=1&ve...hadoop.doc&ei=geuvTMvuIIqsvgPVloXrBg&usg=AFQjCNFRI_QUy3pVGVNMr9Y6GUanR8z6bQ
#5
SUBMITTED BY:
MEGHSHREE GUPTA
SHIVANGI VARSHNEY

[attachment=13362]
Chapter 1
Introduction

Computing, in its purest form, has changed hands multiple times. Near the beginning, mainframes were predicted to be the future of computing; indeed, mainframes and large-scale machines were built and used, and in some circumstances are used similarly today. The trend, however, turned from bigger and more expensive to smaller and more affordable commodity PCs and servers.
Most of our data is stored on local networks with servers that may be clustered and sharing storage. This approach has had time to develop into a stable architecture and provides decent redundancy when deployed correctly. A newer, emerging technology, cloud computing, has shown up demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust Amazon S3 cloud storage model, it is clear that cloud computing has arrived with much to be gleaned from it.
Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.
Chapter 2
Background study

Need for large data processing.
We live in the data age. It's not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the "digital universe" at 0.18 zettabytes in 2006 and forecast a tenfold growth to 1.8 zettabytes by 2011.
Some of the areas where large-scale data processing is needed include:
i. The New York Stock Exchange generates about one terabyte of new trade data per day.
ii. Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.
iii. Ancestry.com, the genealogy site, stores around 2.5 petabytes of data.
iv. The Internet Archive stores around 2 petabytes of data, and is growing at a rate of 20 terabytes per month.
v. The Large Hadron Collider near Geneva, Switzerland, will produce about 15 petabytes of data per year.
The problem is that while the storage capacities of hard drives have increased massively over the years, access speeds—the rate at which data can be read from drives—have not kept up. One typical drive from 1990 could store 1370 MB of data and had a transfer speed of 4.4 MB/s, so we could read all the data from a full drive in around five minutes. Almost 20 years later, one-terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk. This is a long time to read all the data on a single drive—and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. Imagine if we had 100 drives, each holding one hundredth of the data: working in parallel, we could read the data in under two minutes. This shows the significance of distributed computing.
Challenges in distributed computing --- meeting Hadoop
Various challenges are faced while developing a distributed application. The first problem to solve is hardware failure: as soon as we start using many pieces of hardware, the chance that one will fail is fairly high. A common way of avoiding data loss is through replication: redundant copies of the data are kept by the system so that in the event of failure, there is another copy available. This is how RAID works, for instance, although Hadoop’s filesystem, the Hadoop Distributed Filesystem(HDFS), takes a slightly different approach.
The second problem is that most analysis tasks need to be able to combine the data in some way; data read from one disk may need to be combined with the data from any of the other 99 disks. Various distributed systems allow data to be combined from multiple sources, but doing this correctly is notoriously challenging. MapReduce provides a programming model that abstracts the problem from disk reads and writes, transforming it into a computation over sets of keys and values.
This, in a nutshell, is what Hadoop provides: a reliable shared storage and analysis system. The storage is provided by HDFS, and analysis by MapReduce. There are other parts to Hadoop, but these capabilities are its kernel.
Hadoop is the popular open-source implementation of MapReduce, a powerful tool designed for the deep analysis and transformation of very large data sets. Hadoop enables you to explore complex data, using custom analyses tailored to your information and questions. Hadoop is the system that allows unstructured data to be distributed across hundreds or thousands of machines forming shared-nothing clusters, and the execution of Map/Reduce routines to run on the data in that cluster. Hadoop has its own filesystem, which replicates data to multiple nodes to ensure that if one node holding data goes down, there are at least two other nodes from which to retrieve that piece of information. This protects data availability from node failure, something which is critical when there are many nodes in a cluster (akin to RAID at the server level).
COMPARISON WITH OTHER SYSTEMS
Comparison with RDBMS
Unless you are dealing with very large volumes of unstructured data (hundreds of GBs, TBs, or PBs) and have large numbers of machines available, you will likely find the performance of Hadoop running a Map/Reduce query much slower than that of a comparable SQL query on a relational database. Hadoop uses a brute-force access method, whereas RDBMSs have optimization methods for accessing data, such as indexes and read-ahead. The benefits really only come into play when massive parallelism is achieved, or when the data is unstructured to the point where no RDBMS optimizations can be applied to help the performance of queries.
But as with all benchmarks, everything has to be taken into consideration. For example, if the data starts life in a text file in the file system (e.g. a log file), the cost associated with extracting that data from the text file, structuring it into a standard schema, and loading it into the RDBMS has to be considered. And if you have to do that for 1,000 or 10,000 log files, that may take minutes, hours, or days (with Hadoop you still have to copy the files to its file system). It may also be practically impossible to load such data into an RDBMS in some environments, as data could be generated in such volume that a load process into an RDBMS cannot keep up. So while your query time with Hadoop may be slower (speed improves with more nodes in the cluster), your access time to the data may potentially be improved.
Also, as there aren't any mainstream RDBMSs that scale to thousands of nodes, at some point the sheer mass of brute-force processing power will outperform the optimized, but restricted in scale, relational access method. In our current RDBMS-dependent web stacks, scalability problems tend to hit hardest at the database level. For applications with just a handful of common use cases that access a lot of the same data, distributed in-memory caches such as memcached provide some relief. However, for interactive applications that hope to reliably scale and support vast amounts of IO, the traditional RDBMS setup isn't going to cut it. Unlike small applications that can fit their most active data into memory, applications that sit on top of massive stores of shared content require a distributed solution if they hope to survive the long-tail usage pattern commonly found on content-rich sites.
We can't use databases with lots of disks to do large-scale batch analysis, because seek time is improving more slowly than transfer rate. Seeking is the process of moving the disk's head to a particular place on the disk to read or write data. It characterizes the latency of a disk operation, whereas the transfer rate corresponds to a disk's bandwidth. If the data access pattern is dominated by seeks, it will take longer to read or write large portions of the dataset than streaming through it, which operates at the transfer rate. On the other hand, for updating a small proportion of records in a database, a traditional B-Tree (the data structure used in relational databases, which is limited by the rate at which it can perform seeks) works well. For updating the majority of a database, a B-Tree is less efficient than MapReduce, which uses Sort/Merge to rebuild the database.
Another difference between MapReduce and an RDBMS is the amount of structure in the datasets they operate on. Structured data is data that is organized into entities that have a defined format, such as XML documents or database tables that conform to a particular predefined schema. This is the realm of the RDBMS. Semi-structured data, on the other hand, is looser, and though there may be a schema, it is often ignored, so it may be used only as a guide to the structure of the data: for example, a spreadsheet, in which the structure is the grid of cells, although the cells themselves may hold any form of data. Unstructured data does not have any particular internal structure: for example, plain text or image data. MapReduce works well on unstructured or semi-structured data, since it is designed to interpret the data at processing time. In other words, the input keys and values for MapReduce are not an intrinsic property of the data; they are chosen by the person analyzing the data (a hypothetical example is sketched below). Relational data is often normalized to retain its integrity and remove redundancy. Normalization poses problems for MapReduce, since it makes reading a record a nonlocal operation, and one of the central assumptions that MapReduce makes is that it is possible to perform (high-speed) streaming reads and writes.
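To illustrate that the keys and values are the analyst's choice rather than a property of the data, the hypothetical mapper below reads plain-text web server log lines and simply decides to use the client IP address (assumed here to be the first whitespace-separated token) as the key; a different analysis over the same unstructured lines could choose an entirely different key. The class name and log format are assumptions for illustration only.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: counts requests per client IP from plain-text log lines.
// Choosing "client IP" as the key is the analyst's decision, not something intrinsic to the file.
public class RequestsPerIpMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text ip = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\\s+");
        if (fields.length > 0 && !fields[0].isEmpty()) {
            ip.set(fields[0]);          // assume the first token is the client IP
            context.write(ip, ONE);     // emit (ip, 1); a summing reducer turns these into counts per IP
        }
    }
}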
#6
(12-03-2010, 03:53 PM)pri_niture Wrote: HEY I NEED COMPLETE REPORT ON THE TOPIC "HADOOP".

#7


To get more information about the topic "HADOOP" (full report, PPT, and related topics), refer to the page links below:

http://studentbank.in/report-hadoop--24728

http://studentbank.in/report-hadoop--11163

http://studentbank.in/report-hadoop
