Parallel Computing In India

Presented by
Justin Reschke

Parallel Computing
Concepts and Terminology:
What is Parallel Computing?

Traditionally, software has been written for serial computation.
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem.
Concepts and Terminology:
Why Use Parallel Computing?

Saves time – wall clock time
Cost savings
Overcoming memory constraints
It’s the future of computing
Concepts and Terminology:
Flynn’s Classical Taxonomy

Distinguishes multi-processor architecture by instruction and data
SISD – Single Instruction, Single Data
SIMD – Single Instruction, Multiple Data
MISD – Multiple Instruction, Single Data
MIMD – Multiple Instruction, Multiple Data
Flynn’s Classical Taxonomy:
SISD
Serial
Only one instruction stream and one data stream are acted on during any one clock cycle
Flynn’s Classical Taxonomy:
SIMD
All processing units execute the same instruction at any given clock cycle.
Each processing unit operates on a different data element.
Flynn’s Classical Taxonomy:
MISD
Different instructions operate on a single data element.
Very few practical uses for this type of classification.
Example: Multiple cryptography algorithms attempting to crack a single coded message.
Flynn’s Classical Taxonomy:
MIMD
Can execute different instructions on different data elements.
Most common type of parallel computer.
Concepts and Terminology:
General Terminology

Task – A logically discrete section of computational work
Parallel Task – Task that can be executed by multiple processors safely
Communications – Data exchange between parallel tasks
Synchronization – The coordination of parallel tasks in real time
Concepts and Terminology:
More Terminology
Granularity – The ratio of computation to communication
 Coarse – High computation, low communication
 Fine – Low computation, high communication
Parallel Overhead
 Synchronizations
 Data Communications
 Overhead imposed by compilers, libraries, tools, operating systems, etc.
Parallel Computer Memory Architectures:
Shared Memory Architecture

All processors access all memory as a single global address space.
Data sharing is fast.
Scalability is limited: adding CPUs increases traffic on the shared memory-CPU path
Parallel Computer Memory Architectures:
Distributed Memory

Each processor has its own memory.
Scalable, with no cache-coherency overhead.
Programmer is responsible for many details of communication between processors.
Parallel Programming Models
Exist as an abstraction above hardware and memory architectures
Examples:
 Shared Memory
 Threads
 Message Passing
 Data Parallel
Parallel Programming Models:
Shared Memory Model

Appears to the user as a single shared address space, regardless of the underlying hardware implementation.
Locks and semaphores may be used to control shared memory access.
Program development can be simplified since there is no need to explicitly specify communication between tasks.
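As a concrete sketch of the shared memory model (using Python threads purely for illustration), a lock can guard a shared counter so that concurrent updates are not lost:

```python
import threading

# Shared state: every thread sees the same counter (one address space).
counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock serializes access to the shared variable,
        # preventing lost updates from interleaved read-modify-writes.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; without it, updates can be lost
```

Note that no explicit communication is written anywhere: the tasks cooperate simply by reading and writing the same memory, which is what simplifies program development in this model.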
Parallel Programming Models:
Threads Model

A single process may have multiple, concurrent execution paths.
Typically used with a shared memory architecture.
Programmer is responsible for determining all parallelism.
Parallel Programming Models:
Message Passing Model

Tasks exchange data by sending and receiving messages.
Typically used with distributed memory architectures.
Data transfer requires cooperative operations to be performed by each process, e.g. every send operation must have a matching receive operation.
MPI (Message Passing Interface) is the de facto standard for message passing.
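The cooperative send/receive pairing can be sketched with standard-library threads and a queue standing in for the communication channel (a real distributed program would instead use MPI processes, e.g. `comm.send`/`comm.recv` in mpi4py):

```python
import threading
import queue

# The queue plays the role of a one-directional message channel.
channel = queue.Queue()

def sender(data):
    for item in data:
        channel.put(item)    # the "send" operation...
    channel.put(None)        # sentinel marking the end of the stream

received = []

def receiver():
    while True:
        item = channel.get()  # ...must be matched by a "receive"
        if item is None:
            break
        received.append(item)

t1 = threading.Thread(target=sender, args=([1, 2, 3],))
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [1, 2, 3]
```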
Parallel Programming Models:
Data Parallel Model

Tasks perform the same operation on a data set, each working on a separate piece of it.
Works well with either shared memory or distributed memory architectures.
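A minimal sketch of the data parallel pattern using the standard library (a thread pool is shown for brevity; CPU-bound work would typically use separate processes or vector hardware):

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(10))

def square(x):
    # The same operation, applied to every element of the set
    return x * x

# Each worker applies the identical function to a different piece of the data.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```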
Designing Parallel Programs:
Automatic Parallelization

Automatic
 Compiler analyzes code and identifies opportunities for parallelism
 Analysis includes determining whether the parallelism would actually improve performance.
 Loops are the most frequent target for automatic parallelization.
Designing Parallel Programs:
Manual Parallelization

Understand the problem
 A Parallelizable Problem:
Calculate the potential energy for each of several thousand independent conformations of a molecule. When done, find the minimum-energy conformation.
 A Non-Parallelizable Problem:
The Fibonacci Series
 Each term depends on the two before it, so the calculations are inherently sequential
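The contrast can be sketched in a few lines (the `energy` scoring function is hypothetical, standing in for a real potential-energy calculation):

```python
# Parallelizable: each conformation is scored independently, so any worker
# could evaluate any conformation; only the final min() needs coordination.
def energy(conformation):               # hypothetical scoring function
    return sum(x * x for x in conformation)

conformations = [(1, 2), (0, 1), (3, 0)]
energies = [energy(c) for c in conformations]   # an independent map
best = min(energies)                            # a final reduction

# Non-parallelizable: each Fibonacci term depends on the two before it,
# so the loop iterations cannot run concurrently.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(best, fib(10))  # 1 55
```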
Designing Parallel Programs:
Domain Decomposition

Each task handles a portion of the data set.
Designing Parallel Programs:
Functional Decomposition
Each task performs one function of the overall work
Parallel Algorithm Examples:
Array Processing

Serial Solution
 Perform a function on a 2D array.
 Single processor iterates through each element in the array
Possible Parallel Solution
 Assign each processor a partition of the array.
 Each process iterates through its own partition.
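A sketch of this domain decomposition, assuming a simple elementwise operation (doubling) and a thread pool standing in for multiple processors:

```python
from concurrent.futures import ThreadPoolExecutor

# A small 2D array (list of rows); in the serial solution one processor
# would iterate over every element.
array = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]

def process_partition(rows):
    # Each worker iterates only over its own partition of rows.
    return [[x * 2 for x in row] for row in rows]

num_workers = 2
chunk = len(array) // num_workers
partitions = [array[i * chunk:(i + 1) * chunk] for i in range(num_workers)]

with ThreadPoolExecutor(max_workers=num_workers) as pool:
    pieces = pool.map(process_partition, partitions)

# Reassemble the processed partitions in order.
result = [row for piece in pieces for row in piece]
print(result)
```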
Parallel Algorithm Examples:
Odd-Even Transposition Sort

The basic idea is bubble sort, but the compare-exchanges run concurrently: first on odd-indexed pairs, then on even-indexed pairs.
With n elements and n/2 processors, the algorithm sorts in O(n) time.
Worst case scenario (reverse-sorted input):

Initial array:  6, 5, 4, 3, 2, 1, 0
Phase 1 (odd):  6, 4, 5, 2, 3, 0, 1
Phase 2 (even): 4, 6, 2, 5, 0, 3, 1
Phase 1 (odd):  4, 2, 6, 0, 5, 1, 3
Phase 2 (even): 2, 4, 0, 6, 1, 5, 3
Phase 1 (odd):  2, 0, 4, 1, 6, 3, 5
Phase 2 (even): 0, 2, 1, 4, 3, 6, 5
Phase 1 (odd):  0, 1, 2, 3, 4, 5, 6
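A serial simulation of the algorithm makes the phase structure explicit; because the compare-exchanges within one phase touch disjoint pairs, a parallel version could run each inner loop iteration on its own processor:

```python
def odd_even_transposition_sort(a):
    """Serial simulation of odd-even transposition sort.

    Each phase's compare-exchanges are independent, so with n/2
    processors every phase takes O(1) parallel time, and n phases
    give O(n) parallel time overall.
    """
    a = list(a)
    n = len(a)
    for phase in range(n):
        # Odd phases compare pairs starting at index 1,
        # even phases compare pairs starting at index 0.
        start = 1 if phase % 2 == 0 else 0
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([6, 5, 4, 3, 2, 1, 0]))
# [0, 1, 2, 3, 4, 5, 6]
```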
Other Parallelizable Problems
The n-body problem
Floyd’s Algorithm
 Serial: O(n^3), Parallel: O(n log p)
Game Trees
Divide and Conquer Algorithms
Conclusion
Parallel computing can greatly reduce the wall-clock time of suitable problems.
There are many different approaches and models of parallel computing.
Parallel computing is the future of computing.

Messages In This Thread
RE: Parallel Computing In India - by projectsofme - 26-11-2010, 04:59 PM
RE: Parallel Computing - by seminar class - 16-03-2011, 11:58 AM
RE: Parallel Computing In India - by tjoneluv - 29-01-2012, 09:29 PM
