

E Lesson 1 635

Published by Teamlease Edtech Ltd (Amita Chitroda), 2020-10-24 03:43:42


IDOL: Institute of Distance and Online Learning
ENHANCE YOUR QUALIFICATION, ADVANCE YOUR CAREER.

PARALLEL AND DISTRIBUTED COMPUTING
M.C.A | Course Code: MCA635 | Semester: Third | SLM Unit: 1 | eLesson: 1
www.cuidol.in Unit-1 (MCA635)
All rights are reserved with CU-IDOL

OBJECTIVES
Students will be able to:
• Define parallel computing
• Illustrate parallel architecture
• Elaborate the performance of parallel computers
• Evaluate the various decision-making processes
• Describe parallel programming models
• Explain parallel algorithms

INTRODUCTION
In this unit we are going to learn about parallel computing. Under this unit you will also understand the performance of parallel computers and the decision-making process.

TOPICS TO BE COVERED
• Introduction to Parallel Computing
• Parallel Architecture
• Architectural Classification Scheme
• Performance of Parallel Computers
• Performance Metrics for Processors
• Parallel Programming Models
• Parallel Algorithms

PARALLEL COMPUTING
• Parallel computing is the use of multiple processing elements simultaneously to solve a problem. Problems are broken down into instructions and solved concurrently, with every resource applied to the work operating at the same time.
• Parallel computing is a type of computation in which many calculations, or the execution of many processes, are carried out simultaneously.
• Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism.
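The divide-and-combine idea above can be sketched with a small example (a hypothetical illustration using Python's standard library, not part of the original slides): a list is split into chunks, the chunks are summed concurrently, and the partial sums are combined into the final result.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Split `data` into chunks, sum each chunk concurrently,
    then combine the partial results into one answer."""
    chunk = max(1, len(data) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial_sums = list(pool.map(sum, parts))  # chunks solved at the same time
    return sum(partial_sums)                       # combine individual outputs

print(parallel_sum(list(range(1, 101))))  # 5050
```

The same split/solve/combine structure applies whatever the per-chunk operation is; summation is used here only because its combine step is trivial.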

ADVANTAGES OF PARALLEL COMPUTING
• It saves time and money, as many resources working together reduce run time and cut potential costs.
• It can be impractical to solve larger problems with serial computing.
• It can take advantage of non-local resources when the local resources are finite.
• Serial computing "wastes" potential computing power; parallel computing makes better use of the hardware.

TYPES OF PARALLELISM
• Bit-level parallelism: the form of parallel computing based on increasing the processor's word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Example: consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first sum the 8 lower-order bits and then add the 8 higher-order bits, thus requiring two instructions to perform the operation. A 16-bit processor can perform the operation with just one instruction.
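The 8-bit example can be sketched in Python (a hypothetical illustration, not part of the original slides): the two 16-bit operands are added in two 8-bit steps, with the carry out of the low half propagated into the high half.

```python
def add16_on_8bit(a, b):
    """Add two 16-bit integers using only 8-bit additions,
    as an 8-bit processor would (two add steps instead of one)."""
    lo = (a & 0xFF) + (b & 0xFF)                         # step 1: low-order 8 bits
    carry = lo >> 8                                      # carry out of the low half
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry   # step 2: high-order 8 bits
    return ((hi & 0xFF) << 8) | (lo & 0xFF)              # result modulo 2**16

print(hex(add16_on_8bit(0x12FF, 0x0001)))  # 0x1300
```

A 16-bit (or wider) processor performs the same addition in a single instruction, which is exactly the saving that bit-level parallelism provides.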

TYPES OF PARALLELISM
• Instruction-level parallelism: a processor can issue more than one instruction per clock cycle. Independent instructions can be re-ordered and grouped, and are later executed concurrently without affecting the result of the program. This is called instruction-level parallelism.
• Task parallelism: task parallelism employs the decomposition of a task into subtasks and then allocates each of the subtasks for execution. The processors execute the subtasks concurrently.
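Task parallelism can be sketched as follows (a hypothetical illustration using Python's standard library, not part of the original slides): one job is decomposed into two distinct subtasks, which run concurrently before their results are gathered.

```python
from concurrent.futures import ThreadPoolExecutor

def count_evens(data):
    """Subtask 1: count the even elements."""
    return sum(1 for x in data if x % 2 == 0)

def find_max(data):
    """Subtask 2: find the largest element."""
    return max(data)

data = [7, 2, 9, 4, 6, 1]

# Decompose the job into two subtasks and execute them concurrently.
with ThreadPoolExecutor() as pool:
    evens_future = pool.submit(count_evens, data)
    max_future = pool.submit(find_max, data)
    results = (evens_future.result(), max_future.result())

print(results)  # (3, 9)
```

Unlike data parallelism, the two workers here run *different* operations; the parallelism comes from the independence of the subtasks.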

WHY PARALLEL COMPUTING?
• The whole real world runs in a dynamic nature: many things happen at the same time but in different places, concurrently. This data is extremely large and hard to manage.
• Real-world data needs dynamic simulation and modeling, and parallel computing is the key to achieving this.
• Parallel computing provides concurrency and saves time and money.
• Complex, large datasets and their management can be organized effectively only with parallel computing's approach.

WHY PARALLEL COMPUTING?
• It ensures the effective utilization of resources. The hardware is guaranteed to be used effectively, whereas in serial computation only some part of the hardware is used and the rest is rendered idle.
• Also, it is impractical to implement real-time systems using serial computing.

APPLICATIONS OF PARALLEL COMPUTING
• Databases and data mining
• Real-time simulation of systems
• Science and engineering
• Advanced graphics, augmented reality and virtual reality

LIMITATIONS OF PARALLEL COMPUTING
• It requires communication and synchronization between the multiple sub-tasks and processes, which is difficult to achieve.
• The algorithms must be structured in such a way that they can be handled by the parallel mechanism.
• The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
• Only more technically skilled and expert programmers can code a parallelism-based program well.

HARDWARE ARCHITECTURE (PARALLEL COMPUTING)
To study parallel computing, we should know the following terms:
• Era of computing: the two fundamental and dominant models of computing are sequential and parallel. The sequential computing era began in the 1940s, and the parallel (and distributed) computing era followed it within a decade.

HARDWARE ARCHITECTURE (PARALLEL COMPUTING)
• Computing: so, what is computing? Computing is any goal-oriented activity requiring, benefiting from, or creating computers. Computing includes designing, developing and building hardware and software systems; designing a mathematical sequence of steps known as an algorithm; and processing, structuring and managing various kinds of information.

HARDWARE ARCHITECTURE (PARALLEL COMPUTING)
• Types of computing: the following are the two types of computing:
• Parallel computing
• Distributed computing

PARALLEL COMPUTING
• Processing multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes (tasks) simultaneously solving a given problem.

HARDWARE ARCHITECTURE OF PARALLEL COMPUTING
The hardware architecture of parallel computing is classified into the following categories:
1. Single-instruction, single-data (SISD) systems
2. Single-instruction, multiple-data (SIMD) systems
3. Multiple-instruction, single-data (MISD) systems
4. Multiple-instruction, multiple-data (MIMD) systems

COMPUTER ARCHITECTURE: FLYNN'S TAXONOMY
• Parallel computing is computing in which the jobs are broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions, and instructions from each part execute simultaneously on different CPUs.
• Parallel systems deal with the simultaneous use of multiple computer resources: a single computer with multiple processors, a number of computers connected by a network to form a parallel processing cluster, or a combination of both.
• Parallel systems are more difficult to program than computers with a single processor, because the architecture of parallel computers varies and the processes of multiple CPUs must be coordinated and synchronized.

SINGLE-INSTRUCTION, SINGLE-DATA (SISD) SYSTEMS
An SISD computing system is a uniprocessor machine capable of executing a single instruction operating on a single data stream. In SISD, machine instructions are processed in a sequential manner, and computers adopting this model are popularly called sequential computers.

SINGLE-INSTRUCTION, MULTIPLE-DATA (SIMD) SYSTEMS
An SIMD system is a multiprocessor machine capable of executing the same instruction on all its CPUs but operating on different data streams. Machines based on the SIMD model are well suited to scientific computing, since it involves many vector and matrix operations.

MULTIPLE-INSTRUCTION, SINGLE-DATA (MISD) SYSTEMS
An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set. Machines built using the MISD model are not useful for most applications; a few such machines have been built, but none of them are available commercially.

MULTIPLE-INSTRUCTION, MULTIPLE-DATA (MIMD) SYSTEMS
An MIMD system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has separate instruction and data streams; therefore, machines built using this model are suited to any kind of application.

ARCHITECTURAL CLASSIFICATION SCHEME
Communication architecture: parallel architecture enhances the conventional concepts of computer architecture with a communication architecture. Computer architecture defines critical abstractions (like the user-system boundary and the hardware-software boundary) and the organizational structure, whereas communication architecture defines the basic communication and synchronization operations. It also addresses the organizational structure.

ARCHITECTURAL CLASSIFICATION SCHEME
The programming model is the top layer; applications are written in a programming model. Parallel programming models include:
• Shared address space
• Message passing
• Data parallel programming

SHARED ADDRESS
Shared address programming is like using a bulletin board, where one can communicate with one or many individuals by posting information at a particular location that is shared by all the other individuals. Individual activity is coordinated by noting who is doing which task.

MESSAGE-PASSING ARCHITECTURE
• Message passing is like a telephone call or a letter, where a specific receiver receives information from a specific sender.
• It provides communication among processors as explicit I/O operations.
• In a message-passing architecture, user communication is executed using operating-system or library calls that perform many lower-level actions, including the actual communication operation. As a result, there is a distance between the programming model and the communication operations at the physical hardware level.

MESSAGE-PASSING ARCHITECTURE
• Send and receive are the most common user-level communication operations in a message-passing system.
• Send specifies a local data buffer (which is to be transmitted) and a receiving remote processor.
• Receive specifies a sending process and a local data buffer in which the transmitted data will be placed.
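The send and receive operations can be sketched in Python (a hypothetical illustration using a thread-safe queue as the communication channel, not part of the original slides): the sender names the local buffer to transmit, and the receiver names the local buffer into which the transmitted data is placed.

```python
import threading
import queue

channel = queue.Queue()  # the communication channel between the two sides

def sender():
    local_buffer = [1, 2, 3]   # Send: local data buffer to be transmitted
    channel.put(local_buffer)  # transmit to the receiving side

received = []                  # Receive: local data buffer for incoming data

def receiver():
    data = channel.get()       # block until a message arrives
    received.extend(data)      # place the transmitted data locally

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()

print(received)  # [1, 2, 3]
```

`Queue.get()` blocks until data is available, so the receive naturally synchronizes with the matching send, mirroring the pairing described above.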

FUNDAMENTAL DESIGN ISSUES
Developing the programming model alone cannot increase the efficiency of the computer, nor can developing the hardware alone do it. However, development in computer architecture can make a difference in the performance of the computer. We can understand the design problem by focusing on how programs use a machine and which basic technologies are provided.

COMMUNICATION ABSTRACTION
• The communication abstraction is the main interface between the programming model and the system implementation. Like an instruction set, it provides a platform so that the same program can run correctly on many implementations. Operations at this level must be simple.
• The communication abstraction is like a contract between the hardware and the software, which allows each the flexibility to improve without affecting the other's work.

PROGRAMMING MODEL REQUIREMENTS
• A parallel program has one or more threads operating on data. A parallel programming model defines what data the threads can name, which operations can be performed on the named data, and which order the operations follow.
• To ensure that the dependencies between the programs are enforced, a parallel program must coordinate the activity of its threads.

CLASSIFICATION OF PARALLEL ARCHITECTURE
• Pipeline computers
• Array processors
• Multiprocessors
• Systolic architecture
• Data flow architecture

CLASSIFICATION BASED ON ARCHITECTURAL SCHEME
• Flynn's classification
• Shore's classification
• Feng's classification
• Handler's classification

CLASSIFICATION BASED ON MEMORY SCHEME
• Shared
• Distributed
• Hybrid

DATA PARALLEL PROCESSING
• Data parallel programming is an organized form of cooperation. Here, several individuals perform an action on separate elements of a data set concurrently and share information globally.
• Another important class of parallel machine is variously called processor arrays, data parallel architecture, and single-instruction-multiple-data machines.
• The main feature of the programming model is that operations can be executed in parallel on each element of a large regular data structure (like an array or matrix).
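Data parallelism can be sketched as follows (a hypothetical illustration using Python's standard library, not part of the original slides): the *same* operation, squaring, is applied to every element of a regular data structure, with the elements spread across concurrent workers.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """The single operation applied to every element of the data set."""
    return x * x

data = [1, 2, 3, 4, 5, 6, 7, 8]

# Each worker applies the same operation to a different element,
# and results come back in the original element order.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(square, data))

print(result)  # [1, 4, 9, 16, 25, 36, 49, 64]
```

This is the contract SIMD-style machines exploit in hardware: one instruction stream, many data elements.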

PERFORMANCE OF PARALLEL COMPUTERS
Two key goals to be achieved in the design of parallel applications are:
• Performance: the capacity to reduce the time needed to solve a problem as the computing resources increase
• Scalability: the capacity to increase performance as the size of the problem increases

PERFORMANCE OF PARALLEL COMPUTERS
The main factors limiting the performance and scalability of an application can be divided into:
• Architectural limitations:
1. Latency and bandwidth
2. Data coherency
• Algorithmic limitations:
1. Missing parallelism
2. Communication frequency
3. Synchronization frequency
4. Poor scheduling

PERFORMANCE METRICS FOR PROCESSORS
There are two different classes of performance metrics:
• Performance metrics for processors/cores: assess the performance of a processing unit, normally by measuring its speed, or the number of operations that it performs in a certain period of time.

PERFORMANCE METRICS FOR PROCESSORS
• Performance metrics for parallel applications: assess the performance of a parallel application, normally by comparing the execution time with multiple processing units against the execution time with just one processing unit. Some of the best-known metrics are:
• Speedup
• Efficiency
• Redundancy
• Utilization
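The first two of these metrics can be sketched as simple formulas (the standard definitions, stated here as an addition since the slides do not give the equations): speedup S = T1 / Tp compares the one-processor time T1 with the p-processor time Tp, and efficiency E = S / p measures how well the p processors are used.

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp: how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, processors):
    """Efficiency E = S / p: the fraction of ideal linear speedup achieved."""
    return speedup(t_serial, t_parallel) / processors

# A run that takes 100 s on one processor and 30 s on 4 processors:
s = speedup(100.0, 30.0)        # ~3.33x faster
e = efficiency(100.0, 30.0, 4)  # ~0.83, i.e. 83% of the ideal 4x
print(round(s, 2), round(e, 2))
```

An efficiency of 1.0 would mean perfect linear speedup; real applications fall short because of the communication and synchronization limits listed earlier.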

PARALLEL PROGRAMMING MODEL
A parallel programming model is an abstraction of parallel computer architecture with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality (how well a range of different problems can be expressed for a variety of different architectures) and its performance (how efficiently the compiled programs can execute). The implementation of a parallel programming model can take the form of a library invoked from a sequential language, an extension to an existing language, or an entirely new language.

CLASSIFICATION OF PARALLEL PROGRAMMING MODELS
Classifications of parallel programming models can be divided broadly into two areas:
• Process interaction
• Problem decomposition
Process interaction: process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit (invisible to the programmer).

PROCESS INTERACTION
Shared memory: shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write asynchronously. Asynchronous concurrent access can lead to race conditions, and mechanisms such as locks, semaphores and monitors can be used to avoid them. Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.
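The race-condition point can be sketched in Python (a hypothetical illustration, not part of the original slides): several threads increment a shared counter, and a lock serializes each read-modify-write so that no update is lost.

```python
import threading

counter = 0
lock = threading.Lock()  # protects the shared counter

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:        # without the lock, two threads could read the same
            counter += 1  # value and overwrite each other's increment

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: every increment survives
```

The lock is the simplest of the mechanisms named above; semaphores generalize it to counted resources, and monitors bundle the lock together with the data it protects.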

PROCESS INTERACTION
Message passing: in a message-passing model, parallel processes exchange data by passing messages to one another. These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready. The communicating sequential processes (CSP) formalization of message passing uses synchronous communication channels to connect processes, and led to important languages such as Occam, Limbo and Go. In contrast, the actor model uses asynchronous message passing and has been employed in the design of languages such as D, Scala and SALSA.

PROCESS INTERACTION
Implicit interaction: in an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. Two examples of implicit parallelism are domain-specific languages, where the concurrency within high-level operations is prescribed, and functional programming languages, where the absence of side-effects allows non-dependent functions to be executed in parallel. However, this kind of parallelism is difficult to manage, and functional languages such as Concurrent Haskell and Concurrent ML provide features to manage parallelism explicitly.

PARALLEL ALGORITHM
An algorithm is a sequence of steps that takes input from the user and, after some computation, produces an output. A parallel algorithm is an algorithm that can execute several instructions simultaneously on different processing devices and then combine all the individual outputs to produce the final result.

WHAT IS PARALLELISM?
Parallelism is the process of processing several sets of instructions simultaneously. It reduces the total computation time. Parallelism can be implemented by using parallel computers, i.e. computers with many processors. Parallel computers require parallel algorithms, programming languages, compilers and operating systems that support multitasking.

WHAT IS AN ALGORITHM?
• An algorithm is a sequence of instructions followed to solve a problem. While designing an algorithm, we should consider the architecture of the computer on which the algorithm will be executed. As per the architecture, there are two types of computers:
• Sequential computer
• Parallel computer

WHAT IS AN ALGORITHM?
Depending on the architecture of computers, we have two types of algorithms:
• Sequential algorithm: an algorithm in which consecutive steps of instructions are executed in chronological order to solve a problem.
• Parallel algorithm: the problem is divided into sub-problems that are executed in parallel to get individual outputs. Later on, these individual outputs are combined to get the final desired output.

