Parallel Computing

many instructions are carried out simultaneously, based on the principle that large problems can often be divided into smaller ones and then solved concurrently ("in parallel")

it is now the dominant model in computer architecture, mainly in the form of multi-core processors

Why did it arise?

to understand parallel computing, we first need to understand why it arose

Serial Computing / Sequential Computing

one task (operation/instruction) at a time, executed in sequence

```mermaid
flowchart LR
d-->c-->b-->a-->x(processor)
a(task a)
b(task b)
c(task c)
d(task d)
```
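The serial flow above can be sketched in Python (a minimal sketch; the task names come from the flowchart, and the `task` function is made up for illustration):

```python
# Serial execution sketch: a single "processor" handles one task at a
# time; each task must wait for the previous one to finish.
def task(name):
    # stand-in for real work (hypothetical)
    return f"done: {name}"

log = []
for name in ["d", "c", "b", "a"]:   # the queue feeding the single processor
    log.append(task(name))          # processed strictly in sequence

print(log)  # tasks complete in the exact order they were queued
```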

  • parallel computing arose because we hit a bottleneck in frequency scaling [1]
  • higher frequency = more power draw and more heat generated
  • so instead, we have been able to shrink components and pack more computational power into the same space: a higher number of compute units instead of a higher frequency

Silly drawing I made to understand parallel computing better

(drawing: a lemonade stand analogy. Serial computing is one old, slow lemonade stand worker, avg wait time 15 min. To combat the low speed, we used to train better and better workers, avg wait time 5 min, but people can only be so skilled. So here comes parallel computing: instead of one super-trained worker, we just have more people, avg wait time 2 min. The tinier people in the drawing represent how we have been able to fit more computation in the same space.)

Features

one computer; many processors/cores that usually share storage

  1. A problem is broken down into smaller parts, each of which is processed simultaneously by multiple cores/processors

(diagram: a problem divided into instruction streams t1 … tn, each handled by a separate processor)

  2. Two cores sharing the same storage via a bus

(diagram: two processors connected through a bus to a shared storage)
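The decomposition in point 1 can be sketched in Python (a minimal sketch; `ThreadPoolExecutor` is used for brevity, though for CPU-bound work in CPython a `ProcessPoolExecutor` is needed for true parallelism because of the GIL):

```python
# Sketch: split one problem (summing a large range) into smaller
# parts, process them concurrently, then combine the results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

N = 1_000_000
CHUNKS = 4
step = N // CHUNKS
pieces = [(i * step, (i + 1) * step) for i in range(CHUNKS)]  # break the problem down

with ThreadPoolExecutor(max_workers=CHUNKS) as pool:
    results = pool.map(partial_sum, pieces)  # each chunk goes to its own worker

total = sum(results)  # combine the partial results
print(total == sum(range(N)))  # same answer as the serial computation
```

The split/combine shape is the same whatever the worker pool is; only the executor class changes.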

Elements of Parallel Computing

  • Computational Problem
    three types: numerical computing, logical reasoning, and transaction processing; complex problems may need all three
  • Computer Architecture
    has evolved enormously, from the Von Neumann architecture to multi-core and multi-computer systems
  • Performance
    depends on machine capability and program behaviour
  • Application Software
    a type of computer program that performs specific functions; end-user software
  • OS
    the interface between a computer user and the computer hardware; handles basic functions such as file management and memory management
  • Hardware Architecture
    Flynn's taxonomy: Single Instruction Single Data (SISD), Multiple Instruction Single Data (MISD), Single Instruction Multiple Data (SIMD), Multiple Instruction Multiple Data (MIMD)
  • Mapping
    specifies where to execute each task
    (diagram: problem → algorithms and data structures → mapping, informed by the architecture, operating system, and application software, followed by performance evaluation)

    also see: Distributed Computing, Parallel vs Distributed Computing, Moore's Law
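As a loose software analogy for two of Flynn's categories (this is not real hardware SIMD; the functions and data here are made up for illustration):

```python
# SIMD: one instruction applied to many data elements.
# MIMD: multiple independent instruction streams, each on its own data.
import threading

data = [1, 2, 3, 4]

# SIMD-style: the *same* operation (square) applied over every element
squared = [x * x for x in data]

# MIMD-style: two *different* instruction streams running in separate threads
results = {}

def stream_a():
    results["a"] = sum(data)        # one instruction stream

def stream_b():
    results["b"] = max(data) * 10   # a different instruction stream

threads = [threading.Thread(target=stream_a), threading.Thread(target=stream_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(squared, results["a"], results["b"])  # [1, 4, 9, 16] 10 40
```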


    1. Frequency scaling (or frequency ramping) was the dominant force behind processor performance increases from the mid-1980s until roughly the end of 2004. Frequency scaling = increasing the clock frequency of the processor, thereby reducing runtime. ↩︎