# Collection Quiz 1

## PRAM (parallel random access machine)

- lockstep (synchronized clock cycles between processors)
- memory access:
  - UMA: uniform memory access (one monolithic shared memory; what PRAM assumes)
  - NUMA: non-uniform memory access (how real machines behave; not part of the PRAM model)
- single-cycle instruction execution and memory access (unrealistic) (unit-cost model)

### PRAM memory models

EREW, CREW, CRCW

E ... exclusive, C ... concurrent

### PRAM conflict resolution strategies (collision)

- common: conflicting processors must write the same value
- arbitrary: exactly one of the conflicting writes succeeds; which one is unspecified (nondeterministic, not necessarily random)
- priority: using a priority scheme
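The three strategies can be sketched as a toy simulation of one memory cell (the function and the "lowest id wins" priority scheme are illustrative assumptions, not from the notes):

```python
import random

def crcw_write(writes, strategy):
    """Resolve concurrent writes [(processor_id, value), ...] to one cell."""
    if strategy == "common":
        values = {v for _, v in writes}
        assert len(values) == 1, "common CRCW: all values must be equal"
        return values.pop()
    if strategy == "arbitrary":
        return random.choice(writes)[1]  # some one write succeeds
    if strategy == "priority":
        return min(writes)[1]  # assumed scheme: lowest processor id wins
    raise ValueError(strategy)

# processors 3, 1 and 2 write concurrently to the same cell
print(crcw_write([(3, 30), (1, 10), (2, 20)], "priority"))  # 10
```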

### Flynn's taxonomy

|               | single instruction | multiple instruction |
| :------------ | :----------------- | :------------------- |
| single data   | SISD               | MISD                 |
| multiple data | SIMD               | MIMD                 |

I ... instruction, D ... data, S ... single, M ... multiple

- SISD: normal single processor computer with non-vectorized instructions
- MISD: not used in practice
- SIMD: vectorized instructions operating on multiple data
- MIMD: PRAM (each processor executing its own instruction stream, processing its stream of data)
  - SPMD (single-program multiple-data): every processor/thread runs the same program, each on its own part of the data
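A minimal SPMD sketch, assuming one thread per "processor" and an even partitioning of the input (all names are illustrative):

```python
from threading import Thread

# each "processor" (thread) runs the SAME program on ITS OWN slice of the data
def worker(rank, chunk, results):
    results[rank] = sum(chunk)  # identical code for every rank, different data

data = list(range(8))
p = 4
size = len(data) // p  # assumes p divides len(data) evenly
results = [0] * p
threads = [Thread(target=worker, args=(r, data[r * size:(r + 1) * size], results))
           for r in range(p)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results, sum(results))  # partial sums combine to sum(data) = 28
```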

### absolute speedup

$$
S_p(n) = \frac{T_{seq}(n)}{T_{par}(p, n)}
$$

### relative speedup

$$
SRel_p(n) = \frac{T_{par}(1, n)}{T_{par}(p, n)}
$$
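A worked toy example of the difference between the two definitions (all timings are assumed, not measured): the parallel algorithm run on a single processor is typically slower than the best sequential algorithm, so the relative speed-up can exceed the absolute one:

```python
t_seq = 80.0     # best known sequential algorithm
t_par_1 = 100.0  # parallel algorithm on 1 processor (carries parallel overhead)
t_par_8 = 15.0   # parallel algorithm on p = 8 processors

absolute = t_seq / t_par_8    # S_p(n)
relative = t_par_1 / t_par_8  # SRel_p(n)
print(absolute, relative)     # relative speed-up is the larger of the two here
```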

### cost

$$
C(p, n) = p T_{par}(p, n)
$$

cost-optimality: if $C(p, n) \in O(T_{seq}(n))$ for a best known seq. algorithm $Seq$ $\implies$ linear speed-up
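A numeric sketch, assuming a parallel time of the form $T_{par}(p, n) = \frac{n}{p} + \log_2 n$: the cost stays near $n$ only while $p$ is at most about $\frac{n}{\log n}$, after which the $p \log n$ term dominates:

```python
import math

# assumed model: T_par(p, n) = n/p + log2(n), so C(p, n) = p * T_par(p, n)
def t_par(p, n):
    return n / p + math.log2(n)

def cost(p, n):
    return p * t_par(p, n)

n = 1024
for p in (1, 32, 1024):
    print(p, cost(p, n))  # cost stays within a constant of n only for small p
```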

### work

$W_{par}(p, n)$ ... total number of operations performed by all $p$ processors together

work-optimality: if $W_{par}(p, n) \in O(T_{seq}(n))$ for a best known seq. algorithm $Seq$ $\implies$ strong potential for linear speed-up (not guaranteed)

### optimality summarized

![optimalities](./res/optimalities.png)

### optimal number of processors

$$
T_\infty(n) = \min_{p \in \mathbb{N}} T_{par}(p, n)
$$

### parallelism

$$
\frac{T_{par}(1, n)}{T_\infty(n)}
$$

### Amdahl's law

sequential fraction $s$ with $0 < s \leq 1$, parallelizable fraction $r = 1 - s$

for parallelized algorithm $Par$, max. speed-up over $Seq$ is $1/s$

$$
T_{par}(p, n) = s T_{seq}(n) + \frac{(1 - s) T_{seq}(n)}{p} = s T_{seq}(n) + \frac{r}{p} T_{seq}(n) = T_{seq}(n) (s + \frac{r}{p})
$$

$$
S_p(n) = \frac{T_{seq}(n)}{T_{par}(p, n)} = \frac{T_{seq}(n)}{T_{seq}(n) \left(s + \frac{r}{p}\right)} = \frac{1}{s + \frac{r}{p}}
$$

asymptotic speed-up: $S_p(n) \to \frac{1}{s}$ for $p \to \infty$
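The bound can be checked numerically (the sequential fraction $s = 0.1$ is an assumed example value):

```python
# Amdahl's law: T_par(p, n) = T_seq(n) * (s + (1 - s)/p)
def amdahl_speedup(s, p):
    return 1.0 / (s + (1.0 - s) / p)

s = 0.1  # assumed: 10% of the algorithm is inherently sequential
for p in (1, 10, 100, 10**6):
    print(p, amdahl_speedup(s, p))  # approaches 1/s = 10, never exceeds it
```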

### efficiency

$$
E_p(n) = \frac{T_{seq}(n)}{p T_{par}(p, n)} = \frac{S_p(n)}{p}
$$

(against best possible algorithm $Seq$)

- $E_p(n) \leq 1$
- if $E_p(n) = e$ for some constant $e > 0$, the speed-up is linear
- cost-optimal algorithms have constant efficiency
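A toy computation of efficiency, assuming $T_{seq} = 100$ and a parallel time of $\frac{100}{p}$ plus a fixed overhead of 5 time units (all values illustrative):

```python
# E_p(n) = S_p(n) / p, with an assumed model T_par(p) = T_seq/p + overhead
def efficiency(p, t_seq, t_par):
    return (t_seq / t_par) / p

t_seq = 100.0
for p in (1, 4, 16):
    t_par = t_seq / p + 5.0  # assumed: perfect split plus fixed overhead
    print(p, t_seq / t_par, efficiency(p, t_seq, t_par))  # efficiency drops as p grows
```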

### weak scalability

![weak scalability](./res/def9.png)

![weak scalability (alternative)](./res/def10.png)

\pagebreak

### strong scalability

![strong scalability](./res/def11.png)

![strong scalability (alternative)](./res/def12.png)

### (sequential) algorithm runtimes

![some (sequential) running times](./res/some-running-times.png)

### prefix-sum

seq. complexity: $O(n)$

par. complexity (iterative and recursive):

- $T_{\infty}(n) = 2 \log n = O(\log n)$
- $T_{par}(p, n) = O(\frac{n}{p} + \log n)$

lin. speed-up up to $\frac{n}{\log n}$ processors

work-optimal, $W_{par}(p, n)$ in $O(n)$

BUT not cost-optimal for all $p$: $C(p, n) = p \left(\frac{n}{p} + \log n\right) = n + p \log n$, which is no longer in $O(n)$ once $p$ exceeds $\frac{n}{\log n}$
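A sequential simulation of a parallel prefix-sum (Hillis–Steele variant, chosen here for brevity): each of the $\log n$ rounds would be executed by $n$ processors in lockstep on a PRAM. Note this variant does $O(n \log n)$ work, so it illustrates the $O(\log n)$ depth rather than the work-optimal scheme from the notes:

```python
# Hillis–Steele inclusive prefix sum, simulated sequentially: each round
# doubles the reach d; on a PRAM all n positions update concurrently per round.
def prefix_sum(a):
    a = list(a)
    d = 1
    while d < len(a):
        a = [a[i] + (a[i - d] if i >= d else 0) for i in range(len(a))]
        d *= 2
    return a

print(prefix_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
```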
