\documentclass[11pt,a4paper,ngerman]{article}

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[ngerman]{babel}
\usepackage{caption}
\usepackage[automark]{scrpage2}
\usepackage{geometry}
\usepackage{blindtext}
\usepackage{setspace}
\usepackage{subfig}
\usepackage{float}
\usepackage{wrapfig}
\usepackage{framed}
\usepackage{color}
\usepackage[table]{xcolor}
\usepackage{tabularx}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{dirtree}
\usepackage[yyyymmdd,hhmmss]{datetime}
\usepackage{svn-multi}
\usepackage{hyperref}
\usepackage[printonlyused,nohyperlinks]{acronym}
\usepackage{pdfpages}

\begin{document}

\section*{PCA -- exercise 04 -- Matr.-Vect.-Mult. with pthreads \\ {\small group: pra05 -- Matthias Heisler, Steffen Lammel}}

\subsection*{Result}
As the plots in the following section show, the best speedup is achieved with 4--8 threads. This was expected, as the
creek nodes have CPUs with 4 physical cores and 4 additional logical (``hyper-threading'') cores. With more threads the
speedup drops again, because the overhead of thread creation and management exceeds the gains from additional parallelism.
\\
The parallelized matrix-vector multiplication works best with medium-sized matrices ($500 \times 500$ and $1000 \times 1000$).
At $10000 \times 10000$ the performance levels off: the speedup settles at $\approx 1.4\times$, which indicates that the
bottleneck lies elsewhere. We suspect DRAM bandwidth: in the smaller cases, the working set may still have fit into the CPU caches.

\subsection*{Visualisation}
\begin{tabular}{c c}
	
	\includegraphics[page=1,scale=.6]{mvmult2/result.pdf} & \includegraphics[page=2,scale=.6]{mvmult2/result.pdf} \\
	\includegraphics[page=3,scale=.6]{mvmult2/result.pdf} & \includegraphics[page=4,scale=.6]{mvmult2/result.pdf} \\

\end{tabular}

\end{document}

