\documentclass[11pt, a4paper]{article}
\usepackage{fullpage}
\usepackage{amsfonts}
\usepackage[latin1]{inputenc}
\usepackage[english]{babel}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{amsmath,amssymb}
\usepackage{fancyhdr}           %Header setup begins
\pagestyle{fancy}
\fancyhf{}
\setlength{\headheight}{10pt}
\setlength{\headsep}{25pt}
\fancyhead[L]{Assignment 3}  %Header left
\fancyhead[C]{Johan Sivertsen}			 %Header center
\fancyhead[R]{Algs 2012, ITU} 		 %Header right
\fancyfoot[R]{\thepage} 				 %Page number in footer
\renewcommand\headrulewidth{0.4pt}
\renewcommand\footrulewidth{0.4pt}

\author{Johan Sivertsen}
\title{Introduction to algorithms, Assignment 3: Search Engine}
\date{\today}
\begin{document}
\maketitle
\newpage
\section*{Introduction}

This report motivates the data organisation I used, and gives complexity bounds on (1) the time for searching, (2) the time for building the data structure, and (3) the space usage of the data structure.
\subsection{Data structure and resource consumption}

I chose a data structure consisting of a hashtable where the keys are strings (the words) and the values are hashsets of line numbers. This allows me to put, get and remove data in expected constant time. All of these operations are essential for the assignment, and choosing a data structure that supports them in expected constant time makes the search very fast. It also makes the code simple, because I can make full use of the Java Collections framework.

The following code builds the data structure:
\begin{verbatim}
1    String s = r.readLine();
2    for (String word : s.split("\\W+")) {
3        HashSet<Integer> set = table.get(word.toLowerCase());
4        if (set == null) set = new HashSet<Integer>();
5        set.add(linecount);
6        table.put(word.toLowerCase(), set);
7    }
8    linecount++;
\end{verbatim}
This is executed for every line in the file. Let $n$ be the number of lines and $m$ the average number of words per line. For every word on every line there is a hashtable lookup (line 3), a comparison (line 4), a hashset add (line 5), and a hashtable put (line 6). All of these run in expected constant time, so the time for building the data structure is $O(nm)$. The space usage is likewise $O(nm)$, since each word occurrence adds at most one integer to one hashset.
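The build loop above is a fragment: \texttt{r}, \texttt{table} and \texttt{linecount} are declared elsewhere in my program. As a self-contained sketch of the same idea (the class name \texttt{IndexBuilder} and the \texttt{build} method are illustrative, not names from my actual code), it could look like this:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.HashSet;

public class IndexBuilder {
    // Build an inverted index mapping each lowercased word to the
    // set of (zero-based) line numbers on which it occurs.
    static HashMap<String, HashSet<Integer>> build(BufferedReader r) throws IOException {
        HashMap<String, HashSet<Integer>> table = new HashMap<>();
        int linecount = 0;
        String s;
        while ((s = r.readLine()) != null) {
            for (String word : s.split("\\W+")) {
                if (word.isEmpty()) continue; // split can yield an empty leading token
                String key = word.toLowerCase();
                HashSet<Integer> set = table.get(key);   // expected O(1) lookup
                if (set == null) {
                    set = new HashSet<>();
                    table.put(key, set);                 // expected O(1) insert
                }
                set.add(linecount);                      // expected O(1) add
            }
            linecount++;
        }
        return table;
    }

    public static void main(String[] args) throws IOException {
        String text = "the quick fox\nthe lazy dog";
        HashMap<String, HashSet<Integer>> table =
                build(new BufferedReader(new StringReader(text)));
        System.out.println(table.get("the")); // "the" occurs on lines 0 and 1
        System.out.println(table.get("fox")); // "fox" occurs on line 0 only
    }
}
```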

\subsection{Search time}
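Searching for a single word reduces to one hashtable get, which runs in expected constant time, plus the time to report the matching line numbers. A minimal sketch of this lookup (the \texttt{Search} class and \texttt{search} helper are illustrative names, not part of my submitted code):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;

public class Search {
    // One expected-constant-time hashtable get; the result set is
    // returned as-is, so total cost is O(1) plus the output size.
    static Set<Integer> search(HashMap<String, HashSet<Integer>> table, String query) {
        HashSet<Integer> hits = table.get(query.toLowerCase());
        return hits == null ? Set.of() : hits; // unknown word: empty result
    }

    public static void main(String[] args) {
        HashMap<String, HashSet<Integer>> table = new HashMap<>();
        table.put("fox", new HashSet<>(Set.of(0, 3)));
        System.out.println(search(table, "Fox")); // matching line numbers
        System.out.println(search(table, "cat")); // empty set, no matches
    }
}
```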



\end{document}

