\section*{Tokenization and Feature Extraction}
A sentence whose sentiment is to be determined should be tokenized in a way that preserves markup in the text, since such markup carries useful cues about the meaning of the words. For example, text written in uppercase can be interpreted as shouting, and text between HTML tags such as 'em' or 'strong' can be given more weight than the surrounding text. Other examples include emoticons and profanities.
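As an illustration of why markup-aware tokenization matters, the following is a minimal sketch (not the actual Potts tokenizer) of a regex tokenizer whose patterns are tried in order, so that emoticons and HTML tags survive as single tokens instead of being shredded into punctuation. The emoticon pattern is a simplified assumption, far smaller than the one the Potts tokenizer actually uses.

```python
import re

# Simplified emoticon pattern (assumption; the real Potts pattern is much larger):
# optional angle bracket, eyes, optional nose, mouth.
EMOTICON = r"[<>]?[:;=8][\-o\*']?[\)\]\(\[dDpP/\:\}\{@\|\\]"

# Alternatives are tried left to right: emoticon, HTML tag, word
# (optionally with an apostrophe), then single punctuation marks.
TOKEN_RE = re.compile(EMOTICON + r"|<[^>]+>|[a-zA-Z]+(?:'[a-zA-Z]+)?|[.,!?;:]")

def tokenize(text):
    """Return tokens, keeping emoticons and HTML tags intact."""
    return TOKEN_RE.findall(text)

print(tokenize("I <em>love</em> this :-) !"))
# ['I', '<em>', 'love', '</em>', 'this', ':-)', '!']
```

Because the emoticon alternative is tried before the punctuation alternative, ':-)' is emitted as one token rather than three.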
In this project, the Potts tokenizer is used; it performs especially well on emoticons. The tokenizer also handles the extraction of features. The underlying idea is that a word can negate the words that follow it. Consider, for example, the sentence "I really dont like them at all.". The tokenizer converts the sentence into tokens, one per word, but the word 'like' read in isolation would usually carry a positive sentiment. This problem is solved by appending the suffix '_NEG' to every word following a negation word. In the example above, this yields the tokens: "I" "really" "dont" "like_NEG" "them_NEG" "at_NEG" "all_NEG" ".". The tokenizer ends the negation scope when punctuation occurs, i.e. when the sentence is stopped. The tokens are not stemmed, because stemming might remove positive/negative distinctions between tokens.
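The negation-marking step described above can be sketched as follows. This is a re-implementation of the '_NEG' suffix scheme for illustration only, assuming a small hand-picked list of negation cues; the actual Potts tokenizer uses a considerably larger set of cues and punctuation rules.

```python
import re

# Assumed minimal lists of negation cues and clause-ending punctuation;
# the real tokenizer's lists are more extensive.
NEGATION = re.compile(r"^(?:not|no|never|none|dont|don't|cannot|cant|can't)$",
                      re.IGNORECASE)
PUNCTUATION = re.compile(r"^[.,:;!?]$")

def mark_negation(tokens):
    """Append '_NEG' to every token between a negation cue and the
    next clause-ending punctuation mark."""
    out = []
    negating = False
    for tok in tokens:
        if PUNCTUATION.match(tok):
            negating = False              # punctuation ends the negation scope
            out.append(tok)
        elif negating:
            out.append(tok + "_NEG")      # inside a negation scope
        else:
            out.append(tok)
            if NEGATION.match(tok):
                negating = True           # start suffixing following tokens
    return out

print(mark_negation(["I", "really", "dont", "like", "them", "at", "all", "."]))
# ['I', 'really', 'dont', 'like_NEG', 'them_NEG', 'at_NEG', 'all_NEG', '.']
```

Note that the negation word itself keeps its surface form; only the tokens that follow it, up to the next punctuation mark, receive the suffix, so 'like_NEG' can be learned as a feature distinct from the positively weighted 'like'.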