\documentclass[12pt]{article}
\newcommand{\MyFullName}{Billy Clifton \and Alex Padgett}
%\renewcommand{\rmdefault}{phv} % Arial
%\renewcommand{\sfdefault}{phv} % Arial
\usepackage{hyperref}
\usepackage{enumerate}
\usepackage{titletoc}
\usepackage{graphicx}
\input{pythonlisting}
\title{\textbf{Tweetography}}
\author{\MyFullName}
\date{\textbf{Faculty Advisor:} Fred Annexstein}
\begin{document}
\maketitle
\thispagestyle{empty}
\setcounter{page}{0}
\newpage
\thispagestyle{empty}
\setcounter{page}{0}
\setcounter{tocdepth}{3}
\titlecontents{section}[2.2em]
  {}
  {\contentslabel{3em}}%change the argument to obtain the
                         %desired spacing
  {\hspace*{-2.6em}}
  {\titlerule*[0.2pc]{.}\contentspage}
\tableofcontents
\newpage
\def\thesection{\Roman{section}.}
\def\thesubsection{{\hspace*{-1em}}}
\section{Project Description}

\subsection{Background}
\paragraph{}Arguably one of the biggest problems facing society today is that we have too much information.  Projects such as IBM’s Watson aim to solve this problem by creating methods by which large amounts of information can be stored and actually used efficiently. Samuel J. Palmisano, Chairman, President and CEO of IBM, has said:

\begin{quotation}\emph{
The essence of making decisions is recognizing patterns in vast amounts of data, sorting through choices and options, and responding quickly and accurately.}
\end{quotation}
\paragraph{}
Online social networking is a new and remarkably prevalent phenomenon in today’s society. Twitter, one of many robust social networks, allows its more than 75 million users to share status updates in real time.  According to Twitter, users send an average of up to 140 million tweets per day.   Such a large amount of information in the public domain has prompted many researchers to investigate what useful information can be extracted from conversational data between users.  Many companies now use sites like Twitter to examine information such as consumer opinions of their products.  By combining approaches to the “information problem” like Watson’s with social networking, the possibilities are endless.
\subsection{Problem Statement}
\paragraph{}
Due to the popularity of social networking sites such as Twitter, a large amount of data is being generated at any given point in time.  For the average user, breaking down such a large amount of information is impossible, much less in real time. We hope to address the need for an application that can present meaningful information about current events in real time.
\newpage
\subsection{Project Goals}
\paragraph{}
Our goal is to create a web application that collects tweets from Twitter and produces valuable information to clue users in to what events are happening in their local area.  We aim to provide a “proof of concept” that outlines a smaller-scale implementation of what could be accomplished with more time.

\paragraph{Subgoals}
\begin{enumerate}
\item 
Exploring data mining, an important area of research in Computer Science, in the context of the social media phenomenon.
\item
Developing a database model to encapsulate meaningful characteristics of the data.
\item
Creating a program which can process large amounts of data from Twitter and produce a subset of the data which represents local events.
\item
Exploring programming platforms to expose features which best fit our plans for implementation.
\end{enumerate}
\newpage
\section{Interface Specifications}
\begin{figure}[hb]
\vspace{-20pt}
\includegraphics[scale=0.7]{images/InterfaceSpecs.jpg}
\vspace{-40pt}
\end{figure}

\newpage
\section{Test Plan and Results}
\paragraph{Overall Test Plan}
The data coming into and out of our project takes only one form: text.  The large focus of our project deals with natural language processing and extracting information from conversational data, so most of our testing focuses on the integrity and correctness of the data.  Since the problem posed by the program is figuring out how to filter event-based data from text, testing whether what the program returns is truly an event seems like a circular problem.  We can, however, use black box testing to ensure that all incoming data produces output in the format we want, eliminating errors in things like data types.  The rest of the testing, to determine the actual integrity of the data, will most likely have to be performed by hand.

\subsection{Parser Integrity Test}
\paragraph{Purpose}The purpose of this test is to ensure that data stored by the parser is appropriately formatted with respect to the database schema.
\paragraph{Description}Tweets downloaded from Twitter can either be stored into Javascript Object Notation (JSON) files for later use or returned in real time as JSON formatted entries streamed directly from Twitter. After receiving the data, the parser should manipulate and format it appropriately with respect to the database schema.
\paragraph{Input}JSON file
\paragraph{Output}String information regarding the user who tweeted, the tweet itself and coordinates for where the user tweeted from.
\paragraph{Test Type}
\begin{itemize}
\item{Abnormal}
\item{White box}
\item{Functional}
\item{Unit}
\end{itemize}
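As a sketch of the mapping this test exercises, the snippet below pulls the schema fields out of one JSON-encoded tweet. The field names follow Twitter’s JSON layout, but \texttt{parse\_tweet} and the sample record are illustrative, not the production parser:

\begin{python}
import json

def parse_tweet(raw):
    """Map one JSON-encoded tweet onto the fields our schema expects."""
    content = json.loads(raw)
    record = {"username": content["user"]["screen_name"],
              "text": content["text"],
              "longitude": None, "latitude": None}
    coords = content.get("coordinates")
    if coords:  # geotagged tweets carry a [longitude, latitude] pair
        record["longitude"] = float(coords["coordinates"][0])
        record["latitude"] = float(coords["coordinates"][1])
    return record

raw = ('{"user": {"screen_name": "testuser"}, '
       '"text": "show at the bar on 5/21!", '
       '"coordinates": {"coordinates": [-84.51, 39.10]}}')
print(parse_tweet(raw))
\end{python}

A tweet without a geotag simply leaves the coordinate fields empty, which is why the location columns in the schema must accept missing values.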

\subsection{Relevant Event Data}

\paragraph{Purpose}The purpose of this test is to ensure that data deemed confident by the parser contains information consistent with the specifications defined by the regular expressions in the parsing module.
\paragraph{Description}This test is likely the most important.  Tweets deemed confident by the parser must be examined by the developers. Developers must verify that the program is accurately selecting data which is in agreement with the specifications defined by the regular expressions used within the parser.
\paragraph{Input}Database of aggregated tweets
\paragraph{Output}Subset of tweets that are accurately picked up by the regular expressions.
\paragraph{Test Type}
\begin{itemize}
\item{Normal}
\item{White box}
\item{Performance}
\item{Integration}
\end{itemize}

\newpage
\subsection{Relative Time Test}
\paragraph{Purpose}The purpose of this test is to ensure that the data gathered from Twitter and presented on TheTweetographer.com is current.
\paragraph{Description}Before being committed to the database, events are assigned a date approximated by the parser.  This test requires checking dates assigned to tweets and removing old tweets from the web page once they are obsolete.
\paragraph{Input}Set of user events extracted from tweets.
\paragraph{Output}Subset of tweets that are approximated to occur on or after the current date.
\paragraph{Test Type}
\begin{itemize}
\item{Boundary}
\item{Black box}
\item{Functional}
\item{Unit}
\end{itemize}
   
\newpage
\subsection{Time of Access Test}
\paragraph{Purpose}The purpose of this test is to make sure that data is provided in real time to the user.
\paragraph{Description}Essentially this test will ensure that tweets posted will be evaluated by the parser and presented to the web page in real time. This test will be a measure of the time it takes for any confident tweet to appear on the web page after it has been posted on Twitter.
\paragraph{Input}Tweet from Twitter feed.
\paragraph{Output}The outputs from the program should be locations/times of events.
\paragraph{Test Type}
\begin{itemize}
\item{Normal}
\item{Black box}
\item{Performance}
\item{Integration}
\end{itemize}

\subsection{Test Case Matrix}
\paragraph{}
\begin{tabular} {| l | c | c | c | c |}
\hline
Test Case & 1 & 2 & 3 & 4 \\ \hline
Parser Integrity & Abnormal & White Box & Functional & Unit \\ \hline
Relevant Event Data & Normal & White Box & Performance & Integration \\ \hline
Relative Time Test & Boundary & Black Box & Functional & Unit \\ \hline
Time of Access Test & Normal & Black Box & Performance & Integration \\ \hline
\end{tabular}
\newpage
%Test Plan and Results (describe execution and results of tests)
\subsection{Results} 
\paragraph{}
The testing for our project revolved around two python modules, the client and parser, as well as the data passing through the modules.  At the beginning of the testing portion of our project, we isolated the data by removing the client from the internet and running the regular expressions over a set of roughly 2,000 tweets collected from Twitter.  

\paragraph{Parser Integrity}This test is essentially a black box test used to ensure that any data presented to the parser is mapped to an appropriate output format.  From the general format of data collected from Twitter, we can make assumptions about what type of data the parser will be presented with.

\begin{table}[ht]
\centering
\begin{tabular}{| c | c | c |}
\hline
Tweet Attribute & Input Type & Output Type \\ \hline
Username & String & String \\
Text & String & String \\
Date & - & Date \emph{(YYYY-MM-DD)} \\
Location & Tuple &  Float (x2) \\ 
Score & - & Integer \\ \hline
\end{tabular}
\caption{Twitter Data Types}
\end{table}

To execute this test, we used our test set of roughly 2,000 tweets to feed information collected from Twitter into the parser.   After passing through the parser, the test results were formatted and inserted into the database.  As a result of this test, we found that much of the text coming from Twitter contained Unicode characters that the database, which only handles UTF-8, could not accept.  As a fix, if a record cannot be inserted into the database, a UnicodeEncodeError is thrown by the Python code in the parser, and the text is re-encoded into UTF-8 so that it can be inserted into the database.
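The fix reduces to the following pattern (a sketch in modern Python; \texttt{db\_insert} is a hypothetical stand-in for our actual database call):

\begin{python}
def utf8_safe(text):
    # Round-trip through UTF-8, replacing anything the encoding rejects,
    # so the database INSERT never fails on an exotic character.
    return text.encode("utf-8", errors="replace").decode("utf-8")

def insert_with_retry(db_insert, record):
    try:
        db_insert(record)
    except UnicodeEncodeError:
        record["text"] = utf8_safe(record["text"])
        db_insert(record)  # retry with the sanitized text

# A lone surrogate is the kind of character that triggers the error.
print(utf8_safe("party tonight \ud83c"))
\end{python}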

\newpage
\paragraph{Relevant Event Data}Testing the validity of the data returned by the parsing module was, and still is, an ongoing process.  To ensure the integrity of what is stored as a confident tweet, we used a program called Expresso (\url{http://www.ultrapico.com/Expresso.htm}) to evaluate our regular expressions.  Once we had our regular expressions in place, we began by running the parser over the test data and examining the output.  To further refine the data caught by the parser, we went through the confident tweets returned by the parser and examined them for false positives.  Upon finding a false positive, we placed the tweet text into Expresso and evaluated exactly which portion of the regular expression was picking up the incorrect data.

As a result of these tests, we have increased the accuracy of the parser considerably since its first draft.  However, some false positives still occur, such as dates contained within Twitter usernames.  This is a slow process, and we plan to continue refining the parser using this method.
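For illustration, a simplified date pattern like the hypothetical one below happily matches the digits inside a username until it is anchored away from surrounding word characters. Refining lookarounds in this way is exactly the kind of change Expresso helped us evaluate:

\begin{python}
import re

# Hypothetical simplified date pattern, similar in spirit to the parser's.
naive = re.compile(r"\d{1,2}/\d{1,2}")
# Refined: refuse matches embedded in a word, @username, or longer date.
refined = re.compile(r"(?<![\w@/])\d{1,2}/\d{1,2}(?![\w/])")

tweet = "@dj10/31 is spinning at the party on 10/29"
print(naive.findall(tweet))    # picks up the username fragment too
print(refined.findall(tweet))  # only the real date
\end{python}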

\paragraph{Relative Time Test}The execution of this test is very simple.  To ensure that all tweets are up to date, we can simply order the MySQL tweet table by the EventDate column, or query for any date that is earlier than the current date.
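Assuming a \texttt{tweets} table with an \texttt{EventDate} column as described above, the check is a one-line query. The sketch below uses Python’s built-in \texttt{sqlite3} in place of MySQL purely so it runs anywhere; the SQL is the same:

\begin{python}
import sqlite3
from datetime import date

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tweets (text TEXT, EventDate TEXT)")
db.executemany("INSERT INTO tweets VALUES (?, ?)",
               [("old show", "2010-01-01"),
                ("upcoming show", "2999-12-31")])

# Keep only events on or after today; obsolete rows would be pruned instead.
today = date.today().isoformat()
current = db.execute(
    "SELECT text FROM tweets WHERE EventDate >= ? ORDER BY EventDate",
    (today,)).fetchall()
print(current)  # only the upcoming event survives
\end{python}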

\paragraph{Time of Access Test}This test was also easily executed.  Our biggest concern with this test was the amount of data coming from Twitter's Streaming API.  We were unsure if the program would be able to run the regular expressions over tweets, evaluate their scores and upload them to the database as quickly as they came in.  However, upon opening the program to the stream, we found that there was much less traffic than we had anticipated.  To test this, we created a Twitter account and tweeted something from within the bounding box.  As expected, due to the time taken for the tweet to be placed into the database and the accesses the web page must make to retrieve it, this process takes roughly 3-5 seconds.  We think this is good enough for “real time”.
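The measurement itself needs nothing more than two timestamps. The sketch below is a hypothetical harness: \texttt{post} and \texttt{appears\_on\_page} are stand-ins for posting the marker tweet and polling the web page, here faked with an in-memory set so the harness can be exercised offline:

\begin{python}
import time

def measure_latency(post, appears_on_page, timeout=10.0, poll=0.05):
    """Post a marker tweet, then poll until it shows up on the page."""
    marker = "tweetography-test-%d" % int(time.time())
    start = time.time()
    post(marker)
    while time.time() - start < timeout:
        if appears_on_page(marker):
            return time.time() - start
        time.sleep(poll)
    return None  # never appeared within the timeout

# Dry run with in-memory stand-ins for Twitter and the web page.
page = set()
latency = measure_latency(page.add, page.__contains__)
print("latency: %.3f s" % latency)
\end{python}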
\newpage

%User Manual (include photos and/or screenshots; give step-by-step instructions for how an end-user would use your project; for software, consider a screen capture %tool such as Jing) 
\section{User Manual}
\subsection{TheTweetographer.com}
\begin{center}
\includegraphics[scale=0.75]{images/viewer.png}
\end{center}
\newpage
\begin{enumerate}[A.]
\item{\textbf{New Events}}\\
\indent The New Events section displays the last five resolved events. This section will automatically update without interaction from the user.
\item{\textbf{Top Events}}\\
\indent The Top Events section displays the five highest scored events based on The Tweetographer's confidence scoring scheme.
\item{\textbf{Calendar}}\\
\indent To display the number of events on a given day, users can hover their cursor over a date entry on the calendar. By selecting that date the event table will be populated with events from the selected date. Users may also change the month by clicking the “Previous” and “Next” links located directly below the calendar.
\item{\textbf{Event table}}\\
\indent The event table is a table of tweets that represents an event on the selected date along with its confidence scoring. The user has the ability to sort the columns by clicking on the column headers at the top of the table.
\end{enumerate}
\subsection{iPhone App}
\indent Use of the iPhone application is straightforward.  Like the website, the application displays two sections that serve the same functionality as the “New Events” and “Top Events” sections described above.  An illustration is shown on page \pageref{marker}.

\subsection{Starting the Tweet Parser} 
\indent The parser module of the program is a Python script.  To run this script, the user will need the \texttt{pycurl} and \texttt{pymysql-db} libraries for Python.  \\

Users can run the script using an IDE like the one included with Python (IDLE), or via the command line using the command \emph{python tweet.py}. The parser will then begin collecting tweets.  Upon receiving tweets, the user can examine all tweets in the console and see which are distinguished as “junk” versus “confident” tweets.
\newpage

%Final Results (document what was accomplished; what are your final deliverables?)
\section{Final Results}
\subsection{Deliverables}
\paragraph{TheTweetographer.com}Our final version of the website.  TheTweetographer.com presents information from the MySQL database through a calendar from which users can select a date and view all events for that date.  Events are displayed in a table which is sorted by confidence in descending order.
\begin{center}
\includegraphics[scale=0.5]{images/website.jpg}
\end{center}
\newpage

\paragraph{iPhone App}Using the Sencha Touch framework, we created a web-based application for the iPhone and iPad which uses JavaScript and PHP to present users with a mobile interface to The Tweetographer.  The application provides a live feed of tweets as they are committed to the database, as well as a list of the top tweets retrieved so far.
\label{marker}
\begin{center}
\includegraphics[scale=0.75]{images/iphone.jpg}
\end{center}

\newpage
\paragraph{Scoring Function}Using regular expressions, we developed a simple scoring function to grade tweets on how confident we were that they contained information about a potential event.  Different combinations of criteria met by the tweet also add bonus scores to push tweets with the most useful information to the top of the rating scale.
\begin{center}
\includegraphics[scale=0.75]{images/scoring.jpg}
\end{center}
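As a minimal sketch of the idea (the patterns, weights and combination bonus below are illustrative, not the production values shown in the figure above):

\begin{python}
import re

# Illustrative criteria: each regex that matches adds its weight, and
# tweets meeting several criteria earn a combination bonus that pushes
# the most useful tweets toward the top of the rating scale.
CRITERIA = [
    (re.compile(r"\b\d{1,2}/\d{1,2}\b"), 2),                 # a date
    (re.compile(r"\b\d{1,2}(:\d{2})?\s?[ap]m\b", re.I), 2),  # a time
    (re.compile(r"\b(tonight|tomorrow)\b", re.I), 1),        # relative day
]
COMBO_BONUS = 3  # reward tweets hitting more than one criterion

def score(text):
    hits = [w for rx, w in CRITERIA if rx.search(text)]
    bonus = COMBO_BONUS if len(hits) > 1 else 0
    return sum(hits) + bonus

print(score("free show tonight at 9pm"))  # time + relative day + bonus
print(score("I love sandwiches"))         # no event signal at all
\end{python}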
\newpage
\paragraph{Parser Module}The following code is a representative excerpt from the parsing module.  It shows the procedure the parser follows after receiving each tweet.  The parser keeps track of how many tweets it has examined to date and updates the count in the MySQL database roughly every 5 minutes: once a tweet is received, the parser checks how long it has been since its last update, and updates the count if it has been longer than 5 minutes.  The tweet is then converted from a JSON record to an instance of the container class \texttt{tweetObject}, which holds information such as username, text, longitude, latitude, matched regex flags, and event date.  This object is passed into the parser, where it is examined by the regular expressions.  If the tweet matches any of the 8 regular expressions, it is passed to the weighting function to determine whether it should be committed to the database.
\begin{python}
def on_receive(self, data):
    self.check_timer() # check time since last update to the counter
    self.buffer += data
    if data.endswith("\r\n") and self.buffer.strip():
        self.counter += 1
        content = json.loads(self.buffer)
        self.buffer = ""
        if "text" in content:
            t = tweetObject() # container class for tweets
            t.userName = u"{0[user][screen_name]}".format(content)
            t.text = u"{0[text]}".format(content)
            try: # if there are coordinates, get them too
                coordinates = content["coordinates"]["coordinates"]
                t.longitude = float(coordinates[0])
                t.latitude = float(coordinates[1])
            except TypeError: # tweet was not geotagged
                pass
            self.p.translate(t)

def translate(self, tweet):
    tweetTest = self.isGood(tweet) # run tweet through the regexes
    if tweetTest: # tweet matched at least one regex
        self.weightTweets(tweetTest) # score the tweet
\end{python}
\newpage

%Statement of Goals vs. Accomplishments
%Give a one-page overview (at least two paragraphs) of what you intended to do and what you were able to accomplish; include a discussion of design decisions and %factors that contributed to the final product.
\section{Goals vs. Accomplishments}
\paragraph{}When this project first started, we hoped that anything we produced would be something that others could experience and be just as interested and motivated as we were in producing it.  As Computer Science majors, we felt that we needed to search beyond simple software.  We wanted something that would express one of the things we feel CS celebrates above all, creativity. In that spirit, we set off to create.
\paragraph{What We Planned to Do}Upon starting this project, we were fairly naive about the limited amount of time we had.  Deadlines seemed to fly by, and the project started to take form.  Initially, we had dreams of producing software that would crawl the \emph{entire} Twitter network, process tweets and produce a very concrete picture of what was going on.  Our initial design hoped to plot points on a map of the city, detailing where potential events were happening.  We wanted to aggregate tweets about similar events and give people all of the information (address, time, date, etc...) needed to determine what was going on and where.
\paragraph{What We Did}After spending time on this plan, we came to realize that it was not feasible in such a limited amount of time.  Our first problem was how to actually define an event.  We found that parsing natural language and developing methods for the program to produce detailed information would involve writing a much more complex program than we had time for.  We also began to see that locations people were tweeting from were usually unlikely to be the location of the event they were tweeting about.

More so, the information needed to pinpoint the location of an event was not always available. We decided that our original plan placed too much burden on the program to spoon-feed data to the user, and we began to revise it.  Instead of giving users a formulaic version of an event, we settled for an approach that shares some of this burden with the user.  By implementing a scoring function, we found a way to effectively filter and present only data that was meaningful (on a calendar rather than a map) while leaving the rest of the work to the user.  The end result is a different answer to our original question, but it proves that with more time and effort, our original goals were not so far-fetched after all.

\newpage

%Self-Assessments
%A.	Initial Self-Assessment (fall quarter)
%B.	Final Self-Assessment (spring quarter) [what did you specifically contribute to this project?  what did you do? what did you learn? give at least two paragraphs %with 6 sentences minimum; address both technical and non-technical learning that resulted from your senior project activities; this is an individual assignment.]
\section{Self-Assessments}
\subsection{Initial Self-Assessment}
\paragraph{Billy}My senior design project is to build a web application that will crawl Twitter posts to see what events are
happening in geographical areas and then plot them on a local map. This will also give me the ability to
study and understand the importance of extracting and analyzing data from social networks. I feel that
this project will be very challenging yet very useful if successful.\\

Most of the course work I've taken will help me in this project. Database Design will be most helpful since
there will be a lot of data that we will need to store and have quick access to. We will need to come up
with an efficient database schema that will store tweet attributes, such as tweet rank and location.
Another very useful course for this project that I have taken is Software Engineering. Software
Engineering stressed the importance of fully understanding the problem and having a good software
design which will be vital for our success for this project. Algorithms will also help since we will need to
use ranking algorithms and pattern recognition to rank and sort tweets.\\

Along with course work, my co-op experience will also help tremendously with this project, since all my
co-ops involved web development. My first co-op was with the National Radio Astronomy Observatory and
required me to research, design and implement a wizard-like web application to help astronomers
formulate their observing techniques. It was built on the Django web framework and powered by a
PostgreSQL database and Google's Web Toolkit. I believe that my design experience from this co-op will
definitely contribute to the design and understanding of web applications needed for this project. My
second co-op experience was with a marketing firm named HyperDrive Interactive. I was a web
programmer for them, which gave me the opportunity to create and manage many web applications,
including a store locator which used Google Maps and a video contest website that used YouTube
for hosting. I spent a lot of time reading Google’s documentation for their applications, which we will use
in our senior project.
\newpage
\paragraph{Alex}Our project is a study of the ability to extract important, relevant information from
social networking sites such as Twitter, Facebook or MySpace. We plan to crawl Twitter
posts to extract information about what events are taking place in specific geographical
locations, such as cities or college campuses.\\

As far as curriculum goes, the majority of the skills that have prepared me for this
project are related to graph theory. Classes such as Algorithm Design and Analysis II and
Discrete Computational Structures II have introduced us to graph theory, and Algorithms
specifically has provided real-life examples of where it can be applied. One of these
applications happens to be social networking sites, where friends create edges between the
nodes on the graph by “friending” or “following” each other. Software Engineering has also
given me the tools to effectively research, design, document and implement a program on
the scale that we hope to produce. Also, being a Computer Science major, I have
picked up a lot of information about programming and different languages that will allow
us to choose the appropriate language and implement the project cleanly.\\

While my co-op experience has been limited to one company, I feel that I have
picked up a few skills there that might help me with this project. Most of my co-op
experience has been in programming and software development. Through the flaws and
successes of my experience I have seen what disorganization and lack of research can do to
the software development process. I have become extremely familiar with database
interactions, which I assume that we will be using on this project as well as some limited
experience with crawling XML pages for information. I’m also familiar with project
management and source revision via Visual Studio and subversion (SVN), so code
management shouldn’t be a problem with this project. The company I work with, American
Technology \& Services, is primarily an Aerospace Engineering company, and through it I
have developed communication skills to effectively communicate ideas to co-workers who
come from non-computer-related disciplines. These may or may not be important to our
project, but they certainly can’t hurt us.
\newpage
\subsection{Final Self-Assessment}
\paragraph{Billy}Even though I was in a small group which worked closely together on most of the project, I was solely responsible for a few different aspects of it. My main responsibility was our website. I was in charge of setting up and managing our web server and database. I was also responsible for programming the home, about, and contact pages. Using PHP, JavaScript, CSS, and SQL, I was able to create our live events feed, top events list, and the calendar that displays all of our final results. Another task I had was to come up with the initial architecture design. \\

The Tweetographer was a great non-technical and technical learning experience for me. One of the biggest non-technical things I’ve learned was how popular and valuable Twitter has become in our society. Everyone from your average person all the way up to the president and professional athletes has been consumed by this online experience. Before this project I never used Twitter and never knew the value of the user-generated content of social networks. Now I’m realizing just how much valuable data is out there and how much you can do with it. As for some of the technical things I’ve learned, my biggest hurdle was managing incoming streams from a third party over the web. It took a lot of studying and testing to finally get the data flowing in without any errors. After that we had problems with losing our connection and missing data. However, I feel like I finally understand and have a grasp of how to handle streaming data. It also allowed me to learn how to use the Python module pycurl, which helps handle internet requests.
\newpage
\paragraph{Alex}After the project was in full swing, Billy and I decided that we would take advantage of each other's expertise in pertinent areas of the project and split the work right down the middle.  My own responsibility was to work on the parsing module and write the code for the back end of the program, while Billy's responsibility was to design the webpage and integrate the database into it.  What started as a very technical endeavor of finding the correct framework and languages to use really turned into a very conceptual learning experience for me.  During my time developing the parser, I put a lot of thought and effort into figuring out a way to define an event in a precise, concrete fashion.  It's actually kind of funny how the project came together in a period of two or three weeks, because once we defined the scoring function, the definition we were looking for really fell into our laps.  \\

Conceptually, one of the key things I learned from this project was how to approach problems like ours realistically.  Our immediate goals were admittedly far-sighted, even naive, and once we determined that they weren't realistic, we became very discouraged.  I feel that this is a very important experience, especially for creative projects like ours that don't necessarily have a right or wrong answer.  Looking back on this, I learned that tackling a problem is not always done by going at it head on, but rather by strategically approaching it from the side.  I'm very proud of what we've done with this strategy, because we took something that we thought was overwhelmingly impossible and produced results, and interesting ones at that.\\

Technically, however, I learned a lot along the way while researching for this project.  Even after the project has been completed, I still find myself looking into new libraries and languages to find the best fit for it.  In producing the iPhone application I learned a lot about web frameworks and languages, something I haven't been very mindful of in the past.  Most of the Python scripting I was already very familiar with, but there were some obvious learning opportunities when we tried to hook into the Twitter API.  If I've learned or gained anything from this project, I think it has inspired me to devote more of my free time to coming up with newer, creative ideas and implementing them in my own creations.  It's been a very valuable experience, but hopefully the end is not in plain sight.
\newpage

%Summary of Hours and Billing
%Give a quarter summary for each team member (hours and amount) for all three quarters; give a total for each team member for the year (hours and amount); give a 
%total for the project (hours and amount); a brief justification of the activities for each team member associated with the hours should be given.
\section{Billable Hours}
\begin{table}[h]
\centering
\begin{tabular}{| c || c | c | c | c |} \hline
Task & \multicolumn{2}{| c |}{Hours} & \multicolumn{2}{| c |}{Amount}\\  \cline{2-5}
& Billy & Alex & Billy & Alex \\ \hline
Database & 3 & 3 & \$225 & \$225 \\ 
Parser Module & 5 & 30 & \$375 & \$2250 \\ 
TheTweetographer.com & 30 & 0 & \$2250 & \$0 \\ 
iPhone Application & 0 & 8 & \$0 & \$600\\ 
Design Poster & 8 & 8 & \$600 & \$600\\ 
Testing & 14 & 14 & \$1050 & \$1050 \\ \hline
\textbf{Winter Quarter} & 33 & 35 & \$2475 & \$2625 \\ 
\textbf{Fall Quarter} & 32 & 32 & \$2400 & \$2400 \\ 
\textbf{Total} & 125 & 130 & \$9375 & \$9750 \\ \hline
\textbf{Project Total} & \multicolumn{2}{| c |}{255} &  \multicolumn{2}{| c |}{\$19125} \\ \hline
\end{tabular}
\end{table}
\subsection{Tasks}
\small
\paragraph{Database}Development on the database entailed designing the schema, implementing constraints to ensure no duplicate records could be inserted, as well as adding and removing fields as features were added to the project.
\paragraph{Parser Module}Design and implementation of the regular expressions, scoring system and interaction with both the Twitter API and the MySQL database.  
\paragraph{TheTweetographer.com}Design and implementation of the web site.  This includes setting up a host, the overall design template of the site, implementation of the interactive calendar, and PHP interaction with the database to retrieve the newest and top-rated tweets.
\paragraph{iPhone Application}Design and implementation of a web-based iPhone application using JavaScript and the Sencha Touch library, as well as PHP to interface with the database and retrieve the newest and top-rated tweets to display in the app.
\paragraph{Design Poster}  Time taken to design and refine the poster displayed at the expo.
\section{Expenses}
\begin{table}[h]
\centering
\begin{tabular}{| p{10cm} | c |} \hline
\centering Item & Cost \\ \hline
\centering Server and Domain Name & \$47.08 \\ \hline
\centering \textbf{Total Cost} & \textbf{\$47.08} \\ \hline
\end{tabular}
\end{table}
\end{document}
