\input{format.tex}


\section*{Aug 08, 2014}
As was shown by \cite{wang_2011}, with enough geographical landmarks one can locate any IP address to any given precision. I need to revisit the topic to see what I can do beyond this.

The first thing I can do is to explore the precision that can be achieved with a limited number of known web landmarks. Collecting web landmarks is tedious and error-prone: it is hard to determine that an IP address indeed belongs to some region. The method works best for big cities where many landmarks can be located, while in sparse areas, such as the northern part of the country, it may not work as expected. Our idea is to rely on reliable but limited landmarks, such as universities' locations and their IPs, to study the geolocation problem. More specifically, we want to see: 1. with only university websites, what kind of accuracy can we achieve; 2. how many landmarks we need to achieve the expected accuracy.

Data to prepare. What I already have: university name, website and IP, and IP info (latitude and longitude).

1. We already noticed that some universities use a webpage hosting service to serve their webpage; these should be excluded from our data set. This can be done by resolving the website's IP address twice and comparing the results: if the returned results differ, the entry should be excluded.
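A minimal sketch of this two-resolution check. The function names and the decision rule (exclude exactly when the two answers differ) are my reading of the note, not an existing tool:

```python
import socket

def resolve_ips(hostname):
    """Resolve a hostname to the set of IPv4 addresses currently returned."""
    try:
        _, _, ips = socket.gethostbyname_ex(hostname)
        return set(ips)
    except socket.gaierror:
        return set()

def looks_hosted(first, second):
    """Given two resolutions of the same name, differing answers suggest a
    hosting service or CDN rotating addresses, so the entry is excluded."""
    return first != second

# Pure-function checks, without touching the network:
looks_hosted({"1.2.3.4"}, {"1.2.3.4"})  # False: stable address, keep it
looks_hosted({"1.2.3.4"}, {"5.6.7.8"})  # True: rotating addresses, exclude
```

In practice one would call `resolve_ips` twice with a delay between the lookups and feed both answers to `looks_hosted`.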

2. The purified data can then be used as landmarks. According to our data, we have at least one university for every degree of latitude/longitude (roughly 100x100 km) in the eastern part. The western part is sparser, but we still have at least one point per 500 km. So the problem can be reduced to: given an area surrounded by some landmarks, can we estimate a geolocation inside it?

3. We have checked the results of CBG and TBG; we still need to check Octant. But all of these methods seem to have accuracy problems. Do we still need to use these delay-constraint methods to determine the possible region? According to my tests, the possible region is always too large, and that is the core reason delay-constraint methods have large errors.



Another thing we can do is to continue working on the idea of geolocation. Many people assume that network latency is related to geographic distance. However, this does not seem to be true; if it were, it would be easy to locate a point by simple mathematical methods. Instead, network latency must be determined by other factors as well. Our research can focus on methods for analyzing the relationship between network latency and geographic location.

\section*{Aug 10, 2014}

Prof. Katz-Bassett mentioned in his email that he thinks ``The NSDI11 paper Tom linked does this, although they used proprietary information, and there are a number of problems with their approach.''

I have had similar thoughts about this, although my starting point is simply that they use too many landmarks, and it cannot be guaranteed that one can always find enough landmarks in the region one wants to study. So my idea is to work with a smaller set of landmarks at fixed locations (universities) and use heuristic functions to guess the location of a given IP. But the argument will be much stronger if we can claim that the NSDI 11 method is indeed impractical and ours is more practical.

\textbf{Idea:} Collect ping data from some fixed probes to these universities, and learn patterns from these data. Will there be any relationship between the location and the ping responses? I believe so. However, this may not be accurate enough to pin the IP address down to a city. We can also use route info for this.

If ping data is not accurate enough, we can also think about using traceroute info. Traceroute is thought to provide more information than ping, but it is also harder to convert this info into a learnable pattern. If we can achieve this, I believe it will be a much more valuable result than the geolocation problem itself.

\section*{Aug 11, 2014}
Talked to Jeanna and started working on this new idea.

The first step is to wash out data that may be problematic. We will re-locate the IP address of each website and query its whois info to make sure the institution owns the network. If a website owns more than one IP from different subnets, or its IP address is owned by some non-edu facility, we will mark it for further manual check.

We will then use IPAddressLab.com's information to further check whether the IP address belongs to the same state as the institute. If not, it will be marked for manual check.

\section*{Aug 12, 2014}
Finished washing the university dataset; 2015 entries remain. The next step is to determine whether to use ping or traceroute to collect the data we need.

The advantage of ping is that its data is easy to collect and the model is simple. Assume we have $n$ probes $p_1,...,p_n$, and the ping results returned from these probes to a given destination are $d_1,...,d_n$. Then $d_1,...,d_n$ forms a vector that we can use for learning.
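A minimal sketch of how such a vector might be assembled. The helper names are hypothetical, and summarising each probe's RTT samples with the median is my assumption (it keeps one congested packet from dominating):

```python
from statistics import median

def ping_feature_vector(rtts_per_probe, probe_ids):
    """Build the learning vector (d_1, ..., d_n) for one destination.

    rtts_per_probe maps a probe id to the list of RTT samples (ms)
    collected from that probe. A probe with no successful replies
    contributes None, to be imputed later."""
    vec = []
    for pid in probe_ids:
        samples = rtts_per_probe.get(pid, [])
        vec.append(median(samples) if samples else None)
    return vec

probes = ["p1", "p2", "p3"]
ping_feature_vector({"p1": [10.2, 9.8, 10.0], "p2": [55.1]}, probes)
# -> [10.0, 55.1, None]
```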

Traceroute will definitely provide more information, but it is harder to think of a model that traceroute data fits into and that we can expect to learn well from.

Why don't I start working on the measurement today? The primary reason is that I am not sure which probes to choose as the vantage points. RIPE Atlas anchors are more reliable, but there are too few of them: only 8 anchors in the US, and their distribution is uneven. I am afraid this choice will make our measurement inaccurate. However, if I choose individual probes from different regions, I am afraid they will go offline some day and affect the measurement results. A compromise is to first use the anchors and then choose some probes from regions not covered by them. If some of the probes go offline, we can choose another probe from a nearby region instead.

Currently we have anchors in the eastern part, so we need some observation points in the northern and middle parts. We choose cities that have large numbers of probes: Minneapolis (10 probes available), Denver (13 probes available), Spokane (4 probes available), St. Louis, MO (5 probes available) and Detroit (26 probes available).

\section*{Aug 13, 2014}
It turns out that some information provided by IPAddressLab.com conflicts with information from RIPE Atlas. We choose to trust the RIPE Atlas information.

Today I started generating measurements from anchors. There are 10 anchors in the US, located in 7 separate regions.

I use 5 packets from each of 7 probes, which consumes 70 credits per university. This means a total consumption of about 140,000 credits, which is an acceptable amount even allowing for errors.

\section*{Aug 14, 2014}
I choose to record up-to-date latitude/longitude information in the ping\_anchor table in case some IP addresses change location after we measure them.

\section*{Aug 15, 2014}
In order to get more landmark information, we collect city government website information. We choose the most populous cities in each state, query Google for their official websites, then check the IP addresses those websites correspond to.

We wash the data with the following steps. We first check which city the IP address belongs to; if it belongs to the claimed city, we definitely choose it as a candidate. Otherwise, we check the owner of the IP address; if the owner is the city, we also mark it as belonging to the city. We also use the GeoLiteCity database to locate these websites. We checked 2800 cities, and only 760 of them are for sure located in the expected city.
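The washing steps above can be sketched as a small classifier. The substring match against the whois owner string is a simplifying assumption of mine; the real check was partly manual:

```python
def classify_city_site(claimed_city, geo_city, whois_owner):
    """Apply the washing rules from the notes: accept the site if
    geolocation already places the IP in the claimed city; otherwise
    accept it if the whois owner names the city; else send it to
    manual check. Inputs are plain lowercase strings here."""
    if geo_city == claimed_city:
        return "candidate"       # IP geolocates to the city itself
    if claimed_city in whois_owner:
        return "candidate"       # the city government owns the address block
    return "manual-check"

classify_city_site("springfield", "springfield", "some isp")         # candidate
classify_city_site("springfield", "chicago", "city of springfield")  # candidate
classify_city_site("springfield", "chicago", "hostco llc")           # manual-check
```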

\section*{Aug 27, 2014}

I have spent the past 10 days reading books and papers on neural networks and kernel methods. I think a neural network/radial basis network may be a good candidate for estimating the city position in the last hop. With the previous techniques, we can locate the IP address within a range of 100 km. The remaining range is within a single hop, and a method based on a radial basis network should work fine.

\section*{Aug 28, 2014}

The first thing we need to verify is the relevance of the features in the IP ping data. Question: is latency still sensitive to distance when the points are far enough apart? This might be done using Automatic Relevance Determination (ARD). If this assumption is proven true, then we can have a section criticizing constraint-based geolocation. Section 6.4.4 gives a good example of using ARD to determine the relevance of parameters.

\section*{Aug 29, 2014}
Split the data into two sets, train and test. Use the early-stopping technique.

\section*{Aug 31, 2014}
Now I need to determine which model we should use. Naturally, we can directly use a neural network with sigmoid hidden units and a linear output layer. However, if we want a distribution as the result, a Gaussian mixture model can be used, where each Gaussian component models one or more hops. A radial basis network is another choice: under the guess that the latency-to-distance relationship is symmetric, a radial basis network predicts the next point's value from the current data, which is exactly what we want.

Today I tried three ways to train a neural network. MLP\_80 is a multilayer perceptron with 80 sigmoid hidden units and a linear output layer; MLP\_100 increases the hidden units to 100. RB is a network with radial basis functions. With 1067 neurons, RB gives a minimum squared error of 4, which corresponds to a $\sim$200 km error. TODO: the network currently has no penalty for out-of-border predictions; adding penalty terms may help reduce the network size.
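A minimal numpy sketch of the RB idea: fix the centers, compute Gaussian basis features, and fit the linear output layer by least squares. The toy 1-D data, the choice of training points as centers, and the value of gamma are all placeholders, not the actual measurement setup:

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian bumps exp(-gamma * ||x - c||^2) at fixed centers."""
    d2 = np.square(X[:, None, :] - centers[None, :, :]).sum(-1)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, centers, gamma):
    """Least-squares fit of the linear output layer of an RBF network."""
    w, *_ = np.linalg.lstsq(rbf_features(X, centers, gamma), y, rcond=None)
    return w

# Toy example: learn y = sin(x) on [0, 3], using the training points as centers.
X = np.linspace(0.0, 3.0, 30).reshape(-1, 1)
y = np.sin(X).ravel()
w = fit_rbf(X, y, X, gamma=4.0)
pred = rbf_features(X, X, 4.0) @ w
max_err = float(np.max(np.abs(pred - y)))  # small on the training set
```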

The next step is to apply a method I call ``local adjustment'' to the result. We start by collecting points that lie around the predicted area in 8 directions, making sure we find one in each direction. Points close to the destination and to the middle line are preferred. We set up thresholds to make sure we collect enough adjacent points.

Assume that we collected latency data $\textbf{x}_n$ at the landmark at position $\textbf{t}_n$ around the local area. For the unknown point, the observed value is $\textbf{x}$, and we want to know its estimated position $\textbf{t}$.

We first describe a framework that solves this as an optimization problem. We start by assuming that all data are accurate and come with no error. Within this small area, we assume that the difference in latency translates to distance via $k(\textbf{x},\textbf{x}')$, a continuous function. Thus for each landmark we have an estimation circle centered at $\textbf{t}_n$ with radius $k(\textbf{x},\textbf{x}_n)$, denoting the estimated position from $\textbf{t}_n$'s point of view. The real position $\textbf{t}$ should have minimal distance to all these circles, which can be written as the following optimization problem:

\begin{align*}
\text{Minimize: }&  \sum_{n=1}^N\left | \hspace{1mm}|| \textbf{t}- \textbf{t}_n || - k(\textbf{x},\textbf{x}_n) \hspace{1mm}\right |
\end{align*}

The function $k(\textbf{x},\textbf{x}_n)$ describes the relationship between latency differences and geographical distances. We calculate it by assuming that, within a small area, latency and distance are proportional. The coefficient is estimated by taking all pointwise values into account.
\begin{align*}
&k(\textbf{x},\textbf{x}') = \textbf{k}^T(\textbf{x}-\textbf{x}') \\
&\textbf{k} = \frac{1}{N}\sum_{i,j}^N \frac{|| \textbf{t}_i - \textbf{t}_j||}{(\textbf{x}_i - \textbf{x}_j)}
\end{align*}
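A stdlib-only sketch of this minimisation, using brute-force grid search over a square around the landmarks' centroid. The precomputed radii stand in for the values $k(\textbf{x},\textbf{x}_n)$; grid span and step are arbitrary choices of mine:

```python
import math

def locate(landmarks, radii, span=10.0, step=0.1):
    """Minimise sum_n | ||t - t_n|| - r_n | over a square grid centred on
    the landmarks' centroid. landmarks are (x, y) pairs; radii are the
    estimated distances from each landmark to the unknown point."""
    cx = sum(p[0] for p in landmarks) / len(landmarks)
    cy = sum(p[1] for p in landmarks) / len(landmarks)
    best, best_cost = None, float("inf")
    steps = int(span / step)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            cand = (cx + i * step, cy + j * step)
            cost = sum(abs(math.dist(cand, p) - r)
                       for p, r in zip(landmarks, radii))
            if cost < best_cost:
                best, best_cost = cand, cost
    return best

# Three landmarks with exact distances to the unknown point (1, 2):
landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
radii = [math.dist((1.0, 2.0), p) for p in landmarks]
t = locate(landmarks, radii)  # lands close to (1.0, 2.0)
```

A real solver would replace the grid with a non-linear optimiser, but the cost function is exactly the one in the objective above.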

Now we take measurement error into account. Assume the measurement result for each landmark is governed by a Gaussian distribution $\mathcal{N}(0, \beta^{-1})$, which describes the probability that other points have similar measurement results. This means the real position of the $n$-th landmark is now a random variable with distribution $\mathcal{N}(\textbf{t}_n,\beta^{-1})$. This turns the problem into a probabilistic optimization problem; I will come back to this later.

Finally, we will use city map information to adjust the output. We do this by appending city location information to the optimization problem: for each city $c_i$ in the given area, we place a non-normalized Gaussian distribution centered on it, whose height is determined by the city's population normalized within the field. The problem is no longer linear once we take this into account; again, it can be solved with a neural network or non-linear optimization.

\section*{Sep 1, 2014}
Rewrote the notes to make them clearer.

Working on a Gaussian process to deal with this. The advantage of a Gaussian process is that by choosing different kernel functions we can try different ways of exploring the relationship between adjacent points.
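A minimal numpy sketch of GP posterior-mean prediction with a squared-exponential kernel, on toy 1-D data. The kernel choice and hyperparameters are placeholders; swapping in a different `rbf_kernel` is exactly the knob mentioned above:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential kernel; the length scale controls how quickly
    correlation between nearby inputs decays."""
    d2 = np.square(A[:, None, :] - B[None, :, :]).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Posterior mean of GP regression with a fixed kernel and a small
    noise term on the diagonal for numerical stability."""
    K = rbf_kernel(X_train, X_train, length) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train, length) @ alpha

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X).ravel()
gp_predict(X, y, X)  # nearly reproduces y at the training points
```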

\section*{Sep 2, 2014}

Finished writing the Gaussian process code; started working on the paper.

\section*{Sep 3, 2014}
I am thinking about using Monte Carlo to simulate the distribution of a given point's position when we use the probabilistic linear programming model. More specifically, we randomly sample points from the given distributions, solve the problem for each sample, and use clustering methods to group the results into Gaussian distributions.
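A rough illustration of this Monte Carlo idea, with two simplifications of mine: each sample is re-solved with a linearised trilateration rather than the full optimization, and the cloud of solutions is summarised by a single Gaussian instead of being clustered into a mixture:

```python
import numpy as np

rng = np.random.default_rng(0)

def trilaterate(centers, radii):
    """Linearised least-squares solution of ||t - t_n|| = r_n:
    subtracting the first circle equation from the rest gives a
    linear system in t."""
    c0, r0 = centers[0], radii[0]
    A = 2 * (centers[1:] - c0)
    b = (r0**2 - radii[1:]**2
         + np.square(centers[1:]).sum(1) - np.square(c0).sum())
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

def monte_carlo_cloud(centers, radii, beta_inv=0.01, samples=200):
    """Sample landmark positions t_n ~ N(t_n, beta^{-1} I), re-solve each
    sample, and summarise the resulting cloud of position estimates."""
    pts = np.array([trilaterate(centers + rng.normal(0, np.sqrt(beta_inv),
                                                     centers.shape), radii)
                    for _ in range(samples)])
    return pts.mean(0), np.cov(pts.T)

# Landmarks with exact distances to the point (1, 2):
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
radii = np.linalg.norm(centers - np.array([1.0, 2.0]), axis=1)
mean, cov = monte_carlo_cloud(centers, radii)  # mean stays close to (1, 2)
```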

\section*{Sep 6, 2014}
Othman suggests that after finishing the draft paper, I can go on to compare my method against the method in ``Towards Street Level'', using the same dataset.

\section*{Sep 8, 2014}
Othman suggests that instead of saying ``two-layer nn'' we should say ``two-stage nn'', because ``layers'' always means layers within a network.

\section*{Sep 9, 2014}
Othman suggests that we should add more sentences describing the two-tier structure in the title and abstract to make it more attractive.

\section*{Sep 10, 2014}
The following diagrams need to be created for RBF: 
\begin{itemize}
\item Radial-basis function network error
\item How many points can be found around each point in the given region
\item Radial-basis function network error in different densities
\end{itemize}

The following diagrams need to be created for MLP:
\begin{itemize}
\item Error rate of MLP in these small regions. This diagram shows a general result: the error for each of these points.
\end{itemize}

Instead of starting from the big network, I start from the data itself, determine a radius, and calculate the surrounding points. This is probably not the correct way to do it, but due to time limits I have to do it this way first.

\section*{Sep 13, 2014}
When using traceroute, we only care about the last several hops of the network path. This might be handled with a kernel method or a Markov-style model.
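One way the last-hops idea might be made concrete (purely illustrative; the hop labels are invented): count first-order transitions over the last $k$ hops of each path, which after normalisation gives a Markov chain over routers near the destination:

```python
from collections import Counter, defaultdict

def last_hop_transitions(paths, k=3):
    """Count first-order transitions over the last k hops of each
    traceroute path; normalising each Counter would give the Markov
    transition probabilities near the destination."""
    counts = defaultdict(Counter)
    for path in paths:
        tail = path[-k:]
        for a, b in zip(tail, tail[1:]):
            counts[a][b] += 1
    return counts

paths = [["r1", "r2", "r3", "r4"],
         ["r9", "r2", "r3", "r4"],
         ["r2", "r5", "r4"]]
t = last_hop_transitions(paths)
# t["r3"]["r4"] == 2 ; t["r5"]["r4"] == 1
```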

\end{document}

