\section{ETL Tool}
	
	In order to populate the data warehouse with the readings from the BagTrack system, an Extract-Transform-Load (ETL) tool is needed. This tool operates in three phases:
	
	\begin{enumerate}
		\item Initially, the \textit{extract} step queries the BagTrack database for new reading data and loads it into memory.
		\item The \textit{transform} step then manipulates the data in memory so that it fits the data warehouse schema.
		\item Finally, the \textit{load} step inserts the transformed data into the data warehouse.
	\end{enumerate}
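	As a minimal sketch, this three-phase cycle can be expressed in pseudo-code as follows. The function names are illustrative only, and, as described in the design below, the actual tool interleaves parts of the load step with the transform step:
	
	\begin{lstlisting}
while readings_available()
{
	reading = extract_reading()  // query the BagTrack database
	fact = transform(reading)    // fit the data to the warehouse schema
	load(fact)                   // insert into the data warehouse
}
	\end{lstlisting}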
	
	One of the main benefits of our warehouse model is the possibility of analyzing the paths of bags. This does, however, introduce an interesting problem into the design of the ETL tool, as the data for a specific bag-path pair will almost never be complete:
	
	\begin{itemize}
		\item If the bag is underway, we know its planned route, but we only know the locations it has already visited.
		\item If a bag moves to an airport that is not yet covered by the BagTrack system (henceforth referred to as \textit{outside the system}), we will likely never receive any more information about it.
	\end{itemize}
	
	Because readings are made continuously, the ETL tool must inherently support incomplete paths. It is also beneficial to import small amounts of readings often, rather than importing huge chunks of readings at a lower frequency: larger chunks would not help the incompleteness problem, but would only make the ETL process more complicated.
	
	The ETL tool is therefore designed to accept single readings from the BagTrack database. This way, the warehouse could, in theory, be updated immediately every time a new reading is made, improving the data warehouse's capability for on-the-fly analysis.
	
	\subsection*{Design}
		
		The design of our ETL tool is slightly unconventional, as some auxiliary loading is performed in the transform step. The tool may also extend the duration of existing facts, since a bag's path is modeled as a series of \textit{stays} in specific locations, and a new reading may change our knowledge of the previously recorded stay of a bag. It will, however, never remove facts.
		
		Figure \ref{fig:etl_diagram} shows how, in contrast to the typical image of an ETL tool, the load step is not separated from the extract and transform steps; rather, loading functions are called from the transform code in order to insert the newly transformed data into the warehouse. The reason for loading some data during transformation follows from the way the transform step treats locations and bags, as described below.
		
		\begin{figure}[h!tb]
			\includegraphics[width=\columnwidth]{parts/images/etlDiagram.pdf}
			\caption{Control and data flow of our ETL tool.}
			\label{fig:etl_diagram}
		\end{figure}
		
		The extract phase is quite simple -- it merely queries the BagTrack database and puts the data into an internal data structure. The least trivial part is parsing the original reader location code into one of the location types used in our warehouse, such as \textit{check-in}, \textit{sorter}, or \textit{gateway}. The type is derived from the original location code using simple string pattern matching.
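		To illustrate, the pattern matching could look as follows in pseudo-code. The code prefixes shown are invented for the example, as the actual BagTrack location codes are not reproduced here:
		
		\begin{lstlisting}
parse_location_type(location_code)
{
	// Hypothetical prefixes -- the real reader codes differ
	if location_code.starts_with("CI")  return check_in
	if location_code.starts_with("SRT") return sorter
	if location_code.starts_with("GW")  return gateway
	return unknown
}
		\end{lstlisting}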
		
		The transform phase finds the IDs of the dimensional values for Date, Time, Location and Bag. If the location or bag does not yet exist in the warehouse, the original data is transformed and inserted.
		In order to determine what else to do, the algorithm shown as pseudo-code in figure \ref{fig:etl_transform_algo} is used. Its input is the \texttt{reading} and the \texttt{bag}.
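		The lookup-or-insert behaviour for the Location dimension can be sketched as follows; the Bag dimension is handled analogously, and the method names are illustrative only:
		
		\begin{lstlisting}
lookup_location_id(reading)
{
	id = warehouse.find_location(reading.location_code)
	if id == null
	{
		// Transform the original data and load the new dimension value
		type = parse_location_type(reading.location_code)
		id = warehouse.insert_location(reading.location_code, type)
	}
	return id
}
		\end{lstlisting}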
		
		\begin{figure*}[h!tb]
			\begin{lstlisting}
new_fact = null
if first_record_of(bag)
{
	new_fact = create_fact(reading)
	new_fact.duration = 0
	new_fact.lost = !bag.is_location_in_route(reading.location)
}
else
{
	last_fact = lookup_last_fact(bag)
	if last_fact.airport == reading.airport
	{
		last_fact.duration = reading.timestamp - last_fact.timestamp
		last_fact.update_in_warehouse()
		
		if last_fact.location != reading.location
		{
			new_fact = create_fact(reading)
			new_fact.duration = 0
			new_fact.lost = last_fact.lost
		}
	}
	else
	{
		new_fact = create_fact(reading)
		new_fact.duration = 0
		if last_fact.lost
		{
			new_fact.lost = true
		}
		else
		{
			new_fact.lost = !last_fact.is_next_airport(reading.airport)
		}
	}
}
if new_fact != null
{
	new_fact.insert_into_warehouse()
}
			\end{lstlisting}
			\caption{Algorithm for Fact updating and creation by the ETL tool.}
			\label{fig:etl_transform_algo}
		\end{figure*}
		
		Firstly, if the reading is of a bag that does not yet exist in the data warehouse, a new fact with a duration of zero is simply created and inserted into the warehouse.
		If the bag is already known to the system, the last recorded fact for that bag is looked up. If this old fact is in the same airport as the new reading, the old fact's duration is updated such that the bag's stay in that location lasts until the new reading was made.
		If the last fact was additionally in a different location within that airport, we also create a new fact of duration zero and insert it.
		Finally, if the new reading was instead made in a different airport, a new fact with duration zero is simply inserted.
		
		The reason we do not update the duration when the bag has changed airport is that doing so would create incorrect behaviour during analysis: it would effectively give each last stay before departure a duration that is far too long. For example, if a bag goes to a gateway, and the next reading is made at the arrival of another airport 4 hours later, the warehouse would claim that the bag stayed in the gateway for 4 hours.
		
		Note that the non-zero durations are only estimates, since the RFID readers only generate instantaneous ``I am here'' readings. We simply have no way of knowing for sure when a bag leaves a certain location, so we estimate that it stays in a location until it arrives at the next one, with the aforementioned exception of airport changes.
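		Stated more precisely, if a bag is read at timestamps $t_1 < t_2 < \dots < t_n$, the duration recorded for the stay started by reading $i < n$ is
		\[
			\mathit{duration}_i =
			\begin{cases}
				t_{i+1} - t_i & \text{if readings $i$ and $i+1$ are in the same airport,} \\
				0 & \text{otherwise,}
			\end{cases}
		\]
		while the most recent stay, $i = n$, has duration zero until a newer reading arrives.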
		
		When a new bag enters the system, its lost status is set depending on whether the airport is in its planned route. This is because a bag can enter the system at any point in its route, if its point of departure is outside the system.
		
		If it has moved within the same airport, its lost status is set to that of the last fact. We have no knowledge of the layout of the sorter systems in the airports, which means that we cannot predict whether a bag is on its way to a flight going to the wrong airport. If readers were placed on the conveyor belts loading bags into airplanes, we could at least detect a bag going the wrong way before the airplane leaves, though at that point it may already be too late to take the bag off the airplane.
		
		Finally, if the bag has moved to a new airport, it remains lost if it was already lost at the last airport. Otherwise, it is marked as lost only if the new airport does not directly follow the last one in the route plan.