
%%%%%%%%%%%%%%%%%%%%%%% file typeinst.tex %%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is the LaTeX source for the instructions to authors using
% the LaTeX document class 'llncs.cls' for contributions to
% the Lecture Notes in Computer Sciences series.
% http://www.springer.com/lncs       Springer Heidelberg 2006/05/04
%
% It may be used as a template for your own input - copy it
% to a new file with a new name and use it as the basis
% for your article.
%
% NB: the document class 'llncs' has its own and detailed documentation, see
% ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\documentclass[runningheads,a4paper]{llncs}

\usepackage{amssymb}
\setcounter{tocdepth}{3}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{epstopdf}
\usepackage{comment}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{url}
\urldef{\mailsa}\path|{jiansong.chao, whfcarter, wenlei.zhouwl, wnzhang, yyu}@apex.sjtu.edu.cn|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}
\setcounter{secnumdepth}{3}
\begin{document}

\mainmatter  % start of an individual contribution

% first the title is needed
\title{A Semantic-Driven Music Recommendation Model For Digital Photo Albums}

% a short form should be given in case it is too long for the running head
\titlerunning{A Semantic-Driven Music Recommendation Model For Albums}

% the name(s) of the author(s) follow(s) next
%
% NB: Chinese authors should write their first names(s) in front of
% their surnames. This ensures that the names appear correctly in
% the running heads and the author index.
%
\author{Jiansong Chao\and Haofen Wang\and Wenlei Zhou\and Weinan Zhang\and Yong Yu}
%
% (feature abused for this document to repeat the title also on left hand pages)

% the affiliations are given next; don't give your e-mail address
% unless you accept that it will be published
\institute{Department of Computer Science and Engineering\\
APEX Data \& Knowledge Management Lab\\
Shanghai Jiao Tong University \\
Shanghai, P.R.China\\
\mailsa\\
%\url{http://www.springer.com/lncs}
}

%
% NB: a more complex sample for affiliations and the mapping to the
% corresponding authors can be found in the file "llncs.dem"
% (search for the string "\mainmatter" where a contribution starts).
% "llncs.dem" accompanies the document class "llncs.cls".
%
\maketitle


\begin{abstract}
Digital photo album software like iPhoto\footnote{\url{http://www.apple.com/ilife/iphoto/}} has enjoyed great popularity for years. In recent years, online photo album services (e.g., Flickr\footnote{\url{http://www.flickr.com/}} and Picasa\footnote{\url{https://picasaweb.google.com/}}) have become increasingly popular with the development of the social Web. In this paper, we present a semantic-driven model to recommend music for photo albums automatically. In particular, we exploit semantic data to represent both images and music. Furthermore, we leverage mining techniques to capture the semantic relatedness between these different types of multimedia data, which is the essential step for recommendation. In our experiments, our method achieved about 68\% satisfaction as measured by participants' feedback.


\end{abstract}


\section{Introduction}
With the development of Web 2.0, digital photo album services have become an indispensable part of social network sites. Representative examples such as Flickr and Facebook\footnote{\url{http://www.facebook.com/}} Photo Albums are increasingly popular. In September 2010, it was reported that Flickr was hosting more than 5 billion images\footnote{\url{http://en.wikipedia.org/wiki/Flickr}}. Many social network services, such as Facebook, provide photo album services on an even larger scale: by July 2011, Facebook had more than 750 million active users\footnote{\url{http://en.wikipedia.org/wiki/Facebook}} and 100 billion photos\footnote{\url{http://www.photoweeklyonline.com/}}. Moreover, mobile devices have developed rapidly in recent years; more and more people use their mobile phones or iPads to take photos and create albums, which accelerates the growth of digital photo collections even further. In short, digital photo album services are increasingly popular across personal computers, Web services and mobile devices, so it is worthwhile to provide more functionality and a better user experience for them.

Besides photo publishing and sharing, some software, such as iPhoto, even allows users to assign background music to a specified album. Browsing a photo album can be a fantastic experience when the background music matches the photos: for example, romantic background music suits a wedding album, while rock suits a boxing match album. However, manual assignment limits the wide use of such an attractive feature: (1) it exhausts a user who has many albums; (2) it is hard for a user with little related knowledge to select suitable music. Therefore, automatic background music recommendation for photo albums can greatly improve the user experience.

In this paper, we present a semantic-driven model which, to the best of our knowledge, is the first effort to recommend suitable music for a given photo album automatically. From the technical perspective, the main challenge of automatic recommendation lies in calculating the relatedness between music and images, i.e., whether they share a common artistic conception or express similar emotions. To address this challenge, we have to find a way to represent these very different data (i.e., images and music) in a unified semantic manner. With the advance of the social Web, more and more multimedia data is annotated with tags. On the other hand, relatedness computation in the textual space has been well studied for years.
Therefore, we take advantage of annotations in the form of tags to represent both images and music. More precisely, we use Flickr as a high-quality source to build a large image database annotated with tags, and AllMusic\footnote{\url{http://allmusic.com/}} to associate mood tags with music. Further, we leverage the WordNet\footnote{\url{http://wordnet.princeton.edu/}} \cite{26,27} ontology to enrich these tags and disambiguate them into synsets, so that we can easily connect images with suitable music according to their emotional semantic relatedness. To recommend music for the input images, we exploit visual similarity so that the input images can be represented by the most similar images in our image database. The technical details can be found in the following sections. A snapshot of our demonstration is shown in Figure \ref{fig:demo}. Photos of the album are arranged in a slide view at the bottom of the screen and the selected photo is shown at the center. Users can browse the album while listening to the recommended music, whose details are presented at the top right of the screen. In our example, given a photo album of wedding dresses, the recommended background music being played is ``Sugar, Sugar'' by \emph{The Archies}.
\begin{figure}[!htb]
\small
\centering
\includegraphics[width=1.0\textwidth]{demo.eps}
\caption{A snapshot of our demonstration user interface.}
\label{fig:demo}
\end{figure}

The contributions of this paper are threefold. First, we present a cross-media semantic relevance model and apply it to our system of music recommendation for photo albums. Second, we study the relations between photos and music with respect to how they affect our feelings. Last, we provide a novel and interesting way to search for music.

The rest of this paper is organized as follows. In Section \ref{sec:relatedWork}, related work is presented. Section \ref{sec:methodology} and Section \ref{sec:exp} describe our method and evaluation, and in Section \ref{sec:conclusionFuture}, we conclude our work and discuss the future work.

\section{Related Work}
\label{sec:relatedWork}
Two areas of research are related to our work: semantic relatedness measuring and image annotation. In the following subsections, we discuss the related work in these two areas respectively.
\subsection{Semantic Relatedness Measuring}
Much research has been done in the area of textual semantic relatedness; there are currently three major lines of work.
The first line focuses on the use of ontologies. Path-based approaches on ontologies were investigated in \cite{9,10,11,12,13}. In \cite{14}, an information-content-based approach for WordNet was used to calculate the relatedness between concepts. Gloss-based approaches were investigated in \cite{15} and \cite{16}, and \cite{17} discussed a vector-based method for WordNet.
The second line takes advantage of knowledge bases such as Wikipedia\footnote{\url{http://en.wikipedia.org/}}. In \cite{18}, each concept is represented as a vector whose dimensions are its relevance to individual Wikipedia articles; the relatedness of two concepts is then evaluated by the cosine distance of the two vectors. Path-based and information-content-based methods have also been applied to Wikipedia \cite{20}.
The third line consists of statistical methods, much of the work being Web-based. \cite{19} used a search engine to obtain statistical data and also took advantage of patterns; a ``bag of words'' representation was further considered in \cite{29}, based on \cite{19}. Moreover, Gracia \cite{30} proposed a word relatedness measure based on the normalized Google distance \cite{31}.
Our task is based on measuring semantic relatedness between photos and music, for which there is little previous work and which differs from existing semantic relatedness problems.
First, our goal is to calculate the semantic relatedness between images and music, which live in two heterogeneous spaces, whereas the traditional methods discussed above are designed for text. Second, we represent images as nouns and music as adjectives so that we can describe images with music; existing methods focus mostly on relatedness between noun concepts and are not suitable for relatedness between nouns and adjectives. Last, our concepts are domain-specific, so we choose Flickr as a data source for statistics to better describe the relatedness between photos and moods.
\subsection{Image Annotation}
Another related research area is image annotation. A word co-occurrence model for image annotation was investigated in \cite{22}, and \cite{23} and \cite{24} discussed statistical models. Image annotation was regarded as a machine translation process in \cite{1}. Some other researchers modeled the joint probability of image regions and annotations: \cite{2} investigated image annotation under a probabilistic framework and put forward a number of models for the joint distribution of image blobs and words. In \cite{5}, image annotation was posed as a classification problem where each class was defined by images sharing a common semantic label. A coherent language model was investigated in \cite{25}.
For music recommendation for photo albums, one possible solution is to leverage image annotation techniques to find the relations between images and the image tags we choose. Our setting has some particularities: what we want to capture from the target image is its feeling, which serves as a bridge to music. Thus, for the labeled images, we need to choose images with typical features that evoke different kinds of feelings, and for input images, we assign them to several categories to obtain semantic labels. Because we care more about the feelings of images, we choose low-level image features like color and texture for image annotation.


\section{Methodology}
\label{sec:methodology}
First, we formalize the problem of music recommendation for photo albums. Then we introduce our relevance model in detail.

\subsection{Problem Formulation}
We define $P = (p_1, p_2, \cdots, p_n)$ as the input photo album, where each $p_i \in P$ is one photo belonging to the album and $n$ is the number of photos in it.
Define $\mathcal{S} = (S_1, S_2, \cdots, S_m)$ as the image semantic label space where each $S_i \in \mathcal{S}$ is a synset in WordNet and $m$ is the number
of synsets. Each $S_i$ is represented by $S_i = (s^1_i, s^2_i, \cdots, s^v_i)$ where $s^j_i$ is a semantic tag of synset $S_i$ and $v$ is the number of semantic tags in it.
Let $\mathcal{M} = (M_1, M_2, \cdots, M_k)$ be the music space where $M_i \in \mathcal{M}$ is one track for recommendation and $k$ is the number of candidate tracks.
Each $M_i \in \mathcal{M}$ is represented
by $M_i = (F^1_i, F^2_i, \cdots, F^w_i)$ where $F^j_i$ is one mood tag vector for music $M_i$ and $w$ is the number of mood tag vectors for it. Each $F^j_i$ is
represented by $F^j_i = (f^1_{ij}, f^2_{ij}, \cdots, f^u_{ij})$ where $f^k_{ij}$ is a music mood tag and $u$ is the number of music mood tags in vector $F^j_i$.
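To make the notation above concrete, the three spaces can be sketched with toy data; all identifiers and tags below are purely illustrative, not from our actual database.

```python
# Toy sketch of the spaces defined above (all names/values are illustrative).

# Photo album P = (p_1, ..., p_n): here each photo is just an identifier.
P = ["photo_1", "photo_2", "photo_3"]

# Image semantic label space S = (S_1, ..., S_m): each synset S_i is a
# tuple of semantic tags (s_i^1, ..., s_i^v).
S = [
    ("wedding", "marriage", "nuptials"),   # S_1
    ("ocean", "sea", "seaside"),           # S_2
]

# Music space M = (M_1, ..., M_k): each track M_i is a list of mood tag
# vectors F_i^j, and each F_i^j is a tuple of mood tags (f_ij^1, ..., f_ij^u).
M = [
    [("romantic", "tender"), ("joyous", "cheerful", "merry")],  # M_1
    [("melancholy", "sad"), ("peaceful", "calm")],              # M_2
]

n, m, k = len(P), len(S), len(M)
print(n, m, k)  # 3 2 2
```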

\subsection{Relevance Model}
In this section, we give a detailed description of the relevance model. Here the task is to evaluate the semantic relatedness between photo album $P$ and each track $M_i \in \mathcal{M}$.
For a target photo album, the most relevant tracks will be recommended. We define Rel($\alpha$, $\beta$) as the relatedness between $\alpha$ and $\beta$, where $\alpha$ and $\beta$ can be spaces, vectors or tags.
The semantic relatedness of a photo album $P$ and a track $M \in \mathcal{M}$
can be computed using
\begin{equation}
Rel(P, M) = Rel(P, \mathcal{S})^T \cdot Rel(\mathcal{S}, M),
\end{equation}
where vector $Rel(P, \mathcal{S})$ represents the relatedness between album $P$ and each image semantic label in $\mathcal{S}$, vector $Rel(\mathcal{S}, M)$ is the relatedness between each image semantic label in $\mathcal{S}$ and music $M$.

$Rel(P, \mathcal{S})$ is computed as
\begin{eqnarray}
Rel(P, \mathcal{S}) &=& \frac{1}{n}\sum_{i=1}^{n}Rel(p_i, \mathcal{S})  \nonumber \\
&=& (\frac{1}{n}\sum_{i=1}^{n}Rel(p_i, S_1), \frac{1}{n}\sum_{i=1}^{n}Rel(p_i, S_2), \cdots, \frac{1}{n}\sum_{i=1}^{n}Rel(p_i, S_m)),
\end{eqnarray}
where $Rel(p_i, S_j)$ is the relatedness between image $p_i$ and synset $S_j$. We collect a certain number of images for each synset, in a way similar to ImageNet\footnote{\url{http://www.image-net.org/}} \cite{28}.
Define $Q$ as the image set representing $S_j$. $Rel(p_i, S_j)$ is computed as the sum of similarity values between $p_i$ and all images $q \in Q$:
\begin{eqnarray}
Rel(p_i, S_j) &=& \sum_{q \in Q}Sim(p_i, q).
\end{eqnarray}
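Eqs. (2) and (3) can be sketched in a few lines; the similarity function below is a toy stand-in for a real visual similarity measure, and all numbers are made up.

```python
# Sketch of Eqs. (2)-(3): Rel(P, S) as the per-synset average of photo
# relatedness, where Rel(p_i, S_j) sums visual similarities to the synset's
# representative images Q. `sim` is a stand-in for a real visual similarity.

def rel_photo_synset(p, synset_images, sim):
    # Eq. (3): Rel(p_i, S_j) = sum over q in Q of Sim(p_i, q)
    return sum(sim(p, q) for q in synset_images)

def rel_album_space(P, synset_image_sets, sim):
    # Eq. (2): the j-th component is (1/n) * sum_i Rel(p_i, S_j)
    n = len(P)
    return [sum(rel_photo_synset(p, Q, sim) for p in P) / n
            for Q in synset_image_sets]

# Toy similarity on 1-D "features" (purely illustrative).
sim = lambda a, b: 1.0 / (1.0 + abs(a - b))

P = [0.2, 0.8]                       # two photos
synsets = [[0.1, 0.3], [0.9]]        # image sets Q for S_1 and S_2
vec = rel_album_space(P, synsets, sim)
print(vec)
```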

Similarly, $Rel(\mathcal{S}, M)$ is computed as
\begin{eqnarray}
\!\!\!\!\!\!\!Rel(\mathcal{S}, M) &=& (Rel(S_1, M), Rel(S_2, M), \cdots, Rel(S_m, M))^{\rm T} \nonumber \\
&=& (\frac{1}{w}\sum_{i=1}^{w}Rel(S_1, F_i), \frac{1}{w}\sum_{i=1}^{w}Rel(S_2, F_i), \cdots, \frac{1}{w}\sum_{i=1}^{w}Rel(S_m, F_i))^{\rm T},
\end{eqnarray}
where $Rel(S_i, F_j)$ represents the relatedness between synset $S_i$ and music mood tag vector $F_j$ which can be computed using
\begin{eqnarray}
Rel(S_i, F_j) &=& \frac{1}{uv}\sum_{x=1}^{v}\sum_{y=1}^{u}(Rel(s^x_i, f^y_j)),
\end{eqnarray}
where $Rel(s^x_i, f^y_j)$ estimates the relatedness between image semantic tag $s^x_i$ and music mood tag $f^y_j$. This is a well-studied problem in the semantic relatedness area. In our model, $Rel(s^x_i, f^y_j)$ is calculated by a statistical method using data from Flickr as
\begin{eqnarray}
Rel(s^x_i, f^y_j) &=& \frac{n(s^x_i, f^y_j)}{Weight(n(f^y_j))},
\end{eqnarray}
where $n(s^x_i, f^y_j)$ is the number of co-occurrences of image semantic tag $s^x_i$ and music mood tag $f^y_j$ in the same photo description on Flickr, and $n(f^y_j)$ is the number of occurrences of music mood tag $f^y_j$ on Flickr. Because we want to choose the most related music mood tags for each image semantic tag, we must account for the frequency of mood tags and reduce the weight of highly frequent ones via the function $Weight$. We also normalize $Rel(s^x_i, f^y_j)$ so that extremely large or small tag relatedness values do not skew the result.
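Eq. (6) can be sketched as follows; the square-root weight matches the choice we report in the experimental setup, while the counts themselves are made up for illustration.

```python
import math

# Sketch of Eq. (6): tag relatedness from Flickr co-occurrence counts.
# The square-root weight dampens very frequent mood tags; the counts
# below are hypothetical, not real Flickr statistics.

def rel_tags(cooccur, freq):
    # Rel(s, f) = n(s, f) / Weight(n(f)), with Weight = sqrt
    return cooccur / math.sqrt(freq)

# Same co-occurrence count, but the rarer mood tag scores higher:
print(rel_tags(cooccur=900, freq=90000))   # 900 / 300 = 3.0
print(rel_tags(cooccur=900, freq=10000))   # 900 / 100 = 9.0
```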

\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\subsection{Algorithm and Complexity Analysis}

\begin{algorithm}
\caption{Music Recommendation For Photo Albums.}
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE Photo album $P$.
\ENSURE The most relevant track $M_i \in \mathcal{M}$.
\STATE Build Music Database for $\mathcal{M}$.
\STATE Build Image Database for $\mathcal{S}$.
\FORALL{$S_j$ in $\mathcal{S}$}
	\FORALL{$M_i$ in $\mathcal{M}$}
		\STATE Calculate $Rel(S_j, M_i)$ based on Eq.(4), (5), (6).
	\ENDFOR
\ENDFOR
\FORALL{$p_i$ in $P$}
	\FORALL{$S_j$ in $\mathcal{S}$}
		\STATE Calculate $Rel(p_i, S_j)$ based on Eq.(3).
	\ENDFOR
\ENDFOR
\FORALL{$M_i$ in $\mathcal{M}$}
	\STATE Calculate $Rel(P, M_i)$ based on Eq.(1), (2).
\ENDFOR
\STATE Select $M_i$ with the greatest $Rel(P, M_i)$.
\RETURN $M_i$.
\end{algorithmic}
\end{algorithm}

A formal description of our music recommendation algorithm for photo albums is given in Algorithm~\ref{alg1}. Steps 1 to 7 construct a cross-media semantic relatedness graph offline. Steps 8 to 16 search for the most relevant track using the relatedness graph, which is an online process.

Building the music database in Step 1 includes obtaining music metadata, downloading music, and expanding mood tags to mood tag vectors using WordNet. The time cost of Step 1 is $O(k \cdot w \cdot u)$, where $k$ is the number of candidate tracks and $w$ is the number of mood tags for one track. After WordNet expansion, each mood tag becomes a vector; $u$ is the number of dimensions of the mood tag vectors. In practice, $w$ and $u$ are normally small, so the most time-consuming part of Step 1 is downloading the music.
Building the image database in Step 2 includes selecting WordNet synsets, preparing images for each synset, and expanding each synset to a semantic tag vector using WordNet. The time cost of Step 2 is $O(m \cdot (v + |Q|))$, where $m$ is the number of selected synsets, $v$ is the number of dimensions of the semantic tag vectors, and $|Q|$ is the number of images per synset. As with Step 1, the most time-consuming part of Step 2 is manually selecting images for each synset.

Steps 3 to 7 construct the relatedness graph between each synset $S_j \in \mathcal{S}$ and each track $M_i \in \mathcal{M}$. The time cost is $O(m \cdot k \cdot w \cdot u \cdot v)$. As discussed above, $u$, $v$ and $w$ are normally small in practice, so the time cost is mainly determined by the product of the number of candidate tracks $k$ and the number of selected synsets $m$. Steps 1 to 7 are processed offline and only need to be done once; we can also take advantage of distributed computing to speed up the process.

Steps 8 to 16 take a photo album as input and search online for the most related music in the music database, taking advantage of the relatedness graph constructed in Steps 3 to 7. The time cost of this process is $O(n \cdot |Q| \cdot T \cdot m \cdot k)$, where $n$ is the number of photos in the input album and $T$ is the time cost of computing the visual similarity of two images. As with the offline process, these steps can be sped up by distributed computing. The time complexity of our algorithm is mainly determined by the number of candidate tracks and the number of synset images, which shows that our algorithm is quite scalable and fit for online applications.
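The online phase reduces to scoring every candidate track with Eq. (1) against the precomputed relatedness graph and taking the maximum. A minimal sketch with toy numbers (the graph values here are invented for illustration):

```python
# Sketch of Algorithm 1's online phase (Steps 8-16): given the precomputed
# graph rel_S_M[j][i] = Rel(S_j, M_i) and the album vector
# rel_P_S[j] = Rel(P, S_j), pick the track maximizing Eq. (1).

def recommend(rel_P_S, rel_S_M):
    k = len(rel_S_M[0])                  # number of candidate tracks
    scores = [sum(rel_P_S[j] * rel_S_M[j][i] for j in range(len(rel_P_S)))
              for i in range(k)]
    return max(range(k), key=lambda i: scores[i])

rel_P_S = [0.7, 0.2]                     # album vs. two synsets
rel_S_M = [[0.1, 0.9],                   # synset 1 vs. tracks 0 and 1
           [0.8, 0.3]]                   # synset 2 vs. tracks 0 and 1
print(recommend(rel_P_S, rel_S_M))       # track 1: 0.7*0.9+0.2*0.3 = 0.69 > 0.23
```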

\section{Experiments and Results}
\label{sec:exp}
\subsection{Data Set}
For music, we crawled 4547 tracks from the AllMusic website with their mood tags and metadata such as artist, album and genre.
There are 130 different mood tags in total for these tracks; each track is labeled with one or several mood tags.
We then use WordNet to expand each mood tag to a bunch of tags as a more robust representation of tracks.
Each of the 4547 crawled tracks is labeled with 2.04 AllMusic mood tags on average; after WordNet expansion, the average number of mood tags per track is 15.97, which means the expansion step greatly improves the robustness of the mood tag representation.
Table 1 shows some examples of tracks and their corresponding mood tag representations.
Since the crawled data is incomplete, we use MusicBrainz \cite{8} to complete the metadata of each track.
For the image semantic labels, we choose 25 synsets which we believe evoke different kinds of feelings, and select 20 typical images for each synset as its visual representation.
For Flickr statistics, we use the Flickr API\footnote{\url{http://www.flickr.com/services/api/}} to obtain global statistics over all photos on Flickr. For our experiments, we choose 30 photo albums from Flickr whose topics vary widely, from holidays to scenery and from ceremonies to art. Each album contains 10 photos.

\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{table}[!htb]
\caption{Tracks and their corresponding mood tag representations.}
\centering
%\addtolength{\tabcolsep}{8pt}
\begin{tabular}{| c | c | c | c |}
\hline
 Album & Track  & Mood tag representation\\
\hline
\raisebox{-4.8mm}[1.05cm][0.52cm]{\includegraphics[width=1.5cm]{ppp1.eps}} &
\raisebox{2.3mm}[0cm][0cm]{\tabincell{c}{``Abc" \\ by The Jackson 5}} &
\raisebox{2.3mm}[0cm][0cm]{\tabincell{c}{joyous, playful, fun, \\cheerful, etc.}} \\
\hline
\raisebox{-4.8mm}[1.05cm][0.52cm]{\includegraphics[width=1.5cm]{ppp2.eps}} &
\raisebox{2.3mm}[0cm][0cm]{\tabincell{c}{``Fade Into You" \\ by Nena}} &
\raisebox{2.3mm}[0cm][0cm]{\tabincell{c}{melancholy, sad, hypnotic, \\peaceful, etc.}} \\
\hline
\raisebox{-4.8mm}[1.05cm][0.52cm]{\includegraphics[width=1.5cm]{ppp3.eps}} &
\raisebox{2.3mm}[0cm][0cm]{\tabincell{c}{``Born To Run" \\ by Bruce Springsteen}} &
\raisebox{2.3mm}[0cm][0cm]{\tabincell{c}{theatrical, passionate, \\rousing, etc.}} \\
\hline

\end{tabular}
\end{table}

\setcounter{secnumdepth}{3}
\subsection{Experimental Setup}
We name our music recommendation system TuneSensor. For an input photo album, TuneSensor analyses the album's images, searches for the most related music in its music database, and finally recommends it for the input album. Below we introduce TuneSensor in terms of its offline and online modules,
as shown in Figure \ref{fig:arch}.
\begin{figure}[!htb]
\small
\centering
\includegraphics[width=1.0\textwidth]{arch.pdf}
\caption{The architecture of our music recommendation system for photo albums.}
\label{fig:arch}
\end{figure}

\subsubsection{Construct Cross-media Semantic Relatedness Graph}
The offline module constructs a cross-media semantic relatedness graph, which contains both image and music mood tag synsets as vertices, and the semantic relatedness between these synsets as edges. The graph is shown in Figure \ref{fig:graph}.
\begin{figure}[!htb]
\small
\centering
\includegraphics[width=0.7\textwidth]{graph.eps}
\caption{Cross-media semantic relatedness graph.}
\label{fig:graph}
\end{figure}

For music tag synset construction, we crawl music metadata from the AllMusic website, including the name, mood tags, artist, genre and album of each track, and then use the service provided by MusicBrainz to complete the information for each track.
The reason we choose AllMusic as one of our data sources is that it provides more than 100 types of mood tags for music, and mood tags are essential to our work. MusicBrainz is one of several semantic data sources about music; we could also use sources such as DBTune\footnote{\url{http://dbtune.org/}} and last.fm\footnote{\url{http://www.last.fm/}} to expand the music information.
We download the tracks for which we have metadata from music download services such as Google Music\footnote{\url{http://www.google.cn/music/}}.
In this way, our music database and the corresponding metadata database are prepared. The process is shown in Figure \ref{fig:mdb}. We use WordNet to expand each music mood tag to a mood tag vector $F$, as discussed in Eq. (4). Because mood tags are adjectives, we use WordNet synonym relations to expand each tag.
The benefit of using WordNet is that errors caused by ambiguous words and users' word preferences are effectively avoided. In this way, each track is represented by several mood tag vectors.
\begin{figure}[!htb]
\small
\centering
\includegraphics[width=0.7\textwidth]{a1.eps}
\caption{Build music database.}
\label{fig:mdb}
\end{figure}
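The mood-tag expansion step can be sketched as follows. The synonym table below is a tiny hypothetical stand-in for WordNet; the real system queries WordNet's synonym relations instead.

```python
# Sketch of mood-tag expansion: each AllMusic mood tag (an adjective) is
# expanded to a mood tag vector of synonyms. SYNONYMS is a made-up,
# WordNet-style table used only for illustration.

SYNONYMS = {
    "joyous":     ["joyful", "festive", "merry"],
    "melancholy": ["melancholic", "somber", "wistful"],
}

def expand_mood_tag(tag):
    # One mood tag becomes a mood tag vector F = (f^1, ..., f^u).
    return [tag] + SYNONYMS.get(tag, [])

print(expand_mood_tag("joyous"))   # ['joyous', 'joyful', 'festive', 'merry']
```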

For image tag synset construction, we choose WordNet synsets with typical subjective feelings and select a certain number of images for each synset, in a way similar to ImageNet. The difference is that we choose synsets based on our specific application requirements. Each synset can be converted into a semantic tag vector $S$, as discussed in Eq. (4). We choose WordNet synsets to organize our database of visual content because WordNet is comprehensive and lets us conveniently scale up the database along its structure. Furthermore, we can take advantage of the ontology to build a better relevance model.
As with the music synset expansion, each image tag synset is converted into a semantic tag vector.

With the vector representations of image and music synsets, we can obtain the relatedness between them by computing the relatedness between two vectors, using Eq. (5) and Eq. (6).
The number of images and corresponding tags on Flickr is quite large, so the Flickr statistics are reliable: the mood tags we use have an average of 89193.3 occurrences on Flickr. On the other hand, users naturally use both objective tags to describe the content of photos and subjective tags to describe their feelings, so the relatedness between image content tags and music mood tags can be obtained from the Flickr statistics. In our work, we use Gaussian normalization as the normalization function and the square root as the weight function in Eq. (6), which performs well in practice. In this way, our cross-media semantic relatedness graph is constructed.
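The normalization step can be sketched as below. One common form of Gaussian normalization maps each value to its z-score scaled by $3\sigma$, clipped to $[-1, 1]$ and shifted into $[0, 1]$; the exact variant shown here is an assumption for illustration, not necessarily the one in our system.

```python
import math

# Sketch of Gaussian normalization of tag relatedness values: an outlier
# is pulled into [0, 1] without dominating the rest. The variant below
# (z-score / (3*sigma), clipped, shifted) is one common form, assumed here.

def gaussian_normalize(values):
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in values) / len(values))
    out = []
    for x in values:
        z = (x - mu) / (3 * sigma) if sigma > 0 else 0.0
        z = max(-1.0, min(1.0, z))   # clip to [-1, 1]
        out.append((z + 1) / 2)      # shift into [0, 1]
    return out

print(gaussian_normalize([1.0, 2.0, 3.0, 100.0]))
```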

\subsubsection{Music Recommendation Using Relatedness Graph}
Given a photo album as input, we use LIRE (Lucene Image Retrieval) \cite{21} to compute the visual similarity between the input images in the album and the indexed synset images.
In our work, we use the default image features of LIRE to build the image index.
In this way, we obtain the relatedness between the user album and each semantic tag vector $S$.
Using the constructed semantic relatedness graph, each track is assigned a relatedness score with respect to the input album, and the track with the highest score is recommended. At the same time, we show the information of the recommended track so that users can find more similar tracks, such as ones by the same singer or from the same album. Thus the system also provides a novel and interesting way to search for music based on photos.

\subsection{Compared Algorithms}
Since, to the best of our knowledge, little existing work investigates the problem of music recommendation for photo albums, we compare our algorithm with the two baselines below.
\begin{itemize}
\item \textbf{Lower Bound (LB).} Tracks are randomly selected for each target photo album. This gives the lower bound performance for our problem. We refer to it as the \emph{Lower Bound} method.
\item \textbf{Manual Selection (MS).} Tracks are manually selected for each photo album based on the subjects' preferences. We refer to it as the \emph{Manual Selection} method.
\end{itemize}

Because the problem of music recommendation is relatively subjective, these two baselines are important for showing the performance of our method. For the same reason, since different people may prefer different genres of music, we recommend several tracks of different genres for each photo album, each being the most related track in its genre class, and the user can choose his favorite genre among the recommended tracks.

\subsection{Evaluation Measure}
We invited 5 participants for the test. Each of the 5 participants labels every recommended album-music pair as below.
\begin{itemize}
\item \textbf{Relevant.} The recommended music is considered relevant or suitable to be a background music to the target album, labeled with score 1.
\item \textbf{Irrelevant.} Otherwise, labeled with 0.
\end{itemize}
For each algorithm, the satisfaction $r$ is computed as the proportion of relevant album-music pairs among all recommended pairs:
\begin{equation}
r = \frac{1}{t}\sum_{i=1}^{t}{\tau_i},
\end{equation}
where $\tau_i$ is the average labeling score for the $i$th test case and $t$ is the number of test cases.
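The measure can be sketched as follows; the labels below are made up, not actual participant data.

```python
# Sketch of the satisfaction measure: r is the mean of the per-case average
# labels tau_i, where each of the five participants labels a recommended
# album-music pair 1 (relevant) or 0 (irrelevant).

def satisfaction(labels_per_case):
    # labels_per_case[i] holds the five 0/1 labels for test case i
    taus = [sum(labels) / len(labels) for labels in labels_per_case]
    return sum(taus) / len(taus)

cases = [[1, 1, 1, 0, 1],    # tau_1 = 0.8
         [0, 1, 0, 0, 1]]    # tau_2 = 0.4
print(satisfaction(cases))   # about 0.6
```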

\subsection{Results and Analysis}
\begin{table}[!htb]
\centering
\caption{Satisfaction of three recommendation methods.}
%\addtolength{\tabcolsep}{8pt}
\begin{tabular}{| c | c | c | c | }
\hline
 & Lower Bound & TuneSensor & Manual Selection \\
\hline
 Satisfaction & 30.67\% & 68.67\% & 77.33\% \\
 \hline
\end{tabular}
\end{table}

Table 2 shows the satisfaction of the \emph{Lower Bound} method, our method TuneSensor, and the \emph{Manual Selection} method.
Our method achieves a satisfaction of about 68\%, which is much better than random recommendation and close to the performance of the manual method. We can conclude that our method indeed improves the performance of music recommendation for photo albums.
To test runtime performance, we use 30 photo albums, each containing 10 photos. The test was done on a desktop computer with an Intel Core 2 Quad CPU with two 2.66 GHz cores and 8 GB RAM, running Ubuntu 10.10.
The average time TuneSensor takes to recommend music for one photo album is 0.8 seconds, which is very efficient and much faster than the manual way.
\begin{figure}[!htb]
\small
\centering
\includegraphics[width=1.0\textwidth]{ff2.eps}
\caption{Statistics of scores for albums. A score of 0-5 indicates how many of the five participants were satisfied with the recommended music.}
\label{fig:ff}
\end{figure}

The 5 participants also scored our results per album: an album gets 1 point for each participant who thinks the recommended music suits it, so each album receives a score between 0 and 5. The percentage of albums with each score from 0 to 5 is shown in Figure \ref{fig:ff}. Compared to random recommendation, we recommend much more music with which most users are satisfied. Compared to the manual method, there are still some albums for which nobody is satisfied, but we recommend more music with which all users are satisfied.
This means our method is able to recommend truly great music for albums.

\begin{table}[!htb]
\caption{Results of our music recommendation for photo albums}
\centering
%\addtolength{\tabcolsep}{8pt}
\begin{tabular}{| c | c | c | c |}
\hline
 Photo Album & Mood Tags Matched & \multicolumn{2}{|c|}{Music Recommended}  \\
\hline
\raisebox{-4.5mm}[1.55cm][0.52cm]{\includegraphics[width=3.8cm]{pp2.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{romantic, peaceful, \\ dramatic, etc.}} &
\raisebox{-2.3mm}[1.5cm][0cm]{\includegraphics[width=1.5cm]{m2.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{``Because You Loved Me" \\ by Celine Dion}}\\
\hline
\raisebox{-4.5mm}[1.55cm][0.52cm]{\includegraphics[width=3.8cm]{pp1.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{sensual, fun, \\ sexy, etc.}} &
\raisebox{-2.3mm}[1.5cm][0cm]{\includegraphics[width=1.5cm]{m1.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{``Great Ball Of Fire" \\ by Jerry Lee Lewis}}\\
\hline
\raisebox{-4.5mm}[1.55cm][0.52cm]{\includegraphics[width=3.8cm]{pp3.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{calm, elegant, \\ mellow, etc.}} &
\raisebox{-2.3mm}[1.5cm][0cm]{\includegraphics[width=1.5cm]{m3.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{``Gentle On My Mind" \\ by Lisa Ono}}\\
\hline
\raisebox{-4.5mm}[1.55cm][0.52cm]{\includegraphics[width=3.8cm]{pp4.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{cheerful, happy, \\ sweet, etc.}} &
\raisebox{-2.3mm}[1.5cm][0cm]{\includegraphics[width=1.5cm]{m4.eps}} &
\raisebox{5mm}[0cm][0cm]{\tabincell{c}{``Sugar, Sugar" \\ by The Archies}}\\
\hline

\end{tabular}
\end{table}
Table 3 shows part of the results of our music recommendation method. In this table, the images in the first column are the photo albums asking for background music. The related music mood tags found using our model are shown in the second column. The last column shows the recommended music and some information about it. More recommendation cases can be explored in our online demo at \url{http://tunesensor.apexlab.org/}. From these cases we can see that our method can indeed find suitable music for photo albums.

\section{Conclusion and Future Work}
\label{sec:conclusionFuture}
In this paper, we proposed a novel method to recommend music for photo albums. We presented a cross-media semantic relatedness model and introduced the architecture of our system. We compared the performance of our method with a random method and a manual method, which showed that our method is able to recommend great music for albums. In future work, we plan to scale up the number of synsets and the number of images per synset, which will let us describe more visual content and feelings accurately. At the same time, we plan to build a more robust model for the image annotation part of our system.
We can also provide TuneSensor as an application for photo album services such as Flickr and Facebook albums, and a music search engine queried by images can be developed based on our model.
\bibliographystyle{splncs03}
\bibliography{b}
\end{document}
