diff --git "a/papers/2401/2401.11113.tex" "b/papers/2401/2401.11113.tex" new file mode 100644--- /dev/null +++ "b/papers/2401/2401.11113.tex" @@ -0,0 +1,1164 @@ + +\documentclass[manuscript]{acmart} +\AtBeginDocument{\providecommand\BibTeX{{\normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}} + + +\usepackage{xcolor} +\newcommand{\concatd}{\ensuremath{+\!\!\!\!+\,}} +\usepackage{multirow} +\usepackage{graphicx} + +\usepackage{subfig} +\usepackage{nicematrix} +\begin{document} + + + + + +\title{SPAND: Sleep Prediction Architecture using Network Dynamics } + + +\author{Maryam Khalid} +\affiliation{\institution{Electrical \& Computer Engineering Department, Rice University} + \streetaddress{} + \city{Houston, TX} + \country{USA}} +\email{maryam.khalid@rice.edu} + + + +\author{Elizabeth B Klerman} +\affiliation{\institution{Dept of Neurology, Massachusetts General Hospital, Harvard Medical School} + \streetaddress{} + \city{Boston, MA} + \country{USA}} +\email{ ebklerman@hms.harvard.edu} + + + +\author{Andrew W. McHill} +\affiliation{\institution{Sleep, Chronobiology, and Health Laboratory, School of Nursing, Oregon Institute of Occupational Health Sciences, Oregon Health \& Science University} + \streetaddress{} + \city{Portland, OR} + \country{USA}} +\email{mchill@ohsu.edu} + + + + +\author{Andrew J. K. Phillips} +\affiliation{\institution{Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University} + \streetaddress{} + \city{Clayton VIC 3800} + \country{Australia}} +\email{andrew.phillips@monash.edu} + + + + + + + +\author{Akane Sano} +\affiliation{\institution{Electrical \& Computer Engineering Department, Rice University} +\streetaddress{} +\city{Houston, TX} +\country{USA}} +\email{akane.sano@rice.edu} + + + +\begin{abstract} + +Sleep behavior significantly impacts health and acts as an indicator of physical and mental well-being. 
Monitoring and predicting sleep behavior with ubiquitous sensors may therefore assist in both sleep management and tracking of related health conditions. While sleep behavior depends on, and is reflected in, a person's physiology, it is also influenced by external factors such as digital media usage, social network contagion, and the surrounding weather. In this work, we propose SPAND (Sleep Prediction Architecture using Network Dynamics), a system that exploits social contagion in sleep behavior through graph networks and integrates it with physiological and phone data extracted from ubiquitous mobile and wearable devices to predict next-day sleep duration labels. Our architecture overcomes the limitations of large-scale graphs containing connections irrelevant to sleep behavior through an attention mechanism. An extensive experimental evaluation highlights the improvement provided by incorporating social networks into the model. Additionally, we conduct a robustness analysis to demonstrate the system's performance under real-life conditions. The outcomes affirm the stability of SPAND against perturbations in input data. Further analyses emphasize the significance of network topology in prediction performance, revealing that users with higher eigenvalue centrality are more vulnerable to data perturbations.
\end{abstract}

\begin{CCSXML}
<ccs2012>
 <concept>
  <concept_id>10010520.10010553.10010562</concept_id>
  <concept_desc>Human-centered computing~Ubiquitous and mobile computing</concept_desc>
  <concept_significance>500</concept_significance>
 </concept>
 <concept>
  <concept_id>10010520.10010575.10010755</concept_id>
  <concept_desc>Computer systems organization~Redundancy</concept_desc>
  <concept_significance>300</concept_significance>
 </concept>
 <concept>
  <concept_id>10010520.10010553.10010554</concept_id>
  <concept_desc>Computer systems organization~Robotics</concept_desc>
  <concept_significance>100</concept_significance>
 </concept>
 <concept>
  <concept_id>10003033.10003083.10003095</concept_id>
  <concept_desc>Networks~Network reliability</concept_desc>
  <concept_significance>100</concept_significance>
 </concept>
</ccs2012>
\end{CCSXML}

\ccsdesc[500]{Human-centered computing~Ubiquitous and mobile computing}
\ccsdesc[300]{Applied computing~Health care information systems}

\keywords{social network, graph convolution, graph neural networks, sleep, well-being prediction, contagion, wearable sensing, mobile computing, multimodal sensing}

\maketitle

\section{Introduction}
Sleep behavior is closely associated with physical\cite{sleep0}\cite{sleep01} and mental health\cite{sleep1}\cite{sleep2}, both as an influence\cite{disease1} and as an effect\cite{cause}. Short sleep duration can increase the risk of multiple diseases and worsen existing conditions such as cardio-metabolic dysfunction\cite{disease1}, obesity\cite{disease2}, hormonal imbalance, and diabetes\cite{disease3}, and can impair a person's functioning in daily life, including reduced work efficiency\cite{sleepWork}\cite{sleepWork2}, fatigue, sleepiness, and performance deficits\cite{sleepWork3}.

To address these issues, mobile health (mHealth) applications have been proposed for monitoring sleep and providing appropriate interventions. Interventions that utilize behavior change techniques such as shaping knowledge, goals, and planning have been shown to have desirable effects on sleep behavior\cite{sleep_reg_review} and have great potential to mitigate the above-mentioned risks. These mHealth systems require accurate monitoring and prediction of sleep behavior. Sleep duration, when used in machine learning models as an input feature, can improve well-being prediction\cite{sleepSarah}.
The prediction of sleep duration itself, however, can be a challenging task.
While time of day and recent sleep/wake timing are strong predictors of the next sleep episode's duration, several external factors, such as social interactions and digital media usage, can also impact sleep duration\cite{media1}\cite{media2}. Digital media use can impact sleep duration directly or can lead a person to change their behavior based on the people in their social network. There is a complex dynamic between an individual's social network, their digital media interactions (calls, SMS messages, and app usage), and the influence these factors exert on that individual's behavior.

Previous studies have established the existence of sleep behavior ``contagion'' in social networks. Using a network of 8,349 adolescents to investigate the relationship between sleep behavior and drug usage\cite{sleep-contagion}, Mednick and colleagues identified clusters of people with poor sleep behavior and demonstrated, through statistical analysis, how network centrality impacts sleep behavior outcomes. Additionally, these investigators found a connection between a person's future sleep duration and their friends' current sleep duration\cite{sleep-contagion}.
Another aspect of the environment is weather, which has also been shown to affect sleep duration\cite{weather3}\cite{weather4}\cite{weather5}. Weather impacts physical activity and sedentary behavior\cite{weather1}\cite{weather2}, which in turn impact sleep behavior\cite{weather_sleep1}\cite{weather_sleep2}. Weather can also impact well-being and mood\cite{weather_mood}, both of which can affect sleep behavior; it is, therefore, an important variable to consider when predicting sleep duration.

In this work, instead of estimating current sleep behavior, which is a well-studied topic in the published literature, we pose the problem of predicting the next night's sleep behavior from the current and past few days of data.
Since short sleep duration is associated with poor performance and health conditions in people of all ages, we choose sleep duration as the attribute with which to quantify sleep behavior. Our work can theoretically be extended to the prediction of other attributes, such as sleep timing or efficiency, or other health-related outcomes.

 \begin{figure}
 \centering
 \includegraphics[scale=0.52]{figures/framework.png}
 \caption{Framework overview: multi-modal data from phone and wearable devices is integrated with graph networks, extracted from phone data, in a deep learning architecture to predict the next day's sleep duration label.}
 \label{fig:framework}
 \end{figure}

We exploit the relationship among social networks, digital media usage, and sleep duration in two ways. First, we use mobile phone data to capture digital media usage in the form of calls, SMS, and screen time (indicating app usage). Second, we incorporate the user's social network, based on call and SMS usage, into the sleep prediction model as a graph architecture by integrating contagion through multi-user data aggregation. Our framework combines physiological data from wearable sensors, weather data, mobile phone usage, and social networks into a deep learning framework to predict next-day sleep duration labels (Fig.~\ref{fig:framework}). Importantly, social networks extracted from mobile phone data do not incur additional user burden.

To this end, we use Graph Convolution Networks (GCN) to aggregate features from a user's neighbors and predict sleep duration labels for the entire network. However, irrelevant nodes (e.g., nodes that do not contribute to a user's sleep duration behavior) or noisy graphs can hinder prediction performance. To alleviate this problem, we incorporate an attention mechanism that filters out irrelevant nodes by paying more ``attention'' to important ones, utilizing Graph Attention Networks (GAN).
Another limitation of graph-based architectures is that they require both a feature matrix and a graph network as inputs, which constrains the size of the graphs the model can handle. In real life, however, the size of social networks often changes. To address this limitation, we utilize GEDD (Graph Extraction for Dynamic Graphs), an iterative algorithm based on the principles of graph convolution networks, to transform a set of variable-size graphs into a set of graphs of a fixed, predetermined size without any loss of user or graph information. Lastly, we extract the temporal trends in the data through a Long Short-Term Memory (LSTM) network module and integrate the multi-user aggregated features with the extracted temporal trends in a deep learning architecture, abbreviated as SPAND, that predicts next-day sleep duration labels for all users present in the network.

We design multiple experiments to evaluate the proposed model and answer two important questions. First, we quantify the benefit of integrating social networks by comparing the performance of the two proposed graph-based models (GCN- and GAN-based) to two benchmarks that do not utilize the social network but are otherwise very similar to the proposed models in terms of the number of layers, units in each layer, hyperparameters, input, and output. The same samples from the dataset are used for training and testing all four models for a fair comparison.
The evaluation results highlight a significant improvement provided by the integration of social networks. Between GCN and GAN, the attention-based model provides the higher accuracy. Second, we evaluate the impact of different design parameters on the performance of all models: specifically, the impact of network size and temporal memory is characterized through a set of experiments.
We further expand our analysis to investigate how the models perform in real-world scenarios where sensor data can be noisy or missing (e.g., the battery runs out or the sensor breaks down). Multi-user prediction models are more vulnerable because even if one user is producing low-quality data, the predictions for the entire network can be affected. To characterize this impact, we conduct a robustness analysis by perturbing the test data in different ways and observing the model's performance for the whole network as well as for different parts of the network. Our results indicate that while GCN is prone to perturbations in the data, GAN is robust to these perturbations and shows a negligible drop in performance except for nodes that have very large eigenvalue centrality.

To summarize, this work makes the following contributions:
\begin{itemize}
 \item We present SPAND: \underline{S}leep \underline{P}rediction \underline{A}rchitecture using \underline{N}etwork \underline{D}ynamics, which exploits contagion in sleep behavior for next-day sleep duration prediction. It utilizes rich multimodal data from wearables, mobile phones, surveys, and weather, and integrates social networks into the prediction model. Furthermore, the architecture overcomes the limitations of noisy/missing-user graphs and large networks by leveraging an attention mechanism.

 \item We conduct a thorough experimental evaluation to quantify the improvement provided by the integration of social networks.

 \item We systematically analyze the model's robustness to data perturbations in real-world conditions. Our findings indicate that, for multi-user predictions, graph-based architectures are more robust to perturbations than non-graph-based architectures, and that different parts of the network are impacted differently when a subset of the network is perturbed.
 \item We share a rich multi-modal time-series dataset about participants and their surroundings, including information from wearables, phones, surveys, and weather, collected from over 200 participants spanning 7,500 days. Additionally, we provide the dynamic social network graphs for these participants, constructed from call and SMS data and representing diverse network topologies. The dataset is the first of its kind to integrate all four modalities with dynamic social networks in one longitudinal study.

\end{itemize}

\section{RELATED WORK}\label{relatedwork}

\subsection{Social contagion}
Contagion is the ``rapid communication of an influence (such as a doctrine or emotional state)''\cite{contagion_def}. Contagion in behavior is the production of similar behavior in response to interaction with another person\cite{cont_def1}\cite{cont_def2}\cite{def3}. One example is the tendency to mimic and synchronize expressions, vocalizations, postures, and movements with those of another person.
Multiple works have explored how health outcomes such as obesity\cite{cont_ref1}, tuberculosis\cite{cont_ref2}, pneumonia\cite{cont_ref3}, and severe acute respiratory syndrome\cite{cont_ref4} can spread in a network, suggesting the existence of contagion in health behaviors\cite{cont_ref5}.
The work in \cite{sleep-contagion} suggests contagion of sleep behavior in a social network.
An empirical study was conducted using data collected from students in grades 7--12 in the National Longitudinal Study of Adolescent Health\cite{NIHstudy} over 8 years. The dataset was used to extract a social network of 8,349 adolescents to study the relationship between sleep behavior and drug usage. The study found clusters of poor sleep behavior; statistical analysis revealed that the probability of a person sleeping less than seven hours goes up by $11\%$ if their friend also sleeps less than seven hours.
Further findings indicated that high network centrality negatively impacts future sleep behavior. We extend this work to predict next-day sleep duration labels at a day-to-day resolution.

The work closest to ours was presented in \cite{network_MH}, which leveraged social networks for health label prediction. Characteristics of a user's position in weekly social networks extracted from smartphone data (SMS exchanged), such as centrality, were used as input features to a binary classifier to predict whether an individual was depressed/anxious or not. We expand this work by integrating rich physiological, smartphone, and weather data into the prediction model. Our architecture also differs significantly from this work: instead of using network metrics as model input, we \textit{actively} use the network to perform information aggregation from multiple users and exploit contagion at a deeper level.

\subsection{Sleep sensing technologies}

In this section, we discuss the sensing technologies used to extract sleep behavior information.

\subsubsection{Wearable technologies}
The gold standard for measuring sleep is polysomnography\cite{poly}, which collects multiple physiological parameters, two of which are the electroencephalogram (EEG) and electrocardiogram (ECG). However, it is expensive and burdensome to participants and staff. A large body of literature investigates learning sleep characteristics from the information-rich brain and heart activity signals collected using EEG\cite{eeg3}\cite{eeg4} and ECG\cite{eeg5}. However, the collection of EEG and ECG data requires wearing electrodes at multiple locations on the body, which is not convenient in real-life settings.

As an alternative to EEG and ECG, wearable devices are easy to use, less obtrusive, portable, and low-cost, and can be worn without difficulty when the individual chooses to go to bed/sleep.
The efficacy of using actigraphy sensors to predict sleep quality was investigated in \cite{wear2}. Learning sleep attributes from smartwatches has also been thoroughly investigated in previous works\cite{wear3}\cite{wearable11}. Multi-modal sensing\cite{ecg_wear1} extends this by combining data from an actigraphy sensor and ECG to understand the time spent in different sleep stages. Our system is similar to most of these works, as we also utilize wearable acceleration sensors to predict sleep duration. However, in addition to acceleration data, we incorporate the impact of skin temperature, skin conductance, surroundings, and digital media usage on sleep duration by utilizing weather and mobile metadata.

\subsubsection{Non-wearable low user-burden technologies}
Another interesting body of literature focuses on solutions in which the user does not have to wear any sensor at all.
For instance, a system that uses radio-signal transmitting devices in the room to monitor sleep biomarkers is non-invasive and low-burden\mbox{\cite{zero_effort}}\mbox{\cite{dopple_sleep}}. Another approach involves multiple sensors throughout a smart home to forecast sleep behavior\cite{noninvasive2}. However, these solutions are limited to controlled environments, can be costly, and are not effective when there are multiple people in the home or when the participant is not at home (e.g., traveling).

These limitations were overcome by works\cite{mobile_only}\cite{studentlife}\cite{tossnturn} in which only mobile phone data were utilized to learn sleep behaviors. However, these require the smartphone to always be placed close to the user so that features representing mobility, light, and activity can be extracted. Another alternative technology is a bed sensor (e.g., the EMFIT QS sleep analyzer); however, the current state of this technology does not provide performance comparable to wrist actigraphy\cite{mattress}.
In summary, there are two main types of sensing technologies based on proximity to the user. Close-proximity sensors such as smartwatches and mobile phones are used by many people and are relatively low-cost. Distant-proximity sensors such as radio transmitters and IoT (Internet of Things) devices require specialized setup. Thus, there is a trade-off between user burden and sensor burden (e.g., cost, ubiquity) when choosing a sensing technology. In future work, insights from \cite{mobile_only}\cite{zero_effort} and \cite{saeed_mobile_only} may be useful in reducing the burden of wearing the device.

\subsection{Modeling for learning sleep attributes}

\subsubsection{Estimation using rule-based/heuristic models}
In \cite{mobile_only}, mobile phone data were combined with a two-process model of sleep regulation to estimate sleep timing, showing improved performance when expert knowledge was incorporated. In a related study\cite{saeed_mobile_only}, phone usage data were used to infer sleep duration with personalized rule-based algorithms. SleepGuard\cite{wear3} developed an end-to-end architecture for smartwatch data to estimate sleep stages, postures, movements, and the sleep environment, utilizing heuristic-based models or a linear-chain conditional random field\cite{wear4}.

\subsubsection{Estimation using machine learning models}
Machine learning models have been employed to overcome the limitations of rule-based approaches, which can lack generalizability to different populations and to the non-stationary dynamics of the environment.

Linear regression models using smartphone-derived features related to mobility, light exposure, and usage were applied to infer sleep duration\cite{StudentlifeAlgo}\cite{StudentLifeAlgo2}.
EEG and ECG data have been used to identify sleep disorders and classify sleep stages using various models, including k-nearest neighbor classifiers, convolutional neural networks (CNN)\cite{eeg2}\cite{eeg3}\cite{eeg4}, deep neural networks\cite{eeg5}, and support vector machines\cite{ecg2}. However, these models often relied on handcrafted features and may have limited generalizability.
Another work\cite{ecg_wear1} combined actigraphy data with ECG data to calculate the duration of time spent in different sleep stages using multiple machine learning models.

In this paper, instead of monitoring or estimating current sleep behaviors, we aim to predict a sleep attribute for the next day; this is a more complex learning problem but has greater translational potential for sleep management and mHealth applications. To achieve this objective, we develop a deep learning architecture that predicts the next day's sleep duration for multiple users from wearable, mobile, and weather data.

\subsubsection{Prediction using machine learning models}

Previous research has investigated the use of actigraphy sensors and various deep learning models, such as the multilayer perceptron (MLP), recurrent neural network (RNN), convolutional neural network (CNN), long short-term memory network (LSTM-RNN), and a time-batched version of LSTM-RNN (TB-LSTM), to predict sleep quality labels\cite{wear2} or the next day's sleep duration\cite{wearable11}. Our approach preserves the sequential nature of daily features and trends by utilizing an LSTM network. In addition to wearable data, we incorporate the impact of surroundings and digital media usage on sleep by using weather and mobile data. Moreover, our system performs multi-user predictions while considering contagion effects between users in a graph-based deep learning architecture.

In all the above-mentioned works, feature extraction is critical.
However, in real life, data sources can be noisy and data can be missing, which can impact the quality of the extracted features. In order to investigate the performance of our system in the wild, we provide a thorough robustness analysis of the proposed model.

\section{Data Collection}\label{dataset}

\begin{table}[]
\centering
\resizebox{0.45\columnwidth}{!}{\begin{tabular}{|cc|c|cc|}
\cline{1-2} \cline{4-5}
\multicolumn{2}{|c|}{\multirow{2}{*}{\textbf{Gender}}} & & \multicolumn{2}{c|}{\textbf{Age}} \\ \cline{4-5}
\multicolumn{2}{|c|}{} & & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Mean age = 21 years with \\ standard deviation = 1.3 years\end{tabular}} \\ \cline{1-2} \cline{4-5}
\multicolumn{1}{|c|}{Male} & 158 & & \multicolumn{1}{c|}{17--22} & 240 \\ \cline{1-2} \cline{4-5}
\multicolumn{1}{|c|}{Female} & 92 & & \multicolumn{1}{c|}{22--28} & 10 \\ \cline{1-2} \cline{4-5}
\end{tabular}}
\caption{Demographic statistics}
\label{tab:demo-stats}
\end{table}
The dataset was collected from 250 participants ($\sim$7,500 days of data in total) from 2013 to 2017. Participants were asked to identify 5 people whom they usually call or SMS at least once a week and who met the following criteria: \mbox{(i)} undergraduate student; \mbox{(ii)} owns an Android smartphone; \mbox{(iii)} aged between 18 and 60 years. People could not participate if they met any of the following conditions: \mbox{(i)} a problem wearing wrist sensors, \mbox{e.g.,} irritated skin on the wrist; \mbox{(ii)} pregnancy; \mbox{(iii)} freshman status (cohorts 1 and 3 only); \mbox{(iv)} having traveled more than one time zone away in the week prior to the study, or plans to travel more than one time zone away during the protocol (except for cohort 7). Participants were generally (though not exhaustively) required to be connected to other participants through calls or SMS at least once a week.
For the last two cohorts, participants' friends who were not enrolled in the main study also joined as friends of participants, even though they were not Android users. We preferred recruiting bigger groups of people who were friends with each other.
Each person participated for $\sim$30 consecutive days (cohorts 1--6) or $\sim$110 consecutive days (cohort 7) during one academic term; cohorts 1--6 had more participants who participated for a shorter duration, while cohort 7 had the fewest participants, who were studied the longest. All participants in the study were students at one university. The demographic statistics are listed in Table~\ref{tab:demo-stats}. Each academic term is considered a cohort, and cohort numbers were assigned in chronological order, i.e., cohort 1 was the first and cohort 7 was the last in the study. The number of participants in cohorts 1--7 was 20, 48, 46, 47, 40, 35, and 15, respectively. Data were available for 15--25 days from most participants in cohorts 1--6 and for 40--60 days from most participants in the last cohort.

During the study period, the participants (i) wore a wrist device that collected skin conductance, skin temperature, and acceleration (Q Sensor, Affectiva, USA) on their dominant hand; (ii) had an app (funf\mbox{\cite{funf}}) installed on their phone that collected metadata of calls, SMS, screen usage, and location. During the years of the study, there were very few phone apps through which SMS or calls could be made, and therefore our data collection about texting and calls made using the phone was considered to be complete; (iii) answered morning and evening daily surveys about drug and alcohol intake, sleep time, naps, exercise, and academic and extracurricular activities. At the beginning and the end of the study, participants completed social network surveys, including questions about social activity partners, emergency contacts, and roommates, to identify people with whom participants would often interact.
For more details about the data collected in the study, please refer to Supplementary Material section \ref{supp}.

The study protocols and informed consent procedure were approved by [institutions (will replace after acceptance for anonymous review)]. All participants signed an informed consent form. All methods were performed in accordance with the relevant guidelines and regulations, complying with the Declaration of Helsinki. To protect study participants' privacy and consent, and since some of the participants did not consent to sharing their data with third-party researchers, raw data will not be publicly available.
However, deidentified data, including the processed input features, labels, and graph networks extracted from phone data, which are used to evaluate the models in this work and would be needed for reproducibility, are available online\cite{datasetSpand11}.

The final dataset was deidentified, and each participant was assigned a unique ID. Windowed features extracted from wearable device data contained multiple features for activity and physiology. Phone calls, SMS, and screen time were also summed over windows and used as features (e.g., number of events, timing, number of people communicated with, distance traveled within a day or within different timeframes). Multiple features describing weather conditions (e.g., temperature, precipitation, cloud cover, pressure, humidity, sunlight, and wind) were added to the feature matrix. After removing all sleep-behavior labels, such as sleep efficiency, data from all modalities were concatenated to create one feature matrix with 317 features (details in Supplementary Material \ref{feat}).
This dataset is available at \cite{datasetSpand11}.

\section{SPAND: Sleep Prediction Architecture using Network Dynamics}

\subsection{Problem description}
We pose the problem of predicting sleep labels for the next day based on wearable and mobile data for the current and past few days.
Formally, the multi-modal data with $m$ features for a given day $k$ are represented by the vector $x[k]\in \mathbb{R}^{m}$, and the corresponding sleep duration in minutes is represented by $s[k]$. The feature matrix $X[k]$ is composed of the past $L$ days of feature data, where $L$ is an indicator of temporal memory, and the sleep label $y[k]$ is generated from the sleep duration $s[k]$ (minutes). The recommended sleep duration for young adults is between 8 and 10 hours\cite{duration1}\cite{duration2}\cite{duration3}, so we choose a threshold of 8 hours to differentiate between good and poor sleep behavior. The sleep duration distribution in the dataset is also relatively balanced around 8 hours: sleep duration in our collected dataset had a mean of 7.2 hours with a standard deviation of 2.1 hours.

\begin{equation}\label{input}
 X[k] = \{x[k-L],x[k-L+1],\ldots,x[k-1]\}
\end{equation}

\begin{equation}
 y[k] =
 \begin{cases}
 1 & \text{if } s[k] \geq 480\\
 0 & \text{otherwise}
 \end{cases}
\end{equation}
The objective of this work is to develop a data-driven model $f(\cdot)$, parameterized by $\theta$, that can predict sleep labels from the feature matrix $X$ and the participants' social network represented by $\mathcal{G}$,
\begin{eqnarray}
\theta^{*} = \arg \min_{\theta} ||y-f(X,\mathcal{G},\theta)||_2
\end{eqnarray}

\subsection{Graph Network Construction}
We extract participants' social networks from mobile phone data to analyze their impact on sleep behavior. While an alternative could be using social media friend networks, such networks are not always indicative of real interaction. Participants may be connected to people they do not interact with, or to people who do not influence their behavior. Additionally, social media networks are larger and noisier, leading the model to aggregate irrelevant information and negatively impacting prediction accuracy.
We validated social interactions among study participants using data from pre- and post-study social network surveys. These data allowed us to quantify the closeness between participants and create graph networks, which are detailed in Supplementary Material section \ref{supp-net}. Our findings affirm the presence of social interactions and phone communication among the study participants.

We represent a participant's social network information as a graph network\cite{networks}. A graph $\mathcal{G}$ is composed of two main components, nodes $\mathcal{V}$ and edges $\mathcal{E}$: $\mathcal{G} = (\mathcal{V},\mathcal{E})$. The nodes, also known as vertices, are participants who were part of the study at a given time. Since the study was conducted in seven cohorts, we cluster all participants in each cohort as part of the same graph in the initial processing. The edges are connections between nodes and are indicative of how close two participants are. We model the edges as weighted links such that the weights are proportional to the participants' interaction. For convenience with mathematical operations, we utilize the matrix representation of a graph known as an adjacency matrix $\boldsymbol{A}$, where the value at the $i^{th}$ row and $j^{th}$ column is represented by $A_{ij}$,

\begin{equation}
 A_{ij} =
 \begin{cases}
 w_{ij} & \text{if an edge $\mathcal{E}_{ij}$ exists from $\mathcal{V}_i$ to $\mathcal{V}_j$}\\
 0 & \text{otherwise}
 \end{cases}
\end{equation}
Here, $w_{ij}$ represents the weight of the edge between node $i$ and node $j$, where $i,j \in \{1,2,\ldots,|\mathcal{V}|\}$.

We construct two graphs: the call graph $\mathcal{G}_c$, represented by adjacency matrix $\boldsymbol{A^c}$, and the SMS graph $\mathcal{G}_s$, represented by $\boldsymbol{A^s}$. We design them based on call and SMS metadata, respectively, collected over a time interval $[0, T]$.
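Concretely, both graphs can be built by accumulating logged interaction events into edge weights. The sketch below is our own minimal illustration (the helper and weighting functions are hypothetical names, not from the study code); the SMS class weights follow the values $w_1 = 1$ and $w_2 = 0.5$ used in the experimental evaluation:

```python
import numpy as np

def build_adjacency(events, n_nodes, weight_fn):
    """Accumulate directed interaction events (i, j, payload) into a
    weighted adjacency matrix: A[i, j] = sum of weight_fn(payload)."""
    A = np.zeros((n_nodes, n_nodes))
    for i, j, payload in events:
        A[i, j] += weight_fn(payload)
    return A

# Call graph: each event's payload is the call duration d in seconds.
calls = [(0, 1, 30), (0, 1, 60), (1, 2, 10)]
A_call = build_adjacency(calls, 3, weight_fn=lambda d: d)   # A_call[0, 1] == 90

# SMS graph: payload is the message class; Class 1 weighs w1 = 1, Class 0 weighs w2 = 0.5.
sms = [(0, 1, 1), (0, 1, 0), (2, 0, 1)]
A_sms = build_adjacency(sms, 3, weight_fn=lambda c: 1.0 if c == 1 else 0.5)
```

Summing over all events in $[0, T]$ in this way is exactly the accumulation expressed by the equations that follow.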
Representing a call of duration $d$ seconds from participant $\mathcal{V}_i$ to $\mathcal{V}_j$ at time $t$ by $\mathcal{C}_{ij}[t]$,
\begin{equation*}
 \mathcal{C}_{ij}[t] =
 \begin{cases}
 d & \text{$\mathcal{V}_i$ calls $\mathcal{V}_j$}\\
 0 & \text{otherwise}
 \end{cases}
\end{equation*}

\begin{equation}
 A^c_{ij} = \sum_{t=0}^{T}\mathcal{C}_{ij}[t]
\end{equation}

For text messages, we consider two types of incoming messages: normal SMS with a text message body (Class 1 messages) and Flash SMS with no message body (Class 0 messages). Denoting an SMS from participant $\mathcal{V}_i$ to $\mathcal{V}_j$ at time $t$ by $S_{ij}[t]$,

\begin{equation*}
 S_{ij}[t] =
 \begin{cases}
 w_1 & \text{$\mathcal{V}_i$ sends $\mathcal{V}_j$ a Class 1 message}\\
 w_2 & \text{$\mathcal{V}_i$ sends $\mathcal{V}_j$ a Class 0 message}\\
 0 & \text{otherwise}
 \end{cases}
\end{equation*}
Because Class 1 exchanges contain real conversations and therefore indicate stronger interaction, we prioritize them by setting $w_1 > w_2$ and construct the SMS graph,
\begin{equation}
 A^s_{ij} = \sum_{t=0}^{T}S_{ij}[t]
\end{equation}
In the experimental evaluation, $w_1$ and $w_2$ were chosen to be $1$ and $0.5$, respectively.
For each node in the adjacency matrix, there are associated feature data $X$ and label data $y$. Only phone data are used to construct these networks; pre- and post-study surveys do not contribute to these graphs.

\subsection{SPAND: Proposed Architecture}

We develop a system that integrates information from multiple participants in a social network such that the contagion in sleep behavior can be utilized to better predict the next day's sleep duration.

\subsubsection{Multi-user context aggregation}
\begin{figure}[h!]
+ \includegraphics[scale=0.37]{figures/GCN_1} + \caption{Understanding the principle behind deep GCN architecture. The top figure represents a multi-layer GCN network. The bottom figures show how information aggregation is performed in each layer for a given user $U_1$.} + \label{GCN-1} +\end{figure} + + +In order to learn from both participants' data and their social networks, simple deep learning architecture such as neural networks cannot be used. The adjacency matrix representing the graph is permutation-invariant; i.e., the relative placement of nodes in the matrix is irrelevant. However, a simple neural network that uses the adjacency matrix as input would consider all permutations of the same matrix to be different inputs. Such an architecture cannot utilize graph connections between friends to aggregate information from the feature matrix in a systematic way such that the model accounts for both the user's and their friend's data when making a prediction. + + +To overcome these issues, we leverage graph convolution networks (GCN), which are inspired by spectral convolution on graphs\cite{GCN}. GCN provides a layer-wise linear propagation rule that allows the network to learn from graphs. Spectral graph convolution is the convolution of any signal $x$ with a filter $g$, where the filter $g$ is derived from the graph. In order to compute spectral convolution, we need the laplacian and degree matrix of a graph. The degree matrix $D$ is a diagonal matrix containing degrees of the nodes on the diagonal where the degree of a node is a sum of incident edges. +More details on the GCN layer can be found in\cite{GCN}. The input to the first layer is the node feature matrix $X=[x_1,..x_N]$ containing data for all users in a graph, where $x_i \in \mathbb{R}^{m}$ is the feature vector for user $i$ and $N$ is graph size. 
To predict sleep labels using this model, labels for each participant/node are used to compute the loss, and the model parameters are learned through forward and backpropagation. The training process and parameter tuning are similar to those of conventional neural networks. The only difference is that, in addition to a feature matrix $X$, a graph adjacency matrix $A$ computed from call and SMS interaction data is also used as input to the model.

A deep model consists of multiple GCN layers stacked in a sequence. The first layer aggregates information from first-hop neighbors, as shown in Fig. \ref{GCN-1}. Consider the participant $U_1$, shown as a black circular node, whose multi-hop neighbors are represented by yellow and red nodes. Blue nodes represent participants who are not connected to $U_1$ at all. The bottom left figure represents the working principle of layer 1: information from only the first-hop neighbors, in yellow, is aggregated and passed on to the next layer. In the second layer, shown on the bottom right, friends of friends of $U_1$, i.e., its second-hop neighbors, also contribute. The third layer, not expanded in the figure, extends the same principle and aggregates information from third-hop neighbors. The blue nodes that are not connected to $U_1$ never contribute, no matter how deep the network is. To summarize, the $\mathcal{H}^{th}$ hidden GCN layer aggregates features from up to $\mathcal{H}$-hop neighbors.

A key issue with this architecture is that the aggregation is based solely on edge characteristics. When the edges in the constructed graphs are noisy or are constructed from heuristics that do not capture contagion in the sleep labels being investigated, information aggregation can actually degrade performance. For instance, graph networks created from social media activity can contain connections with people who do not actually impact a participant's well-being.
Thus, an architecture that learns solely from edge characteristics is not robust to noise. Even when there is no noise, dense graphs can contain irrelevant connections that harm prediction performance. While there is evidence for the existence of contagion, its details are not known: some people mimic the sleep behavior of their friends and some do not, and the reasons are not understood. A well-defined objective model for contagion is not available either. Thus, it is important to consider other factors that reflect the similarity between two participants when aggregating information.

\subsubsection{Attention-based Context Aggregation}
To overcome the above-mentioned issues, we modify the aggregation process and deploy an attention mechanism to identify \textit{important} contributing participants. To achieve this goal, we incorporate graph attention networks (GAN). GAN considers two factors when combining information from multiple participants: graph edges and neighbor importance based on features. For a given participant $u_i$, GAN first considers all its neighbors as indicated by the adjacency matrix. In the second step, it assigns a significance coefficient to each neighbor that reflects the contribution the node makes to the prediction for $u_i$.

GAN performs information aggregation in two steps. For a graph of size $N$ with nodes $u_i$, $i\in\{1,2,..,N\}$, the input to the first layer is a stacked feature matrix $X = [x_1,x_2,...,x_N]$, where $x_i\in \mathbb{R}^{m}$ represents node $u_i$'s feature vector. The first step applies a learnable linear transformation $W\in \mathbb{R}^{\tilde{l} \times m}$ to the feature matrix to obtain a high-level representation of the feature space,
\begin{equation}
 WX = [Wx_1,Wx_2,....,Wx_N]=[\tilde{x}_1,\tilde{x}_2,....,\tilde{x}_N]
\end{equation}
In the second step, a significance coefficient $e_{ij}$, which represents how important node $u_j$ is to $u_i$, is computed.
A shared attention mechanism $a : \mathbb{R}^{\tilde{l}} \times \mathbb{R}^{\tilde{l}} \rightarrow \mathbb{R} $ is applied to the transformed feature vectors \cite{GAN},
\begin{equation}
 e_{ij} = a(\tilde{x}_i, \tilde{x}_j)
\end{equation}
The attention mechanism used in this work applies a non-linear activation $\sigma$ to the product of a trainable kernel $g \in \mathbb{R}^{2\tilde{l}}$ and the concatenated transformed feature vectors,
\begin{equation}
 e_{ij} = \sigma (g[\tilde{x}_i \ensuremath{+\!\!\!\!+\,} \tilde{x}_j])
\end{equation}
where $ \ensuremath{+\!\!\!\!+\,} $ represents the concatenation operator.

To integrate the graph structure into this attention mechanism, these coefficients are only computed for the first-hop neighbors. In addition, to allow comparison across different neighbors and stable information aggregation, the coefficients are normalized across all neighbors with a softmax. Finally, the normalized coefficients $\alpha_{ij}$ are used to compute a weighted aggregation of features from all of node $u_i$'s neighbors $\mathcal{N}_i$,
\begin{equation}
 \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{u_p\in \mathcal{N}_i} \exp(e_{ip})} \;\;\; , \;\;\; \tilde{x}'_i = \sigma \Big( \sum_{u_j\in \mathcal{N}_i} \alpha_{ij}\tilde{x}_j \Big)
\end{equation}

The attention mechanism mitigates the impact of non-contributing nodes in dense graphs and is robust to noisy edges. This architecture also offers higher generalization to graph structures not seen by the model during the training process.

\subsubsection{Dynamic user distribution in graphs}\label{GEDD}

In our dataset, the size of the extracted social networks varied because the number of participants in each cohort, and their connections, were different. The variation in the number of participants per cohort is challenging because the proposed graph-based models require both features and graph networks as input. Both of these inputs \textit{fix} the size of the input layer. When the number of participants changes, the size of the graph deviates from the model's pre-determined input size.

To overcome this problem of varying network sizes, we use an algorithm called Graph Extraction for Dynamic Distribution (GEDD)\cite{gedd}.
GEDD is a connected-component-based method that converts large dynamic graphs into a set of small graphs of size equal to the model's input size $\eta$.
GCN performs feature aggregation from up to $k$-hop neighbors in the $k^{th}$ layer and no aggregation from users that are neither direct nor indirect neighbors. GEDD exploits this property to extract graphs of size $\eta$ through connected components. A connected component of a graph is a subgraph in which each node is connected to every other node through a path \cite{graph}. For a graph $\mathcal{G}_{\tilde{N}}=(\mathcal{V},\mathcal{E})$ with $\tilde{N}$ nodes, there are $\omega$ connected components with $1\leq \omega\leq \tilde{N}$. When $\omega=1$, all the nodes in $\mathcal{G}_{\tilde{N}}$ are connected, and when $\omega=\tilde{N}$, all nodes are disconnected and have zero degree. The breakdown of a graph into connected components will result in subgraphs of varying sizes. Let $\Phi_i$
represent the $i^{th}$ connected component,
\begin{equation*}
 \Phi_i = \{\mathcal{V}^j,\mathcal{E}^j\} \;\; i\in\{1,2,..,\omega\};\; j\in\{1,2,...,\tilde{N}\}
\end{equation*}
and let $|\Phi_i|=q_i$, $1\leq q_i \leq \tilde{N}$, represent the size of the component.
First, the components are divided into two containers based on their size: the main container $\mathbb{D}$ and the residue container $\mathbb{F}$. The former will contain subgraphs of size $\eta$ and the latter will contain graphs of size $r< \eta$. This leads to three scenarios,
\begin{itemize}
 \item when $q_i=\eta$, add $\Phi_i$ to $\mathbb{D}$
 \item when $q_i<\eta$, add $\Phi_i$ to $\mathbb{F}$
 \item when $q_i>\eta$, break $\Phi_i$ into $\mathcal{Q}=\lceil \frac{q_i}{\eta} \rceil$ subgraphs $\Phi_i^b$, where $b\in\{1,2,..,\mathcal{Q}\}$.
The large component is broken such that,
 \begin{equation*}
 |\Phi_i^b| =
 \begin{cases}
 \eta & \text{$b\in\{1,2,..,\mathcal{Q}-1\}$}\\
 q_i \bmod \eta & \text{$b=\mathcal{Q}$}
 \end{cases}
 \end{equation*}
 The subgraphs that satisfy the first condition in the above equation, $\{\Phi_i^1,\Phi_i^2,...,\Phi_i^{\mathcal{Q}-1}\}$, are added to $\mathbb{D}$, and $\Phi_i^\mathcal{Q}$ is added to $\mathbb{F}$.
\end{itemize}
Once the components are divided between the two containers, the main container is ready to be fed to the model. During the process of creating smaller graphs, stronger ties are given preference, and the graph is broken at weaker ties. For the residue container, with all subgraphs smaller than $\eta$, the algorithm concatenates multiple subgraphs to create subgraphs of size $\eta$. There is still some residue left at the end when the \textit{total} number of nodes in $\mathbb{F}$ is less than $\eta$. For this last set, the algorithm uses repetition of nodes to create a final subgraph of size $\eta$.

\subsubsection{Temporal Dynamics}
While the graph-based models proposed in the earlier sections are able to learn from multiple participants, the data collected from wearables and mobile phones also contain latent temporal trends, for which the data need to be analyzed over a period of time. When utilizing multi-modal data for sleep behavior prediction, it is important to realize that instantaneous values of many sleep-related indicators might not be very informative on their own; several factors lead to a certain state. For example, today's sleep duration may not be explained by data collected during that day but instead by the temporal dynamics of the same features over the past few days. To integrate these dynamics into the model, we develop a spatiotemporal model that captures the multi-participant spatial domain through graph-based networks; for the temporal domain, we employ long short-term memory (LSTM) networks \cite{lstm}, with recurrently connected memory modules.
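As an illustration of how daily data can be arranged for such a recurrent model, the sketch below windows one participant's daily feature vectors into fixed-length sequences paired with a future label; the function name and array shapes are our assumptions, not the study's pipeline code.

```python
import numpy as np

def make_sequences(X, y, L, horizon=1):
    """Slide a window over daily data for one participant.
    X: (days, n) daily feature vectors; y: (days,) daily sleep labels.
    Each sample stacks the L+1 vectors x[k-L]..x[k], and its target is
    the label `horizon` days ahead, y[k+horizon]."""
    samples, targets = [], []
    for k in range(L, len(X) - horizon):
        samples.append(X[k - L:k + 1])   # past L days plus the current day
        targets.append(y[k + horizon])   # future sleep label
    return np.array(samples), np.array(targets)
```

Each (sequence, target) pair can then be fed to the recurrent model in a supervised fashion.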
For the sleep duration label prediction problem, we extract the sequential information in the features to predict the sleep label $y$. For a given participant, let $x[k]\in \mathbb{R}^n$ and $y[k]$ represent the feature vector and sleep label for day $k$, respectively. If $L\in \mathbb{Z}^+$ is the length of the sequence, then we create an $n\times (L+1)$ sequence matrix,
\begin{equation}
 S_k^L =\big[x[k-L], x[k-L+1],...,x[k]\big]
\end{equation}
This sequence matrix serves as an input feature matrix for which we predict the future sleep label $y[k+l]$. Here $l \in \mathbb{Z}^+$ represents how far in the future we want to make the prediction. The tuples $(S_k^L,y[k+l])$ are used to train an LSTM network in a supervised fashion. The model weights are learned through conventional forward and backward propagation with gradient descent.

\subsubsection{End-to-End Model}
\begin{figure}
 \centering
\includegraphics[width=0.9\textwidth]{figures/framework_updated.png}
 \caption{SPAND Architecture. The top module, based on GAN, takes both the feature matrix and the graph as input. The layer operates on graphs by applying attention, shown by blue dashed links; original graph links are shown as solid green lines. The bottom module takes sequential data as input and extracts temporal trends. Finally, embedded features from both modules are combined in the aggregation module that produces the predicted sleep labels.}
 \label{fig:arch}
\end{figure}
The complete architecture of SPAND is shown in Fig. \ref{fig:arch}.
The input to the model is a 3-tuple object $I = (X[k],S^L_k,A_N^k)$, where $X[k]= \left[ x_1[k],x_2[k],...,x_N[k] \right]$ is a matrix containing the feature vectors of $N$ participants, each represented by $x_i \in \mathbb{R}^m$, for the day indexed by $k$. The sequence data fed to the LSTM unit is represented by $S^L_k$.
The graph network corresponding to the $N$ participants in feature matrix $X[k]$ is represented by the adjacency matrix $A_N^k$. The corresponding label is the next day's sleep duration for all $N$ participants, represented by $y[k+1]$.
After extracting temporal dynamics and multi-participant aggregated features in an embedded space, the outputs of both modules are normalized and concatenated before being fed to a small network of dense layers that produces the final output.

\subsection{Performance Evaluation}
Multiple experiments are conducted to evaluate different aspects of the proposed model. First, we quantify the improvement in prediction, if any, provided by aggregating information from multiple participants of a social network. We further investigate the impact of different network sizes and temporal memory on prediction performance. The preliminary evaluation with both SMS and call graphs found little difference in performance. However, we report the results for SMS graphs because they are denser and thus contain a wider distribution of network topologies (and therefore centralities). This is helpful in obtaining insights into how a user's position in the network impacts the model's robustness.

\subsubsection{Performance Comparison Models}
It is important to benchmark the proposed method against models that do not integrate the social network into the prediction of sleep duration behavior, to quantify the impact of network integration. To achieve this goal, we consider two baseline models:
\begin{itemize}

 \item \textbf{CONV-LSTM: } As mentioned in section \ref{relatedwork}, multiple works use convolutional neural networks (CNN) in sleep behavior estimation models from EEG and wearable data. Also, in recent work \cite{han2}, CNN was utilized to aggregate information from multiple modalities for well-being prediction.
Thus, we utilize CNN to learn the multi-participant spatial information and LSTM for the temporal dynamics, followed by dense and dropout layers. This architecture is identical to the proposed method except that the graph attention network is replaced by CNN.

 \item \textbf{LSTM only:} In order to observe the improvement provided by graph integration, we also evaluate a model with LSTM layers \cite{lstm_ref} only and no GAN. Similar to the proposed model, the LSTM layer is followed by multiple dense and dropout layers.

\end{itemize}

\subsubsection{Machine Learning Pipeline}\label{ml-pipe}
There are three main steps in preprocessing the data: handling missing data, removing outliers, and standardizing the multimodal data. The details of preprocessing are provided in Supplementary Material section \ref{preprocess}.
After preprocessing the feature data and labels, the data are aligned with the corresponding graphs, as each sample of $N$ participants has an associated 3-tuple: $(X[k], S^L_k, A_N^k)$. In this tuple, $k$ is an index for time (day), $L$ represents the temporal memory, and $N$ represents the number of users in the network.

The data for all the users across the cohorts are combined in one batch, where each sample of the batch is composed of a graph network for a given day, the corresponding daily feature vectors for all users, and the sequential data for the past $L$ days for those users. Once the batch of 3-tuple data samples is created, it is split into training and test sets in two ways:
\begin{itemize}
 \item \textbf{Random Split: } The dataset is randomly split into training and test sets. We use random splitting to avoid selection bias. Around $50\%$ of the data is used for training, $10\%$ for validation, and the remaining $40\%$ for testing with SMS graphs. The same training data is used to train all four models, and then the same test data is used for testing across all models.
This ensures that a fair comparison is conducted between the models. To mitigate the impact of model sensitivity to parameter initialization and demonstrate the stability of the model, we use bootstrapping\cite{bootstrap}\cite{bootstrap2}: we repeat the training and testing procedure 25 times and report average results along with the standard deviation across trials.

 \item \textbf{Leave-one-cohort-out (LOCO): } A random split introduces a chance of data from the same subjects appearing in both the training and test sets. To conduct a subject-independent evaluation, we randomly choose one cohort as the test set and use the remaining cohorts for the training and validation sets. Again, the same training data is used to train all four models, and then the same test data is used for testing across all models. For bootstrapping, this process is repeated six times without test set replacement, i.e., the test-set cohort is different in every trial. Since the subjects in all cohorts are different, this testing is also equivalent to leave-subjects-out cross-validation.

\end{itemize}

To reduce the computation time, we use early stopping in the training process with a patience value of 30 epochs and a minimum increment of $\delta =0.01$. This ensures that if the validation loss does not change by more than $\delta$ for more than 30 epochs, the training process stops. We utilize the ADAM optimizer with the same learning rate (0.001) for all models.

\subsubsection{Evaluation Metrics}
The output of the model is the sleep label, which is a binary variable, for all the participants in the graph.
We report the accuracy score (0-1), which is computed by dividing the total number of correct predictions by the total number of predictions. To establish statistical significance between the different models' performances, we conduct an Analysis of Variance (ANOVA) test\mbox{\cite{anova}}.
ANOVA compares the performances of the four models across all trials and tests the null hypothesis that there is no statistically significant difference between the performances of the different models. If the p-value is less than $\alpha=0.05$, we reject the null hypothesis. For further pair-wise comparison between the proposed models and the benchmarks, we conduct a Tukey HSD (honestly significant difference) test\mbox{\cite{tukeyHSD}} for each pair of a proposed model and a benchmark.

\section{Robustness Analysis against Data Perturbations}
Graph-based architectures exploiting contagion to complement the prediction model can expose the model to vulnerabilities caused by noisy or missing data. There are two key challenges in the development of a robust system in this context. First, wearable and mobile sensors provide high-resolution multi-modal data, allowing us to train the system on a large feature space. However, in real-world scenarios, a larger feature space is more prone to sensor noise and missing data. Second, noisy graphs with irrelevant edges can also degrade performance by aggregating information from nodes with missing or noisy data. To investigate the impact of perturbations in the data, we conduct a systematic series of experiments.

To this end, we create artificial perturbations in the testing data along multiple dimensions and utilize different metrics to identify how different parts of the network are affected by these perturbations. For comparison, we repeat the same process for the benchmark models defined in the previous section and report their performance.

\subsection{Artificial Perturbations}
\begin{figure}[t]
 \includegraphics[width=15cm]{figures/pert}
 \caption{Different scenarios for perturbing the system. The circles represent nodes of the graph and the corresponding array represents a feature vector for that participant.
The red arrow represents perturbation, and red nodes indicate that those participants are perturbed. Yellow nodes are in their original state. (a) The original state of the system without any perturbation. (b) Two randomly chosen features, $f_2$ and $f_4$, in red color, are perturbed in all participants. (c) Randomly chosen features in two further randomly chosen participants, $u_2$ and $u_4$, are perturbed.}
 \label{pert}
\end{figure}
The data are perturbed at two levels: the feature level and the participant level. For feature-level perturbations, we assume that feature $f_i$, where $i\in \{1,2,..,m\}$, follows an underlying univariate Gaussian distribution, $\mathcal{N}(\mu_{f_i} ,\sigma_{f_i} )$. After estimating the parameters of the distribution from data, the perturbation is created by replacing feature data with values randomly sampled from the learned distribution with an extended standard deviation. Note that if we created aggressive perturbations by replacing data with values that the feature does not often take, those would be detected as outliers and filtered out in the preprocessing phase. Instead, we create noise and/or missingness by imputing values that would not be detected in the preprocessing phase or would actually be used for imputation if data were missing.

We learn the parameters $(\mu_{f_i},\sigma_{f_i})$ of this distribution (mean and standard deviation) using maximum likelihood estimation (MLE) \cite{prob}. The derivation of the estimated parameters is provided in Supplementary Material \ref{robust-noise}.
We expand the distribution slightly by increasing the variance by a factor of three for generalization.
Once the distributions of the features are learned, we investigate three cases. For ease of understanding, let us first review the original state of the system, as shown in Fig. \ref{pert}(a).
The yellow circles represent the participants $u_j$ connected in a network, and each participant has an associated feature vector $x$ consisting of features $f_i$, where $i \in \{1,2,..,m\}$.

In the first scenario, we consider feature-level perturbation across all participants. First, a random subset $\tilde{I}_f$ of size $\tilde{m} < m$ of the features is chosen and perturbed in all participants, as shown in Fig. \ref{pert}(b).

\subsection{Centrality Measures}

\begin{itemize}
\item \textbf{Degree centrality} quantifies the importance of a node by the number of its direct neighbors. For a graph with adjacency matrix $A$, the degree centrality $C_d$ of a node $v$ is computed by applying an indicator function $\mathbb{1}(\cdot)$ to the entries of the corresponding row of $A$,
\begin{equation}
 C_d(v) = \sum_{u} \mathbb{1}(A_{vu}) \;\;\; , \;\;\; \mathbb{1}(x) =
 \begin{cases}
 1 & \text{$x > 0$}\\
 0 & \text{$x = 0$}
 \end{cases}
\end{equation}

Degree centrality assigns higher importance to nodes that have a large number of neighbors. However, it does not account for the cascade effect resulting from small-degree nodes connected to a few but influential nodes.

\item \textbf{Eigenvalue centrality} quantifies the influence of a node in a network by measuring the node's closeness to influential parts of the network. It combines the degree of a node with the degrees of its neighbors. For a graph $\mathcal{G}$ with adjacency matrix $A$, the eigenvalue centrality $C_e$ of a node $v$ is calculated by \cite{net2},
\begin{equation}\label{eigen}
 C_e (v)= \alpha \sum_{(u,v)\in \mathcal{E}}C_e(u) \;\;\; , \;\;\; AC_e = \alpha^{-1}C_e
\end{equation}
where $C_e$ and $1/\alpha$ are the eigenvector and corresponding eigenvalue of $A$, respectively.

\end{itemize}

\subsection{Noise Distribution}\label{robust-noise}

We learn the parameters $(\mu_{f_i},\sigma_{f_i})$ of the feature distribution (mean and standard deviation) using maximum likelihood estimation (MLE) \cite{prob}.
The likelihood function for MLE maximizes the likelihood of observing the feature data $X_{f_i}$ given the distribution parameters $(\mu_{f_i},\sigma_{f_i})$,
\begin{equation}
\mathcal{L} = P(X_{f_i} |\mu_{f_i},\sigma_{f_i} ) \;\;\; , \;\;\; \mathcal{L} = \mathcal{N}(X_{f_i} |\mu_{f_i},\sigma_{f_i} )
\end{equation}
We estimate the parameters by maximizing the log of the likelihood function.
Since we assume the data samples are independent, the log allows us to take a summation over the samples, indexed by $v$ and represented by $x_i^v$,

\begin{equation}
 \hat{\mu}_{f_i} = \underset{\mu_{f_i}}{\mathrm{argmax}}\, \sum_{v} \log \big( \mathcal{N}(x_i^v |\mu_{f_i},\sigma_{f_i} ) \big) \;\;\; , \;\;\; \hat{\sigma}_{f_i} = \underset{\sigma_{f_i}}{\mathrm{argmax}}\, \sum_{v} \log \big( \mathcal{N}(x_i^v |\mu_{f_i},\sigma_{f_i} )\big)
\end{equation}
Plugging in the expression for the Gaussian distribution, expanding the terms as a summation, and setting the first derivative equal to zero provides the following expressions,
\begin{equation}
 \hat{\mu}_{f_i} = \frac{1}{V} \sum_{v} x_i^v \;\;\; , \;\;\; \hat{\sigma}^2_{f_i} = \frac{1}{V} \sum_{v} (x_i^v - \hat{\mu}_{f_i})^2
\end{equation}

where $V$ represents the total number of samples.
Finally, we expand the distribution slightly by increasing the variance by a factor of three for generalization.

\section{Additional Results}\label{add}

\subsection{Statistical Analysis}
The p-values obtained from the Tukey HSD test for pairwise comparisons between the proposed and benchmark models are reported in Table \ref{tab:tukey}.

\begin{table}[]
\centering
\resizebox{0.5\columnwidth}{!}{\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Proposed Model} & \textbf{Benchmark} & \textbf{p-value (Random Split)} & \textbf{p-value (LOCO)} \\ \hline
GCN-LSTM & LSTM & 0.0001 & 0.0000 \\ \hline
GCN-LSTM & CONV-LSTM & 0.0002 & 0.001 \\ \hline
GAN-LSTM & LSTM & 0.0000 & 0.0000 \\ \hline
GAN-LSTM & CONV-LSTM & 0.0000 & 0.0000 \\ \hline
\end{tabular}}
\caption{Tukey-HSD adjusted p-values.}
\label{tab:tukey}
\end{table}

\subsection{Impact of Network Characteristics}

We conducted another experiment to assess the impact of network size on the model's performance with an extended sequence length of $L=7$ days; the results are shown in Fig. \ref{fig:extra_net}(a).
Similar to previous findings, increasing the network size enhances performance until it plateaus at approximately $N=15$ for the proposed models. GCN experiences an accuracy dip after $N=15$, while GAN and CONV-LSTM continue to improve. We also repeated the experiment with varying sequence lengths while keeping the network size fixed at 15, as depicted in Fig. \ref{fig:extra_net}(b). Graph-based models perform exceptionally well at $L=3$, but longer memory adversely affects their performance. Conversely, the benchmark models initially dip or show a marginal change in performance, but they improve significantly when the memory extends beyond 5 days.

\begin{figure}
 \centering
 \subfloat[][\centering Impact of graph size with long sequence]{{\includegraphics[width=5.5cm]{figures/N_fixedL2.jpg} }}\qquad
 \subfloat[][\centering Impact of temporal memory with large network]{{\includegraphics[width=5.5cm]{figures/L_fixedN2.jpg} }}\caption{Impact of different parameters on performance}\label{fig:extra_net}\end{figure}

\subsection{Robustness Analysis}
\begin{figure}[h]
 \centering
 \subfloat[][\centering Robustness against percentage of perturbed features in 5 users]{{\includegraphics[width=0.47\textwidth]{figures/missing_n_5.png} }}\qquad
 \subfloat[][\centering Robustness against percentage of perturbed features in 13 users]{{\includegraphics[width=0.47\textwidth]{figures/missing_n_13.png} }}\caption{Impact on MAPE of varying quantities of perturbations in graphs of size 15 with a sequence length of 3 days.}\label{fig:add_nf1}\end{figure}

\subsubsection{Perturbations in temporal domain}\label{pertT}

In this section, we present results for perturbations in different parts of the sequence. Note that in all previous experiments, the sequence index was kept fixed and perturbations were only added to data from the current day, while the prediction was being made for the next day.
The feature matrix $S_k^L$ fed to the LSTM module consists of the past $L$ days of data,

\begin{equation}
 S_k^L =\big[x[k-L], x[k-L+1],...,x[k]\big]
\end{equation}

We randomly choose a subset of indices $\tilde{I}_L$ of size $\tilde{L} < L$