Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Settings of Studies <s> This study investigated qualitatively the experiences of men who took part in a 10 week integrated exercise/psychosocial mental health promotion programme, “Back of the Net” (BTN). 15 participants who completed the BTN programme were recruited to participate in either a focus group discussion (N = 9) or individual interview (N = 6). A thematic analytic approach was employed to identify key themes in the data. Results indicated that participants felt that football was a positive means of engaging men in a mental health promotion program. Perceived benefits experienced included perceptions of mastery, social support, positive affect and changes in daily behaviour. The findings support the value of developing gender specific mental health interventions to both access and engage young men. <s> BIB001 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Settings of Studies <s> OBJECTIVE ::: To evaluate the efficacy of a home-based exercise programme added to usual medical care for the treatment of depression. ::: ::: ::: DESIGN ::: Prospective, two group parallel, randomised controlled study. ::: ::: ::: SETTING ::: Community-based. ::: ::: ::: PATIENTS ::: 200 adults aged 50 years or older deemed to be currently suffering from a clinical depressive illness and under the care of a general practitioner. ::: ::: ::: INTERVENTIONS ::: Participants were randomly allocated to either usual medical care alone (control) or usual medical care plus physical activity (intervention). The intervention consisted of a 12-week home-based programme to promote physical activity at a level that meets recently published guidelines for exercise in people aged 65 years or over. ::: ::: ::: MAIN OUTCOME MEASUREMENTS ::: Severity of depression was measured with the structured interview guide for the Montgomery-Asberg Depression Rating Scale (SIGMA), and depression status was assessed with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I). ::: ::: ::: RESULTS ::: Remission of depressive illness was similar in both the usual care (59%) and exercise groups (63%; OR = 1.18, 95% CI 0.61 to 2.30) at the end of the 12-week intervention, and again at the 52-week follow-up (67% vs 68%) (OR=1.07, 95% CI 0.56 to 2.02). There was no change in objective measures of fitness over the 12-week intervention among the exercise group. ::: ::: ::: CONCLUSIONS ::: This home-based physical activity intervention failed to enhance fitness and did not ameliorate depressive symptoms in older adults, possibly due to a lack of ongoing supervision to ensure compliance and optimal engagement. <s> BIB002 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Settings of Studies <s> Abstract We developed a physical exercise intervention aimed at improving multiple determinants of physical performance in severe mental illness. A sample of 12 (9M, 3F) overweight or obese community-dwelling patients with schizophrenia ( n =9) and bipolar disorder ( n =3) completed an eight-week, high-velocity circuit resistance training, performed twice a week on the computerized Keiser pneumatic exercise machines, including extensive pre/post physical performance testing. 
Participants showed significant increases in strength and power in all major muscle groups. There were significant positive cognitive changes, objectively measured with the Brief Assessment of Cognition Scale: improvement in composite scores, processing speed and symbol coding. Calgary Depression Scale for Schizophrenia and Positive and Negative Syndrome Scale total scores improved significantly. There were large gains in neuromuscular performance that have functional implications. The cognitive domains that showed the greatest improvements (memory and processing speed) are most highly predictive of disability in schizophrenia. Moreover, the improvements seen in depression suggest this type of exercise intervention may be a valuable add-on therapy for bipolar depression. <s> BIB003 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Settings of Studies <s> The health benefits of exercise are well established, yet individuals with serious mental illness (SMI) have a shorter life expectancy due in large part to physical health complications associated with poor diet and lack of exercise. There is a paucity of research examining exercise in this population with the majority of studies having examined interventions with limited feasibility and sustainability. Before developing an intervention, a thorough exploration of client and clinician perspectives on exercise and its associated barriers is warranted. Twelve clients and fourteen clinicians participated in focus groups aimed at examining exercise, barriers, incentives, and attitudes about walking groups. Results indicated that clients and clinicians identified walking as the primary form of exercise, yet barriers impeded consistent participation. Distinct themes arose between groups; however, both clients and clinicians reported interest in a combination group/pedometer based walking program for individuals with SMI. Future research should consider examining walking programs for this population. <s> BIB004 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Settings of Studies <s> BackgroundIndividuals with a severe mental illness (SMI) are at least two times more likely to suffer from metabolic co-morbidities, leading to excessive and premature deaths. In spite of the many physical and mental health benefits of physical activity (PA), individuals with SMI are less physically active and more sedentary than the general population. One key component towards increasing the acceptability, adoption, and long-term adherence to PA is to understand, tailor and incorporate the PA preferences of individuals. Therefore, the objective of this study was to determine if there are differences in PA preferences among individuals diagnosed with different psychiatric disorders, in particular schizophrenia or bipolar disorder (BD), and to identify PA design features that participants would prefer.MethodsParticipants with schizophrenia (n = 113) or BD (n = 60) completed a survey assessing their PA preferences.ResultsThere were no statistical between-group differences on any preferred PA program design feature between those diagnosed with schizophrenia or BD. As such, participants with either diagnosis were collapsed into one group in order to report PA preferences. 
Walking (59.5 %) at moderate intensity (61.3 %) was the most popular activity and participants were receptive to using self-monitoring tools (59.0 %). Participants were also interested in incorporating strength and resistance training (58.5 %) into their PA program and preferred some level of regular contact with a fitness specialist (66.0 %).ConclusionsThese findings can be used to tailor a physical activity intervention for adults with schizophrenia or BD. Since participants with schizophrenia or BD do not differ in PA program preferences, the preferred features may have broad applicability for individuals with any SMI. <s> BIB005 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Settings of Studies <s> Previous qualitative studies have found that exercise may facilitate symptomatic and functional recovery in people with long-term schizophrenia. This study examined the perceived effects of exercise as experienced by people in the early stages of psychosis, and explored which aspects of an exercise intervention facilitated or hindered their engagement. Nineteen semi-structured interviews were conducted with early intervention service users who had participated in a 10-week exercise intervention. Interviews discussed people’s incentives and barriers to exercise, short- and long-term effects, and opinions on optimal interventions. A thematic analysis was applied to determine the prevailing themes. The intervention was perceived as beneficial and engaging for participants. The main themes were (a) exercise alleviating psychiatric symptoms, (b) improved self-perceptions following exercise, and (c) factors determining exercise participation, with three respective sub-themes for each. Participants explained how exercise had improved their mental health, improved their confidence and given them a sense of achievement. Autonomy and social support were identified as critical factors for effectively engaging people with first-episode psychosis in moderate-to-vigorous exercise. Implementing such programs in early intervention services may lead to better physical health, symptom management and social functioning among service users. Current Controlled Trials ISRCTN09150095. Registered 10 December 2013. <s> BIB006 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Settings of Studies <s> Abstract Introduction Measurement of symptoms domains and their response to treatment in relative isolation from diagnosed mental disorders has gained new urgency, as reflected by the National Institute of Mental Health's introduction of the Research Domain Criteria (RDoC). The Snaith Hamilton Pleasure Scale (SHAPS) and the Motivation and Energy Inventory (MEI) are two scales measuring positive valence symptoms. We evaluated the effect of exercise on positive valence symptoms of Major Depressive Disorder (MDD). Methods Subjects in the Treatment with Exercise Augmentation for Depression (TREAD) study completed self-reported SHAPS and MEI during 12 weeks of exercise augmentation for depression. We evaluated the effect of exercise on SHAPS and MEI scores, and whether the changes were related to overall MDD severity measured with the Quick Inventory of Depression Symptomatology (QIDS). Results SHAPS and MEI scores significantly improved with exercise. MEI score change had larger effect size and greater correlation with change in QIDS score. 
MEI also showed significant moderator and mediator effects of exercise in MDD. Limitations Generalizability to other treatments is limited. This study lacked other bio-behavioral markers that would enhance understanding of the relationship of RDoC and the measures used. Conclusions Positive valence symptoms improve with exercise treatment for depression, and this change correlates well with overall outcome. Motivation and energy may be more clinically relevant to outcome of exercise treatment than anhedonia. <s> BIB007
The settings spanned multiple countries. Studies were based in Norway (Bonsaksen & Lerdal, 2012), the United States (Brown et al., 2015; Powers et al., 2015; BIB003 BIB007), Canada BIB005, Australia BIB002, Germany (Oertel-Knochel et al., 2014), Ireland (McArdle et al., 2012), and the United Kingdom BIB006. The studies were also conducted in a variety of facilities: inpatient psychiatric units (Bonsaksen & Lerdal, 2012; Oertel-Knochel et al., 2014), outpatient settings BIB004 BIB006 BIB005 BIB007, physical training facilities BIB001 BIB003, participants' homes BIB002, and settings that were not clearly described (Powers et al., 2015). Although the samples themselves were clearly described, they varied in size and in the diagnoses included.
Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Sample Sizes of Studies <s> This study investigated qualitatively the experiences of men who took part in a 10 week integrated exercise/psychosocial mental health promotion programme, “Back of the Net” (BTN). 15 participants who completed the BTN programme were recruited to participate in either a focus group discussion (N = 9) or individual interview (N = 6). A thematic analytic approach was employed to identify key themes in the data. Results indicated that participants felt that football was a positive means of engaging men in a mental health promotion program. Perceived benefits experienced included perceptions of mastery, social support, positive affect and changes in daily behaviour. The findings support the value of developing gender specific mental health interventions to both access and engage young men. <s> BIB001 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Sample Sizes of Studies <s> OBJECTIVE ::: To evaluate the efficacy of a home-based exercise programme added to usual medical care for the treatment of depression. ::: ::: ::: DESIGN ::: Prospective, two group parallel, randomised controlled study. ::: ::: ::: SETTING ::: Community-based. ::: ::: ::: PATIENTS ::: 200 adults aged 50 years or older deemed to be currently suffering from a clinical depressive illness and under the care of a general practitioner. ::: ::: ::: INTERVENTIONS ::: Participants were randomly allocated to either usual medical care alone (control) or usual medical care plus physical activity (intervention). The intervention consisted of a 12-week home-based programme to promote physical activity at a level that meets recently published guidelines for exercise in people aged 65 years or over. ::: ::: ::: MAIN OUTCOME MEASUREMENTS ::: Severity of depression was measured with the structured interview guide for the Montgomery-Asberg Depression Rating Scale (SIGMA), and depression status was assessed with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I). ::: ::: ::: RESULTS ::: Remission of depressive illness was similar in both the usual care (59%) and exercise groups (63%; OR = 1.18, 95% CI 0.61 to 2.30) at the end of the 12-week intervention, and again at the 52-week follow-up (67% vs 68%) (OR=1.07, 95% CI 0.56 to 2.02). There was no change in objective measures of fitness over the 12-week intervention among the exercise group. ::: ::: ::: CONCLUSIONS ::: This home-based physical activity intervention failed to enhance fitness and did not ameliorate depressive symptoms in older adults, possibly due to a lack of ongoing supervision to ensure compliance and optimal engagement. <s> BIB002 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Sample Sizes of Studies <s> The health benefits of exercise are well established, yet individuals with serious mental illness (SMI) have a shorter life expectancy due in large part to physical health complications associated with poor diet and lack of exercise. There is a paucity of research examining exercise in this population with the majority of studies having examined interventions with limited feasibility and sustainability. 
Before developing an intervention, a thorough exploration of client and clinician perspectives on exercise and its associated barriers is warranted. Twelve clients and fourteen clinicians participated in focus groups aimed at examining exercise, barriers, incentives, and attitudes about walking groups. Results indicated that clients and clinicians identified walking as the primary form of exercise, yet barriers impeded consistent participation. Distinct themes arose between groups; however, both clients and clinicians reported interest in a combination group/pedometer based walking program for individuals with SMI. Future research should consider examining walking programs for this population. <s> BIB003 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Sample Sizes of Studies <s> Abstract We developed a physical exercise intervention aimed at improving multiple determinants of physical performance in severe mental illness. A sample of 12 (9M, 3F) overweight or obese community-dwelling patients with schizophrenia ( n =9) and bipolar disorder ( n =3) completed an eight-week, high-velocity circuit resistance training, performed twice a week on the computerized Keiser pneumatic exercise machines, including extensive pre/post physical performance testing. Participants showed significant increases in strength and power in all major muscle groups. There were significant positive cognitive changes, objectively measured with the Brief Assessment of Cognition Scale: improvement in composite scores, processing speed and symbol coding. Calgary Depression Scale for Schizophrenia and Positive and Negative Syndrome Scale total scores improved significantly. There were large gains in neuromuscular performance that have functional implications. The cognitive domains that showed the greatest improvements (memory and processing speed) are most highly predictive of disability in schizophrenia. Moreover, the improvements seen in depression suggest this type of exercise intervention may be a valuable add-on therapy for bipolar depression. <s> BIB004 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Sample Sizes of Studies <s> Previous qualitative studies have found that exercise may facilitate symptomatic and functional recovery in people with long-term schizophrenia. This study examined the perceived effects of exercise as experienced by people in the early stages of psychosis, and explored which aspects of an exercise intervention facilitated or hindered their engagement. Nineteen semi-structured interviews were conducted with early intervention service users who had participated in a 10-week exercise intervention. Interviews discussed people’s incentives and barriers to exercise, short- and long-term effects, and opinions on optimal interventions. A thematic analysis was applied to determine the prevailing themes. The intervention was perceived as beneficial and engaging for participants. The main themes were (a) exercise alleviating psychiatric symptoms, (b) improved self-perceptions following exercise, and (c) factors determining exercise participation, with three respective sub-themes for each. Participants explained how exercise had improved their mental health, improved their confidence and given them a sense of achievement. 
Autonomy and social support were identified as critical factors for effectively engaging people with first-episode psychosis in moderate-to-vigorous exercise. Implementing such programs in early intervention services may lead to better physical health, symptom management and social functioning among service users. Current Controlled Trials ISRCTN09150095. Registered 10 December 2013. <s> BIB005 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Sample Sizes of Studies <s> Abstract Introduction Measurement of symptoms domains and their response to treatment in relative isolation from diagnosed mental disorders has gained new urgency, as reflected by the National Institute of Mental Health's introduction of the Research Domain Criteria (RDoC). The Snaith Hamilton Pleasure Scale (SHAPS) and the Motivation and Energy Inventory (MEI) are two scales measuring positive valence symptoms. We evaluated the effect of exercise on positive valence symptoms of Major Depressive Disorder (MDD). Methods Subjects in the Treatment with Exercise Augmentation for Depression (TREAD) study completed self-reported SHAPS and MEI during 12 weeks of exercise augmentation for depression. We evaluated the effect of exercise on SHAPS and MEI scores, and whether the changes were related to overall MDD severity measured with the Quick Inventory of Depression Symptomatology (QIDS). Results SHAPS and MEI scores significantly improved with exercise. MEI score change had larger effect size and greater correlation with change in QIDS score. MEI also showed significant moderator and mediator effects of exercise in MDD. Limitations Generalizability to other treatments is limited. This study lacked other bio-behavioral markers that would enhance understanding of the relationship of RDoC and the measures used. Conclusions Positive valence symptoms improve with exercise treatment for depression, and this change correlates well with overall outcome. Motivation and energy may be more clinically relevant to outcome of exercise treatment than anhedonia. <s> BIB006
Sample sizes ranged from the smallest (N = 9) (Powers et al., 2015) to the largest (N = 200) BIB002, giving a mean sample size across the review of 64.2. Some studies had an even representation of male and female subjects, while others skewed toward males BIB001 or females (Powers et al., 2015). One study sampled both clients and clinicians BIB003. The samples were defined by mental illness diagnosis, including depression or major depressive disorder (MDD), anxiety, schizophrenia, bipolar disorder, and posttraumatic stress disorder (PTSD). Six of the articles addressed depression or MDD (Bonsaksen & Lerdal, 2012; BIB003 BIB001 Oertel-Knochel et al., 2014; BIB002 BIB006), and one addressed PTSD (Powers et al., 2015). Exercise as an intervention was found to be beneficial for treating mental illness in six studies (BIB005 BIB001 Oertel-Knochel et al., 2014; Powers et al., 2015; BIB004 BIB006), along with other related subthemes.
Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Other Themes of Studies <s> This study investigated qualitatively the experiences of men who took part in a 10 week integrated exercise/psychosocial mental health promotion programme, “Back of the Net” (BTN). 15 participants who completed the BTN programme were recruited to participate in either a focus group discussion (N = 9) or individual interview (N = 6). A thematic analytic approach was employed to identify key themes in the data. Results indicated that participants felt that football was a positive means of engaging men in a mental health promotion program. Perceived benefits experienced included perceptions of mastery, social support, positive affect and changes in daily behaviour. The findings support the value of developing gender specific mental health interventions to both access and engage young men. <s> BIB001 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Other Themes of Studies <s> OBJECTIVE ::: To evaluate the efficacy of a home-based exercise programme added to usual medical care for the treatment of depression. ::: ::: ::: DESIGN ::: Prospective, two group parallel, randomised controlled study. ::: ::: ::: SETTING ::: Community-based. ::: ::: ::: PATIENTS ::: 200 adults aged 50 years or older deemed to be currently suffering from a clinical depressive illness and under the care of a general practitioner. ::: ::: ::: INTERVENTIONS ::: Participants were randomly allocated to either usual medical care alone (control) or usual medical care plus physical activity (intervention). The intervention consisted of a 12-week home-based programme to promote physical activity at a level that meets recently published guidelines for exercise in people aged 65 years or over. ::: ::: ::: MAIN OUTCOME MEASUREMENTS ::: Severity of depression was measured with the structured interview guide for the Montgomery-Asberg Depression Rating Scale (SIGMA), and depression status was assessed with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I). ::: ::: ::: RESULTS ::: Remission of depressive illness was similar in both the usual care (59%) and exercise groups (63%; OR = 1.18, 95% CI 0.61 to 2.30) at the end of the 12-week intervention, and again at the 52-week follow-up (67% vs 68%) (OR=1.07, 95% CI 0.56 to 2.02). There was no change in objective measures of fitness over the 12-week intervention among the exercise group. ::: ::: ::: CONCLUSIONS ::: This home-based physical activity intervention failed to enhance fitness and did not ameliorate depressive symptoms in older adults, possibly due to a lack of ongoing supervision to ensure compliance and optimal engagement. <s> BIB002 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Other Themes of Studies <s> The health benefits of exercise are well established, yet individuals with serious mental illness (SMI) have a shorter life expectancy due in large part to physical health complications associated with poor diet and lack of exercise. There is a paucity of research examining exercise in this population with the majority of studies having examined interventions with limited feasibility and sustainability. 
Before developing an intervention, a thorough exploration of client and clinician perspectives on exercise and its associated barriers is warranted. Twelve clients and fourteen clinicians participated in focus groups aimed at examining exercise, barriers, incentives, and attitudes about walking groups. Results indicated that clients and clinicians identified walking as the primary form of exercise, yet barriers impeded consistent participation. Distinct themes arose between groups; however, both clients and clinicians reported interest in a combination group/pedometer based walking program for individuals with SMI. Future research should consider examining walking programs for this population. <s> BIB003 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Other Themes of Studies <s> Abstract We developed a physical exercise intervention aimed at improving multiple determinants of physical performance in severe mental illness. A sample of 12 (9M, 3F) overweight or obese community-dwelling patients with schizophrenia ( n =9) and bipolar disorder ( n =3) completed an eight-week, high-velocity circuit resistance training, performed twice a week on the computerized Keiser pneumatic exercise machines, including extensive pre/post physical performance testing. Participants showed significant increases in strength and power in all major muscle groups. There were significant positive cognitive changes, objectively measured with the Brief Assessment of Cognition Scale: improvement in composite scores, processing speed and symbol coding. Calgary Depression Scale for Schizophrenia and Positive and Negative Syndrome Scale total scores improved significantly. There were large gains in neuromuscular performance that have functional implications. The cognitive domains that showed the greatest improvements (memory and processing speed) are most highly predictive of disability in schizophrenia. Moreover, the improvements seen in depression suggest this type of exercise intervention may be a valuable add-on therapy for bipolar depression. <s> BIB004 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Other Themes of Studies <s> BackgroundIndividuals with a severe mental illness (SMI) are at least two times more likely to suffer from metabolic co-morbidities, leading to excessive and premature deaths. In spite of the many physical and mental health benefits of physical activity (PA), individuals with SMI are less physically active and more sedentary than the general population. One key component towards increasing the acceptability, adoption, and long-term adherence to PA is to understand, tailor and incorporate the PA preferences of individuals. Therefore, the objective of this study was to determine if there are differences in PA preferences among individuals diagnosed with different psychiatric disorders, in particular schizophrenia or bipolar disorder (BD), and to identify PA design features that participants would prefer.MethodsParticipants with schizophrenia (n = 113) or BD (n = 60) completed a survey assessing their PA preferences.ResultsThere were no statistical between-group differences on any preferred PA program design feature between those diagnosed with schizophrenia or BD. As such, participants with either diagnosis were collapsed into one group in order to report PA preferences. 
Walking (59.5 %) at moderate intensity (61.3 %) was the most popular activity and participants were receptive to using self-monitoring tools (59.0 %). Participants were also interested in incorporating strength and resistance training (58.5 %) into their PA program and preferred some level of regular contact with a fitness specialist (66.0 %).ConclusionsThese findings can be used to tailor a physical activity intervention for adults with schizophrenia or BD. Since participants with schizophrenia or BD do not differ in PA program preferences, the preferred features may have broad applicability for individuals with any SMI. <s> BIB005 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Other Themes of Studies <s> Previous qualitative studies have found that exercise may facilitate symptomatic and functional recovery in people with long-term schizophrenia. This study examined the perceived effects of exercise as experienced by people in the early stages of psychosis, and explored which aspects of an exercise intervention facilitated or hindered their engagement. Nineteen semi-structured interviews were conducted with early intervention service users who had participated in a 10-week exercise intervention. Interviews discussed people’s incentives and barriers to exercise, short- and long-term effects, and opinions on optimal interventions. A thematic analysis was applied to determine the prevailing themes. The intervention was perceived as beneficial and engaging for participants. The main themes were (a) exercise alleviating psychiatric symptoms, (b) improved self-perceptions following exercise, and (c) factors determining exercise participation, with three respective sub-themes for each. Participants explained how exercise had improved their mental health, improved their confidence and given them a sense of achievement. Autonomy and social support were identified as critical factors for effectively engaging people with first-episode psychosis in moderate-to-vigorous exercise. Implementing such programs in early intervention services may lead to better physical health, symptom management and social functioning among service users. Current Controlled Trials ISRCTN09150095. Registered 10 December 2013. <s> BIB006
Other major themes identified in the reviewed studies were participant preferences, such as exercise choice, structure, and session length (BIB003 BIB001 BIB005), which helped improve compliance in one study BIB001. Another theme was symptom severity (Bonsaksen & Lerdal, 2012; BIB006 BIB001 BIB002; Powers et al., 2015), and exercise was effective in reducing the severity of mental illness symptoms in three studies (BIB006 BIB001; Powers et al., 2015). Cognitive ability was also a prominent sub-theme and improved with exercise (Oertel-Knochel et al., 2014; BIB004).
Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Measurement Tools Used in Studies <s> This study investigated qualitatively the experiences of men who took part in a 10 week integrated exercise/psychosocial mental health promotion programme, “Back of the Net” (BTN). 15 participants who completed the BTN programme were recruited to participate in either a focus group discussion (N = 9) or individual interview (N = 6). A thematic analytic approach was employed to identify key themes in the data. Results indicated that participants felt that football was a positive means of engaging men in a mental health promotion program. Perceived benefits experienced included perceptions of mastery, social support, positive affect and changes in daily behaviour. The findings support the value of developing gender specific mental health interventions to both access and engage young men. <s> BIB001 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Measurement Tools Used in Studies <s> OBJECTIVE ::: To evaluate the efficacy of a home-based exercise programme added to usual medical care for the treatment of depression. ::: ::: ::: DESIGN ::: Prospective, two group parallel, randomised controlled study. ::: ::: ::: SETTING ::: Community-based. ::: ::: ::: PATIENTS ::: 200 adults aged 50 years or older deemed to be currently suffering from a clinical depressive illness and under the care of a general practitioner. ::: ::: ::: INTERVENTIONS ::: Participants were randomly allocated to either usual medical care alone (control) or usual medical care plus physical activity (intervention). The intervention consisted of a 12-week home-based programme to promote physical activity at a level that meets recently published guidelines for exercise in people aged 65 years or over. ::: ::: ::: MAIN OUTCOME MEASUREMENTS ::: Severity of depression was measured with the structured interview guide for the Montgomery-Asberg Depression Rating Scale (SIGMA), and depression status was assessed with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I). ::: ::: ::: RESULTS ::: Remission of depressive illness was similar in both the usual care (59%) and exercise groups (63%; OR = 1.18, 95% CI 0.61 to 2.30) at the end of the 12-week intervention, and again at the 52-week follow-up (67% vs 68%) (OR=1.07, 95% CI 0.56 to 2.02). There was no change in objective measures of fitness over the 12-week intervention among the exercise group. ::: ::: ::: CONCLUSIONS ::: This home-based physical activity intervention failed to enhance fitness and did not ameliorate depressive symptoms in older adults, possibly due to a lack of ongoing supervision to ensure compliance and optimal engagement. <s> BIB002 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Measurement Tools Used in Studies <s> The health benefits of exercise are well established, yet individuals with serious mental illness (SMI) have a shorter life expectancy due in large part to physical health complications associated with poor diet and lack of exercise. There is a paucity of research examining exercise in this population with the majority of studies having examined interventions with limited feasibility and sustainability. 
Before developing an intervention, a thorough exploration of client and clinician perspectives on exercise and its associated barriers is warranted. Twelve clients and fourteen clinicians participated in focus groups aimed at examining exercise, barriers, incentives, and attitudes about walking groups. Results indicated that clients and clinicians identified walking as the primary form of exercise, yet barriers impeded consistent participation. Distinct themes arose between groups; however, both clients and clinicians reported interest in a combination group/pedometer based walking program for individuals with SMI. Future research should consider examining walking programs for this population. <s> BIB003 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Measurement Tools Used in Studies <s> Abstract We developed a physical exercise intervention aimed at improving multiple determinants of physical performance in severe mental illness. A sample of 12 (9M, 3F) overweight or obese community-dwelling patients with schizophrenia ( n =9) and bipolar disorder ( n =3) completed an eight-week, high-velocity circuit resistance training, performed twice a week on the computerized Keiser pneumatic exercise machines, including extensive pre/post physical performance testing. Participants showed significant increases in strength and power in all major muscle groups. There were significant positive cognitive changes, objectively measured with the Brief Assessment of Cognition Scale: improvement in composite scores, processing speed and symbol coding. Calgary Depression Scale for Schizophrenia and Positive and Negative Syndrome Scale total scores improved significantly. There were large gains in neuromuscular performance that have functional implications. The cognitive domains that showed the greatest improvements (memory and processing speed) are most highly predictive of disability in schizophrenia. Moreover, the improvements seen in depression suggest this type of exercise intervention may be a valuable add-on therapy for bipolar depression. <s> BIB004 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Measurement Tools Used in Studies <s> Previous qualitative studies have found that exercise may facilitate symptomatic and functional recovery in people with long-term schizophrenia. This study examined the perceived effects of exercise as experienced by people in the early stages of psychosis, and explored which aspects of an exercise intervention facilitated or hindered their engagement. Nineteen semi-structured interviews were conducted with early intervention service users who had participated in a 10-week exercise intervention. Interviews discussed people’s incentives and barriers to exercise, short- and long-term effects, and opinions on optimal interventions. A thematic analysis was applied to determine the prevailing themes. The intervention was perceived as beneficial and engaging for participants. The main themes were (a) exercise alleviating psychiatric symptoms, (b) improved self-perceptions following exercise, and (c) factors determining exercise participation, with three respective sub-themes for each. Participants explained how exercise had improved their mental health, improved their confidence and given them a sense of achievement. 
Autonomy and social support were identified as critical factors for effectively engaging people with first-episode psychosis in moderate-to-vigorous exercise. Implementing such programs in early intervention services may lead to better physical health, symptom management and social functioning among service users. Current Controlled Trials ISRCTN09150095. Registered 10 December 2013. <s> BIB005 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Measurement Tools Used in Studies <s> BackgroundIndividuals with a severe mental illness (SMI) are at least two times more likely to suffer from metabolic co-morbidities, leading to excessive and premature deaths. In spite of the many physical and mental health benefits of physical activity (PA), individuals with SMI are less physically active and more sedentary than the general population. One key component towards increasing the acceptability, adoption, and long-term adherence to PA is to understand, tailor and incorporate the PA preferences of individuals. Therefore, the objective of this study was to determine if there are differences in PA preferences among individuals diagnosed with different psychiatric disorders, in particular schizophrenia or bipolar disorder (BD), and to identify PA design features that participants would prefer.MethodsParticipants with schizophrenia (n = 113) or BD (n = 60) completed a survey assessing their PA preferences.ResultsThere were no statistical between-group differences on any preferred PA program design feature between those diagnosed with schizophrenia or BD. As such, participants with either diagnosis were collapsed into one group in order to report PA preferences. Walking (59.5 %) at moderate intensity (61.3 %) was the most popular activity and participants were receptive to using self-monitoring tools (59.0 %). Participants were also interested in incorporating strength and resistance training (58.5 %) into their PA program and preferred some level of regular contact with a fitness specialist (66.0 %).ConclusionsThese findings can be used to tailor a physical activity intervention for adults with schizophrenia or BD. Since participants with schizophrenia or BD do not differ in PA program preferences, the preferred features may have broad applicability for individuals with any SMI. <s> BIB006 </s> Effects of Aerobic Exercise on Patients with Mental Illness in a Veterans Inpatient Psychiatric Unit: A Review of the Literature <s> Measurement Tools Used in Studies <s> Abstract Introduction Measurement of symptoms domains and their response to treatment in relative isolation from diagnosed mental disorders has gained new urgency, as reflected by the National Institute of Mental Health's introduction of the Research Domain Criteria (RDoC). The Snaith Hamilton Pleasure Scale (SHAPS) and the Motivation and Energy Inventory (MEI) are two scales measuring positive valence symptoms. We evaluated the effect of exercise on positive valence symptoms of Major Depressive Disorder (MDD). Methods Subjects in the Treatment with Exercise Augmentation for Depression (TREAD) study completed self-reported SHAPS and MEI during 12 weeks of exercise augmentation for depression. We evaluated the effect of exercise on SHAPS and MEI scores, and whether the changes were related to overall MDD severity measured with the Quick Inventory of Depression Symptomatology (QIDS). Results SHAPS and MEI scores significantly improved with exercise. 
MEI score change had larger effect size and greater correlation with change in QIDS score. MEI also showed significant moderator and mediator effects of exercise in MDD. Limitations Generalizability to other treatments is limited. This study lacked other bio-behavioral markers that would enhance understanding of the relationship of RDoC and the measures used. Conclusions Positive valence symptoms improve with exercise treatment for depression, and this change correlates well with overall outcome. Motivation and energy may be more clinically relevant to outcome of exercise treatment than anhedonia. <s> BIB007
Major variables were measured using several strategies that involved multiple measurement tools. Depression diagnosis was established according to DSM-IV criteria using the Structured Clinical Interview for DSM-IV Axis I Disorders, while depression severity was measured with the structured interview guide for the Montgomery-Asberg Depression Rating Scale BIB002. Cognitive performance was captured with the Brief Assessment of Cognition in Schizophrenia (BACS), which also measures items such as symbol coding (Oertel-Knochel et al., 2014). The many other measures and measurement tools found in the review of the literature are listed in Table 2.

Mental illness is debilitating and affects far more than a person's psyche; it also erodes social abilities, autonomy, and work functioning (Oertel-Knochel et al., 2014). Exercise was found to be a very effective activity that positively influenced participants' mental health across diagnoses of depression (Bonsaksen & Lerdal, 2012; BIB003 BIB001 Oertel-Knochel et al., 2014; BIB007), anxiety (Bonsaksen & Lerdal, 2012; Oertel-Knochel et al., 2014), schizophrenia disorders (BIB003 BIB005 Oertel-Knochel et al., 2014; BIB004 BIB006), bipolar disorder (Browne et al., 2015; BIB004 BIB006), and PTSD (Powers et al., 2015). Significant improvements were seen in subjects with schizophrenia disorders (BIB005 Oertel-Knochel et al., 2014; BIB004 BIB006). Both positive symptoms (hallucinations and paranoia) and negative symptoms (amotivation and anhedonia) in subjects with schizophrenia were relieved by the exercise intervention BIB005. Improvements in processing speed, working memory, and verbal learning were found as a result of physical exercise and/or relaxation therapy (Oertel-Knochel et al., 2014). High-velocity circuit resistance training (HVCR) can also benefit those with schizophrenia: an eight-week program improved subjects' neuromuscular performance and produced positive changes in cognitive scores on the Brief Assessment of Cognition, including verbal memory, symbol coding, and verbal fluency BIB004. Depression was relieved both by exercise as an intervention (BIB003 Oertel-Knochel et al., 2014; BIB007) and by exercise as an opportunity for socialization BIB001. Two studies dedicated to depression found opposite results with the exercise intervention; neither was located in an inpatient unit. One focused on adults aged 50 years and older and was based solely in the subjects' homes BIB002, whereas the study with more positive results enrolled adults aged 18-70 years, supervised them in a training facility for the initial two weeks, and then continued the intervention at home BIB007. Subjects did worse when the intervention took place solely at home with supervision limited to a booster telephone call; the lack of observation may have reduced the intensity of the exercise, which in turn may have affected the results. Nevertheless, exercise produced improvements in motivation as well as in depression severity, which led to better social motivation for subjects BIB007. Exercise was also beneficial for enhancing functioning and cognitive abilities (Oertel-Knochel et al., 2014; BIB004). The greatest cognitive improvements and positive mood changes were found in studies that used a facility with a directed program (Oertel-Knochel et al., 2014; BIB004).
These programs included on-staff researchers, coaches, and physiologists who encouraged subjects to exert appropriate effort and complete each day's full workout (Oertel-Knochel et al., 2014; BIB004). Programs were organized at clean, state-of-the-art athletic facilities, which subjects reported helped them continue the program BIB001 BIB004. Structured physical activity at a training facility (BIB001 BIB004 BIB007) or in an inpatient setting (Oertel-Knochel et al., 2014) was found to be more effective than unobserved home-based programs BIB002. Overall, the research outcomes supported the promotion of exercise as an intervention for mental illness (BIB005 BIB001; Oertel-Knochel et al., 2014; Powers et al., 2015; BIB004 BIB007). Physical fitness showed a multi-layered positive effect for people with mental illness: an increased sense of well-being, greater motivation for social interaction, enthusiasm for the activity, and improved self-perception (BIB005 BIB001; Oertel-Knochel et al., 2014; Powers et al., 2015; BIB004 BIB007). Exercise, in particular aerobic exercise, can influence plasma BDNF levels and have a profound positive impact on PTSD symptom management (Powers et al., 2015). Other contributors to better quality of life were gains in cognitive skills, including working memory, verbal learning, and visual learning, in subjects with schizophrenia (Oertel-Knochel et al., 2014). Exercise lowered symptoms of depression and anxiety while increasing daily functioning, even in inpatient adults with severe mental illness (Bonsaksen & Lerdal, 2012). Other important outcomes were subjects' perspectives on the intervention and their perceived barriers to exercise BIB003. Some subjects found exercise very useful for relieving daily stressors and looked forward to the activity BIB001, while others in the early stages of psychosis found it therapeutic for their symptoms BIB005.
English as an International Language: A Review of the Literature <s> Speaker and ELT practitioners' attitudes towards EIL <s> In this paper, the author argues that when teaching English as an international language, educators should recognize the value of including topics that deal with the local culture, support the selection of a methodology that is appropriate to the local educational context, and recognize the strengths of bilingual teachers of English. Based on the results of a questionnaire given to Chilean teachers of English, the author maintains that in Chile there is growing support for such practices and attitudes. Nowadays many countries where English is a required subject are confronting similar questions regarding the use of the local culture in ELT. <s> BIB001 </s> English as an International Language: A Review of the Literature <s> Speaker and ELT practitioners' attitudes towards EIL <s> TESOL Quarterly invites commentary on current trends or practices in the TESOL profession. It also welcomes responses or rebuttals to any articles or remarks published here in the Forum or elsewhere in the Quarterly. <s> BIB002
One of the main challenges the EIL model seems to face is the way it is perceived, not by students or authorities, but by teachers themselves. One research study of the attitudes of forty-seven pre-service teachers in Turkey found that, although these pre-service teachers considered intelligibility to be the central goal of English learning, they believed it best to teach a commonly recognized standard such as American or British English. They also favored the teaching of prestigious native varieties and disregarded non-native varieties as possible alternatives in the ELT classroom. They preferred instructional materials written in the American and British varieties. Finally, they showed very low tolerance of errors, understood as forms that deviate from the standard varieties. In a similar study, Fauzia and Qismullah (2009) collected data from ten informants from Asia, six of whom were English teachers, and found that most participants' attitudes towards their own accents in English were favorable. However, when asked which varieties of English they liked most, only one answered that her own accent was her favorite; the others responded that they preferred the British and American varieties. When asked which varieties of English they thought should be taught, informants responded 'Standard English' because they considered it to be original and correct English. This is too small a sample to be considered representative. Still, it is somewhat intriguing that even though speakers are aware of and comfortable with their own accents, they still champion the teaching of standard varieties. This double standard is what Jenkins (2000, p. 160) questions when she asserts that 'There really is no justification for doggedly persisting in referring to an item as 'an error' if the vast majority of the world's English speakers produce and understand it'. In the case of this study, it seems clear that the informants' accents are probably very common in the regions of Asia they come from, yet they still look up to prestige varieties as a desired outcome, even though they themselves exemplify how difficult that goal is to attain. On the one hand, teachers accept that effective communication and intelligibility are the main goals when conversing in English; on the other, standard varieties are kept at the core of English teaching, often dooming learners to the predetermined failure of not achieving the native-like proficiency targeted by the EFL model. In a similar fashion, BIB002 interviewed eighteen non-native teachers of English (NNTEs) about the way they perceived their own English in relation to the standard. She found that informants described Standard English as good, correct, proficient and competent. On the other hand, a non-native accent was mostly described as the opposite: not good, incorrect, strong and deficient. BIB001 found similar results studying the attitudes of Chilean teachers towards EIL. In this sense, Jenkins (2007, p. 141) elaborates further and emphasizes how difficult it is for teachers to '…disassociate notions of correctness from 'nativeness' and to assess intelligibility and acceptability from anything but a NS (Native Speaker) standpoint…' In this respect, the identities of teachers are crucial.
Teachers, as individuals who have spent years learning the language, may feel threatened by the fact that attaining the level of perfection they have long aimed at is no longer the only desirable goal. In light of this, it is not surprising that language teachers seem reluctant to accept a model of English learning that displaces linguistic 'perfection' from the center of language learning. What some teachers may fail to appreciate, however, is that the objectives for learning a language are diverse. This is what the EIL model brings to the table: a more diverse and inclusive approach that equips learners with tools to cope with the communicative demands of the rapidly changing character of English in international settings.
Ambient Backscatter Communications: A Contemporary Survey <s> <s> This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> <s> Low-cost Radio Frequency Identification (RFID) tags affixed to consumer items as smart labels are emerging as one of the most pervasive computing technology in history. This can have huge security implications. The present article surveys the most important technical security challenges of RFID systems. We first provide a brief summary of the most relevant standards related to this technology. Next, we present an overview about the state of the art on RFID security, addressing both the functional aspects and the security risks and threats associated to its use. Finally, we analyze the main security solutions proposed until date. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> <s> Sensor collision (interference) is studied in a large network of low bit-rate sensors that communicate via backscatter, i.e. modulate the reflection of a common carrier transmitted by a central reader. Closed-form analysis is provided, quantifying sensor collision (interference) in high-density, backscatter sensor networks (BSN), as a function of number of tags and aggregate bandwidth. Analysis is applicable to a broad class of sensor subcarrier modulations, propagation environments and reader antenna directivity patterns. It is discovered that anti-collision performance in high-density backscatter sensor networks is feasible provided that appropriate modulation is used at each sensor. That is due to the round-trip nature of backscatter communication as well as the extended target range, which both impose stringent requirements on spectrum efficiency, not easily met by all modulations. Furthermore, aggregate bandwidth savings for given anti-collision performance are quantified, when simple division techniques on subcarrier (modulating) frequency and space (via moderately directive hub antenna) are combined. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> <s> Backscatter radio - wireless communication by modulating signals scattered from a transponder (RF tag) - is fundamentally different from conventional radio because it involves two distinct links: the power-up link for powering passive RF tags, and the backscatter link for describing backscatter communication. Because of severe power constraints on the RF tag, a thorough knowledge of the backscatter channel is necessary to maximize backscatter-radio and radio-frequency identification (RFID) system performance. This article presents four link budgets that account for the major propagation mechanisms of the backscatter channel, along with a detailed discussion of each. 
Use of the link budgets is demonstrated by a practical UHF RFID portal example. The benefits of future 5.8 GHz multi-antenna backscatter-radio systems are shown. An intuitive analogy for understanding the antenna polarization of RF tag systems is presented. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> <s> RFID technologies have revolutionized the asset tracking industry, with applications ranging from automated checkout to monitoring the medication intakes of elderlies. In all these applications, fast, and in some cases energy efficient, tag reading is desirable, especially with increasing tag numbers. In practice, tag reading protocols face many problems. A key one being tag collision, which occurs when multiple tags reply simultaneously to a reader. As a result, an RFID reader experiences low tag reading performance, and wastes valuable energy. Therefore, it is important that RFID application developers are aware of current tag reading protocols. To this end, this paper surveys, classifies, and compares state-of-the-art tag reading protocols. Moreover, it presents research directions for existing and future tag reading protocols. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> <s> Backscatter communication offers an ultra-low power alternative to active radios in urban sensing deployments - communication is powered by a reader, thereby making it virtually "free". While backscatter communication has largely been used for extremely small amounts of data transfer (e.g. a 12 byte EPC identifier from an RFID tag), sensors need to use backscatter for continuous and high-volume sensor data transfer. To address this need, we describe a novel link layer that exploits unique characteristics of backscatter communication to optimize throughput. Our system offers several optimizations including 1) understanding of multi-path self-interference characteristics and link metrics that capture these characteristics, 2) design of novel mobility-aware probing techniques that use backscatter link signatures to determine when to probe the channel, 3) bitrate selection algorithms that use link metrics to determine the optimal bitrate, and 4) channel selection mechanism that optimize throughput while remaining compliant within FCC regulations. Our results show upto 3x increase in goodput over other mechanisms across a wide range of channel conditions, scales, and mobility scenarios. <s> BIB006
As a result, this technique has found many useful applications in practice such as radio-frequency identification (RFID), tracking devices, remote switches, medical telemetry, and low-cost sensor networks BIB003 , BIB004 . However, due to some limitations BIB001 BIB002 BIB005 , conventional backscatter communications cannot be widely implemented for data-intensive wireless communications systems BIB006 . First, traditional backscatter communications require backscatter transmitters to be placed near their RF sources, thereby limiting the device usage and the coverage area. Second, in conventional backscatter communications, the backscatter receiver and the RF source are located in the same device, i.e., the reader, which can cause self-interference between the receive and transmit antennas, thereby reducing the communication performance. Moreover, conventional backscatter communications systems operate passively, i.e., backscatter transmitters only transmit data when interrogated by backscatter receivers. Thus, they are adopted only in a limited set of applications. Recently, ambient backscatter has emerged as a promising technology for low-energy communication systems that can effectively address the aforementioned limitations of conventional backscatter communications systems. In ambient backscatter communications systems (ABCSs), backscatter devices can communicate with each other by utilizing surrounding signals broadcast from ambient RF sources, e.g., TV towers, FM towers, cellular base stations, and Wi-Fi access points (APs). In particular, in an ABCS, the backscatter transmitter can transmit data to the backscatter receiver by modulating and reflecting surrounding ambient signals. Hence, the communication in the ABCS does not require dedicated frequency spectrum, which is scarce and expensive. Based on the received signals from the backscatter transmitter and the RF source or carrier emitter, the receiver can then decode and obtain useful information from the transmitter. By separating the carrier emitter and the backscatter receiver, the number of RF components is minimized at backscatter devices, and the devices can operate actively, i.e., backscatter transmitters can transmit data without initiation from receivers whenever they harvest sufficient energy from the RF source. This capability allows ABCSs to be adopted widely in many practical applications. Although ambient backscatter communications have great potential for future low-energy communication systems, especially the Internet-of-Things (IoT), they still face many challenges. In particular, unlike conventional backscatter communications systems, the transmission efficiency of an ABCS depends greatly on the ambient source, including its type (e.g., TV or Wi-Fi signal), its location, and the environment (e.g., indoor or outdoor). Therefore, ABCSs have to be designed specifically for particular ambient sources. Furthermore, due to the dynamics of ambient signals, data transmission scheduling for backscatter devices to maximize the usability of ambient signals is another important protocol design issue. Additionally, as ABCSs use ambient signals from licensed sources, their communication protocols have to guarantee not to interfere with the transmissions of licensed users. Therefore, considerable research efforts have been reported to improve ABCSs in various aspects.
This paper is the first to provide a comprehensive overview of the state-of-the-art research and technological developments on the architectures, protocols, and applications of emerging ABCSs. The key features and objectives of this paper are • To provide a fundamental background for general readers to understand basic concepts, operation methods and mechanisms, and applications of ABCSs, • To summarize advanced design techniques related to architectures, hardware designs, network protocols, standards, and solutions of ABCSs, and • To discuss challenges, open issues, and potential future research directions. The rest of this paper is organized as follows. Section II provides fundamental knowledge about modulated backscatter communications including operation mechanism, antenna design, channel coding, and modulation schemes. Sections III and IV describe general architectures of bistatic backscatter communication systems (BBCSs) and ABCSs, respectively. We also review many research works in the literature aiming to address various existing problems in ABCSs, e.g., network design, scheduling, power management, and multi-access. Additionally, some potential applications are discussed in Sections III and IV. Then, emerging backscatter communications systems are reviewed in Section V. Section VI discusses challenges and future directions of ABCSs. Finally, we summarize and conclude the paper in Section VII. The abbreviations in this article are summarized in Table I .
Ambient Backscatter Communications: A Contemporary Survey <s> A. Energy Harvesting for Green Communications Networks <s> Wireless power transfer (WPT) technologies have been widely used in many areas, e.g., the charging of electric toothbrush, mobile phones, and electric vehicles. This paper introduces fundamental principles of three WPT technologies, i.e., inductive coupling-based WPT, magnetic resonant coupling-based WPT, and electromagnetic radiation-based WPT, together with discussions of their strengths and weaknesses. Main research themes are then presented, i.e., improving the transmission efficiency and distance, and designing multiple transmitters/receivers. The state-of-the-art techniques are reviewed and categorised. Several WPT applications are described. Open research challenges are then presented with a brief discussion of potential roadmap. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Energy Harvesting for Green Communications Networks <s> Current wireless and cellular networks are destined to undergo a significant change in the transition to the next generation of network technology. The so called wireless powered communication network (WPCN) has been recently emerging as a promising candidate for achieving the target performance of future networks. According to this paradigm, nodes in a WPCN can be equipped with hardware capable of harvesting energy from wireless signals, that is, their battery can be ubiquitously replenished without physical connections. Recent technological advances in the field of wireless power harvesting and transfer are providing strong evidence of the feasibility of this vision, especially for low-power devices. The future deployment of WPCN is more and more concretely foreseen. The aim of this article is therefore to provide a comprehensive review of the basics and backgrounds of WPCN, current major developments, and open research issues. In particular, we first give an overview of WPCN and its structure. We then present three major advanced approaches whose adoption could increase the performance of future WPCN: backscatter communications with energy harvesting; duty-cycle based energy management; and transceiver design for self-sustainable communications. We discuss implementation perspectives and tools for WPCN. Finally, we outline open research problems for WPCN. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Energy Harvesting for Green Communications Networks <s> Energy efficiency will play a crucial role in future communication systems and has become a main design target for all 5G radio access networks. The high operational costs and impossibility of replacing or recharging wireless device batteries in multiple scenarios, such as wireless medical sensors inside the human body, call for a new technology by which wireless devices can harvest energy from the environment via capturing ambient RF signals. SWIPT has emerged as a powerful means to address this issue. In this article, we survey the current architectures and enabling technologies for SWIPT and identify technical challenges to implement SWIPT. Following an overview of enabling technologies for SWIPT and SWIPT-assisted wireless systems, we showcase a novel SWIPT-supported power allocation mechanism for D2D communications to illustrate the importance of the application of SWIPT. As an ending note, we point out some future research directions to encourage and motivate more research efforts on SWIPT. 
<s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Energy Harvesting for Green Communications Networks <s> Wireless energy harvesting (EH) is a promising solution to prolong lifetime of power-constrained networks such as military and sensor networks. The high sensitivity of energy transfer to signal decay due to path loss and fading, promotes multi-antenna techniques like beamforming as the candidate transmission scheme for EH networks. Exploiting beamforming in EH networks has gained overwhelming interest, and lot of literature has appeared recently regarding this topic. The objective of this paper is to point out the state-of-the-art research activity on beamforming implementation in EH wireless networks. We first review the basic concepts and architecture of EH wireless networks. In addition, we also discuss the effects of beamforming transmission scheme on system performance in EH wireless communication. Furthermore, we present a comprehensive survey of multi-antenna EH communications. We cover the supporting network architectures like broadcasting, relay, and cognitive radio networks with the various beamforming deployment within the network architecture. We classify the different beamforming approaches in each network topology according to its design objective such as increasing the throughput, enhancing the energy transfer efficiency, and minimizing the total transmit power, with paying special attention to exploiting the physical layer security. We also survey major advances as well as open issues, challenges, and future research directions in multi-antenna EH communications. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Energy Harvesting for Green Communications Networks <s> Initial efforts on wireless power transfer (WPT) have concentrated toward long-distance transmission and high power applications. Nonetheless, the lower achievable transmission efficiency and potential health concerns arising due to high power applications, have caused limitations in their further developments. Due to tremendous energy consumption growth with ever-increasing connected devices, alternative wireless information and power transfer techniques have been important not only for theoretical research but also for the operational costs saving and for the sustainable growth of wireless communications. In this regard, radio frequency energy harvesting (RF-EH) for a wireless communications system presents a new paradigm that allows wireless nodes to recharge their batteries from the RF signals instead of fixed power grids and the traditional energy sources. In this approach, the RF energy is harvested from ambient electromagnetic sources or from the sources that directionally transmit RF energy for EH purposes. Notable research activities and major advances have occurred over the last decade in this direction. Thus, this paper provides a comprehensive survey of the state-of-art techniques, based on advances and open issues presented by simultaneous wireless information and power transfer (SWIPT) and WPT assisted technologies. More specifically, in contrast to the existing works, this paper identifies and provides a detailed description of various potential emerging technologies for the fifth generation communications with SWIPT/WPT. Moreover, we provide some interesting research challenges and recommendations with the objective of stimulating future research in this emerging domain. <s> BIB005
Recently, the energy harvesting (EH) technique has gained a lot of attention from both academia and industry due to its promising features for green communications networks, e.g., WSNs and IoT. The key principle of EH is that it allows wireless devices to harvest energy from RF signals to support their operations. There are three main EH schemes: wireless power transfer (WPT), wireless-powered communication network (WPCN), and simultaneous wireless information and power transfer (SWIPT), as shown in Fig. 1 (paradigms for wireless energy harvesting schemes BIB004 ). • Wireless power transfer (WPT): In this scheme, the power transmitter simply transmits energy to the user devices without information. The energy is used to charge the devices' batteries. WPT has many practical applications such as home electronics, medical implants, electric vehicles, and the wireless grid BIB001 . • Wireless-powered communication network (WPCN): This scheme allows the user devices to harvest energy, and then use the energy to actively transmit data. In this context, wireless devices can be developed for future applications such as IoT or, more generally, the Internet of Everything BIB002 . • Simultaneous wireless information and power transfer (SWIPT): By using a hybrid design, in SWIPT, the power transmitter can transfer energy and information wirelessly to the user devices at the same time. The users can then choose to harvest energy or decode information sent from the power transmitter by simply switching between harvesting and decoding modules, thereby achieving a high energy-information transmission efficiency BIB004 . Although they possess many advantages, these energy harvesting schemes still have limitations when adopted in low-cost and low-power networks, e.g., WSNs and IoT. For example, in WPCNs, the users may require a long time to harvest enough RF energy to transmit data, thereby limiting the system performance. Resource scheduling and M2M communications are also major issues for SWIPT BIB003 , BIB005 . More importantly, equipping devices with active RF transmission components increases the cost and complexity of their circuits. This may not be suitable for large-scale and low-cost wireless communications networks. The ABCS is introduced as an alternative solution to significantly improve the network performance. As shown in Table I , although many surveys in the literature have focused on WPT, WPCN, and SWIPT, there is none for ABCSs.
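To make the WPCN limitation above concrete, the following is a minimal back-of-the-envelope sketch (in Python) of how long a wireless-powered device may need to harvest before it can afford one active transmission. All numbers here (EIRP, distance, antenna gain, RF-to-DC efficiency, packet energy) are illustrative assumptions, not values taken from the cited works.

```python
import math

def harvested_power_w(p_tx_w, g_tx, g_rx, freq_hz, dist_m, efficiency):
    """Free-space Friis estimate of the DC power captured by an energy
    harvester (all values passed in below are illustrative assumptions)."""
    lam = 3e8 / freq_hz
    p_rx = p_tx_w * g_tx * g_rx * (lam / (4 * math.pi * dist_m)) ** 2
    return efficiency * p_rx

# A 3 W EIRP source (p_tx * g_tx = 3 W) at 915 MHz, 10 m away,
# a 2 dBi harvesting antenna, and a 30% RF-to-DC conversion efficiency.
p_dc = harvested_power_w(3.0, 1.0, 1.58, 915e6, 10.0, 0.3)

# Energy needed for one short active-radio packet, e.g., 20 mW for 2 ms.
e_packet_j = 20e-3 * 2e-3
print(f"harvested DC power: {p_dc * 1e6:.1f} uW")
print(f"charging time per packet: {e_packet_j / p_dc:.1f} s")
```

Even under these optimistic free-space assumptions, a device 10 m from the source needs several seconds of charging per short packet, which motivates the passive, reflection-based approach of ABCSs.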
Ambient Backscatter Communications: A Contemporary Survey <s> B. Backscatter Communications Systems <s> Scatter radio achieves communication by reflection and requires low-cost and low-power RF front-ends. However, its use in wireless sensor networks (WSNs) is limited, since commercial scatter radio (e.g. RFID) offers short ranges of a few tens of meters. This work redesigns scatter radio systems and maximizes range through non-classic bistatic architectures: the carrier emitter is detached from the reader. It is shown that conventional radio receivers may show a potential 3dB performance loss, since they do not exploit the correct signal model for scatter radio links. Receivers for on-off-keying (OOK) and frequency-shift keying (FSK) that overcome the frequency offset between the carrier emitter and the reader are presented. Additionally, non-coherent designs are also offered. This work emphasizes that sensor tag design should accompany receiver design. Impact of important parameters such as the antenna structural mode are presented through bit error rate (BER) results. Experimental measurements corroborate the long-range ability of bistatic radio; ranges of up to 130 meters with 20 milliwatts of carrier power are experimentally demonstrated, with commodity software radio and no directional antennas. Therefore, bistatic scatter radio may be viewed as a key enabling technology for large-scale, low-cost and low-power WSNs. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Backscatter Communications Systems <s> Backscatter radio communication has become a newly emerging technique for low-rate, low-power and large-scale wireless sensor networks. As this promising technology enables a long-range communication for sensors with low power in a distributed area, it is desirable to support wireless powered communication networks (WPCNs) that experience doubly near-far problem. In a backscatter radio based WPCN, users harvest energy from both the signal broadcast by the hybrid access point and the carrier signal transmitted by the carrier emitter in the downlink and transmit their own information in a passive way via the reflection of the carrier signal using frequency-shift keying modulation in the uplink. We characterize the energy-free condition and the signal-to-noise ratio (SNR) outage zone in a backscatter radio based WPCN. Numerical results demonstrate that the backscatter radio based WPCN achieves an increased long-range coverage and a diminished SNR outage zone compared to the active radio based WPCNs. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Backscatter Communications Systems <s> Ambient backscatter communication technology has been introduced recently, and is quickly becoming a promising choice for self-sustainable communication systems, as an external power supply or a dedicated carrier emitter is not required. By leveraging existing RF signal resources, ambient backscatter technology can support sustainable and independent communications and consequently open up a whole new set of applications that facilitate Internet of things (IoT). In this article, we study an integration of ambient backscatter with wireless powered communication networks (WPCNs). We first present an overview of backscatter communication systems with an emphasis on the emerging ambient backscatter technology. 
Then we propose a novel hybrid transmitter design by combining the advantages of both ambient backscatter and wireless powered communications. Furthermore, in the cognitive radio environment, we introduce a multiple access scheme to coordinate hybrid data transmissions. The performance evaluation shows that the hybrid transmitter outperforms traditional designs. In addition, we discuss open issues related to ambient backscatter networking. <s> BIB003
Backscatter communications systems can be classified into three major types based on their architectures: monostatic backscatter communications systems (MBCSs), BBCSs, and ABCSs, as shown in Fig. 2. 1) Monostatic Backscatter Communications Systems: In an MBCS, e.g., an RFID system, there are two main components: a backscatter transmitter, e.g., an RFID tag, and a reader, as shown in Fig. 2(a) . The reader consists of, in the same device, an RF source and a backscatter receiver. The RF source generates RF signals to activate the tag. Then, the backscatter transmitter modulates and reflects the RF signals sent from the RF source to transmit its data to the backscatter receiver. As the RF source and the backscatter receiver are placed in the same device, i.e., the tag reader, the modulated signals may suffer from a round-trip path loss BIB001 . Moreover, MBCSs can be affected by the doubly near-far problem. In particular, due to signal loss from the RF source to the backscatter transmitter, and vice versa, if a backscatter transmitter is located far from the reader, it can experience a higher energy outage probability and a lower modulated backscatter signal strength BIB002 . MBCSs are mainly adopted for short-range RFID applications. 2) Bistatic Backscatter Communications Systems: Different from MBCSs, in a BBCS, the RF source, i.e., the carrier emitter, and the backscatter receiver are separated, as shown in Fig. 2(b) . As such, BBCSs can avoid the round-trip path loss of MBCSs. Additionally, the performance of a BBCS can be improved dramatically by placing carrier emitters at optimal locations. Specifically, one centralized backscatter receiver can be located in the field while multiple carrier emitters are well placed around backscatter transmitters. Consequently, the overall field coverage can be expanded. Moreover, the doubly near-far problem can be mitigated as backscatter transmitters can use unmodulated RF signals sent from nearby carrier emitters to harvest energy and backscatter data BIB002 . Although carrier emitters are bulky and their deployment is costly, the manufacturing cost of carrier emitters and backscatter receivers in BBCSs is lower than that in MBCSs due to the simple design of the components BIB003 . 3) Ambient Backscatter Communications Systems: Similar to BBCSs, carrier emitters in ABCSs are also separated from backscatter receivers. Different from BBCSs, carrier emitters in ABCSs are already-available ambient RF sources, e.g., TV towers, cellular base stations, and Wi-Fi APs, instead of dedicated RF sources as in BBCSs. As a result, ABCSs have some advantages compared with BBCSs. First, because ABCSs use already-available RF sources, there is no need to deploy and maintain dedicated RF sources, thereby reducing the cost and power consumption of ABCSs. Second, by utilizing existing RF signals, there is no need to allocate new frequency spectrum for ABCSs, and the spectrum resource utilization can be improved. However, because modulated ambient signals are used for backscatter communications, ABCSs also have some disadvantages compared with BBCSs. First, modulated ambient RF signals are unpredictable and dynamic and act as direct interference at the backscatter receiver, which largely limits the performance of an ABCS; in contrast, the unmodulated carrier signals of a BBCS can easily be eliminated before backscattered-signal detection.
Second, since the ambient RF sources of ABCSs are not controllable, e.g., in terms of transmission power and location, the design and deployment of an ABCS to achieve optimal performance are often more complicated than those of a BBCS.
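The round-trip path loss and the benefit of placing a dedicated carrier emitter close to the backscatter transmitter can be illustrated with a simplified free-space backscatter link budget, in which the emitter-to-tag and tag-to-receiver distances both enter the received power. The Python sketch below ignores fading, polarization mismatch, and the modulation factor, and all powers, gains, and distances are assumed example values rather than figures from the surveyed works.

```python
import math

def backscatter_rx_power_dbm(p_tx_dbm, g_tx_dbi, g_tag_dbi, g_rx_dbi,
                             freq_hz, d_forward_m, d_backscatter_m):
    """Simplified free-space backscatter link budget: the carrier travels
    emitter -> tag, is reflected, and travels tag -> receiver, so both
    hop distances (and the tag antenna gain, twice) appear in the result."""
    lam = 3e8 / freq_hz
    path_db = 20 * math.log10(lam / (4 * math.pi * d_forward_m)) \
            + 20 * math.log10(lam / (4 * math.pi * d_backscatter_m))
    return p_tx_dbm + g_tx_dbi + 2 * g_tag_dbi + g_rx_dbi + path_db

# Monostatic-like geometry: reader 10 m from the tag, so both hops are 10 m.
mono = backscatter_rx_power_dbm(30, 6, 2, 6, 915e6, 10, 10)
# Bistatic layout: a dedicated emitter 1 m from the tag, receiver 10 m away.
bi = backscatter_rx_power_dbm(30, 6, 2, 6, 915e6, 1, 10)
print(f"monostatic: {mono:.1f} dBm, bistatic: {bi:.1f} dBm")
```

Under these assumptions, moving the carrier emitter from 10 m to 1 m from the tag raises the backscattered signal power by roughly 20 dB, which is the intuition behind the coverage gains of well-placed bistatic emitters.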
Ambient Backscatter Communications: A Contemporary Survey <s> C. Fundamentals of Modulated Backscatter Communications <s> Analog-to-digital converters (ADCs) are ubiquitous, critical components of software radio and other signal processing systems. This paper surveys the state-of-the-art of ADCs, including experimental converters and commercially available parts. The distribution of resolution versus sampling rate provides insight into ADC performance limitations. At sampling rates below 2 million samples per second (Gs/s), resolution appears to be limited by thermal noise. At sampling rates ranging from /spl sim/2 Ms/s to /spl sim/4 giga samples per second (Gs/s), resolution falls off by /spl sim/1 bit for every doubling of the sampling rate. This behavior may be attributed to uncertainty in the sampling instant due to aperture jitter. For ADCs operating at multi-Gs/s rates, the speed of the device technology is also a limiting factor due to comparator ambiguity. Many ADC architectures and integrated circuit technologies have been proposed and implemented to push back these limits. The trend toward single-chip ADCs brings lower power dissipation. However, technological progress as measured by the product of the ADC resolution (bits) times the sampling rate is slow. Average improvement is only /spl sim/1.5 bits for any given sampling frequency over the last six-eight years. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Fundamentals of Modulated Backscatter Communications <s> Passive UHF RFID tag consists of a microchip attached directly to an antenna. Proper impedance match between the antenna and the chip is crucial in RFID tag design. It directly influences RFID system performance characteristics such as the range of a tag. It is known that an RFID microchip is a nonlinear load whose complex impedance in each state varies with the frequency and the input power. This paper illustrates a proper calculation of the tag power reflection coefficient for maximum power transfer by taking into account of the changing chip impedance versus frequency. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Fundamentals of Modulated Backscatter Communications <s> In this paper, we discuss antennas and propagation aspects in current passive UHF RFID systems. We consider a "reader-tag-reader" link and concentrate on each part of it: reader antennas, propagation channel, and tags. We include channel modeling equations and support our discussion with experimental measurements of tag performance in various conditions. We also provide a comprehensive literature review. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Fundamentals of Modulated Backscatter Communications <s> Backscatter radio - wireless communication by modulating signals scattered from a transponder (RF tag) - is fundamentally different from conventional radio because it involves two distinct links: the power-up link for powering passive RF tags, and the backscatter link for describing backscatter communication. Because of severe power constraints on the RF tag, a thorough knowledge of the backscatter channel is necessary to maximize backscatter-radio and radio-frequency identification (RFID) system performance. This article presents four link budgets that account for the major propagation mechanisms of the backscatter channel, along with a detailed discussion of each. Use of the link budgets is demonstrated by a practical UHF RFID portal example. 
The benefits of future 5.8 GHz multi-antenna backscatter-radio systems are shown. An intuitive analogy for understanding the antenna polarization of RF tag systems is presented. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Fundamentals of Modulated Backscatter Communications <s> A low-cost (6 Euro per sensor), low-power (in the order of $200~\mu \text{W}$ per sensor), with high communication range (on the order of 250 m), scatter radio sensor network is presented, for soil moisture monitoring at multiple locations. The proposed network utilizes analog frequency modulation in a bistatic network architecture (i.e., the emitter and reader are not colocated), while the sensors operate simultaneously, using frequency-division multiple access. In contrast to prior art, this paper utilizes an ultralow-cost software-defined radio reader and offers custom microstrip capacitive sensing with simple calibration, as well as modulation pulses for each scatter radio sensor with 50% duty cycle; the latter is necessary for scalable network designs. The overall root mean squared error below 1% is observed, even for the range of 250 m. This is another small (but concrete) step for the adoption of scatter radio technology as a key enabling technology for scalable, large-scale, low-power, and cost environmental sensor networking. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Fundamentals of Modulated Backscatter Communications <s> In recent few years, the antenna and sensor communities have witnessed a considerable integration of radio frequency identification (RFID) tag antennas and sensors because of the impetus provided by internet of things (IoT) and cyber-physical systems (CPS). Such types of sensor can find potential applications in structural health monitoring (SHM) because of their passive, wireless, simple, compact size, and multimodal nature, particular in large scale infrastructures during their lifecycle. The big data from these ubiquitous sensors are expected to generate a big impact for intelligent monitoring. A remarkable number of scientific papers demonstrate the possibility that objects can be remotely tracked and intelligently monitored for their physical/chemical/mechanical properties and environment conditions. Most of the work focuses on antenna design, and significant information has been generated to demonstrate feasibilities. Further information is needed to gain deep understanding of the passive RFID antenna sensor systems in order to make them reliable and practical. Nevertheless, this information is scattered over much literature. This paper is to comprehensively summarize and clearly highlight the challenges and state-of-the-art methods of passive RFID antenna sensors and systems in terms of sensing and communication from system point of view. Future trends are also discussed. The future research and development in UK are suggested as well. <s> BIB006
Despite differences in configurations, MBCSs, BBCSs, and ABCSs share the same fundamentals. In particular, instead of initiating their own RF transmissions as in conventional wireless systems, a backscatter transmitter can send data to a backscatter receiver just by tuning its antenna impedance to reflect the received RF signals. Specifically, the backscatter transmitter maps its bit sequence to RF waveforms by adjusting the load impedance of the antenna. The reflection coefficient of the antenna is computed by BIB004 , BIB002 BIB006 BIB003 :

Γ_i = (Z_i − Z_a^*) / (Z_i + Z_a),     (1)

where Z_a is the antenna impedance, ^* is the complex-conjugate operator, and i = 1, 2 represents the switch state. In general, the number of states can be greater than 2, e.g., 4 or 8 states. However, in backscatter communications systems, two-state modulation is typically used because of its simplicity. By switching between two loads Z_1 and Z_2, as shown in Fig. 3(a), the reflection coefficient can be shifted between the absorbing and reflecting states, respectively. In the absorbing state, i.e., impedance matching, RF signals are absorbed, and this state represents bit '0'. Conversely, in the reflecting state, i.e., impedance mismatching, the RF signals are reflected, and this state represents bit '1'. This scheme is known as load modulation. There are two ways to decode the modulated signals sent from the backscatter transmitter: (i) using an analog-to-digital converter (ADC) BIB001 and (ii) using an averaging mechanism. The ADC has been commonly used in backscatter communications systems, especially in RFID systems. The procedure of using the ADC to decode modulated signals is as follows. The backscatter receiver samples the received signals at the Nyquist-information rate of the ambient signals, e.g., TV signals. The received samples, i.e., y[n], at the backscatter receiver are expressed as follows:

y[n] = x[n] + α B[n] x[n] + w[n],     (2)

where x[n] are the samples of the TV signals received by the backscatter receiver, w[n] is the noise, α is the complex attenuation of the backscattered signals relative to the TV signals, and B[n] are the bits which are transmitted by the backscatter transmitter. Then, the average power of N received samples is calculated by the backscatter receiver as follows:

(1/N) Σ_{n=1}^{N} |y[n]|^2 = (1/N) Σ_{n=1}^{N} |x[n] + α B x[n] + w[n]|^2,     (3)

where B takes a value of '0' or '1' depending on the non-reflecting and reflecting states, respectively. As x[n] is uncorrelated with the noise w[n], (3) can be expressed as follows:

(1/N) Σ_{n=1}^{N} |y[n]|^2 = |1 + α B|^2 (1/N) Σ_{n=1}^{N} |x[n]|^2 + (1/N) Σ_{n=1}^{N} |w[n]|^2.     (4)

Denote P as the average power of the received TV signals, i.e.,

P = (1/N) Σ_{n=1}^{N} |x[n]|^2.     (5)

Ignoring the noise, the average power at the backscatter receiver is |1 + α|^2 P and P when the backscatter transmitter is in the reflecting (B = 1) and non-reflecting (B = 0) states, respectively. Based on the difference between the two power levels, i.e., |1 + α|^2 P and P, the backscatter receiver can decode the data from the backscattered signals with a conventional digital receiver. However, the ADC component consumes a significant amount of power, and thus may not be feasible for use in ultra-low-power systems. Therefore, Liu et al. propose the averaging mechanism to decode the modulated signals without using ADCs and oscillators. Intuitively, at the beginning of each packet transmission, the transmitter sends a known preamble that the receiver detects using bit-level correlation on the digital hardware, i.e., the micro-controller. In conventional backscatter systems, e.g., RFID systems, a backscatter device, i.e., a tag, only correlates when it is powered by the reader.
However, in the case that the receiver does not know when the transmitter is transmitting data, it might need to correlate continuously. This process is power-consuming and impractical for energy-constrained backscatter receivers. Liu et al. use a comparator to detect bit transitions through a predefined threshold. Once the receiver detects the bit transitions, it starts the correlation process. Additionally, an alternating 0-1 bit sequence is inserted before the preamble in order to allow the comparator sufficient leeway to wake up and then use traditional mechanisms to detect bit boundaries and perform framing. The preamble is followed by a header including the type of packet, the destination and source addresses, and the length of the packet. This is followed by the packet's data. Both the header and data include CRCs to detect errors. Specifically, the averaging mechanism requires only simple analog devices, i.e., an envelope-averaging circuit and a threshold calculator, at the backscatter receiver, as shown in Fig. 3(b) . By averaging the received signals, the envelope circuit first smooths these signals. Then, the threshold calculator computes the threshold value, which is the average of the two signal levels, and compares it with the smoothed signals to detect bits '1' and '0'. After that, the demodulated bits are passed through a decoder to derive the original data. In backscatter communications systems, the backscatter transmitter and backscatter receiver do not require complex components such as oscillators, amplifiers, filters, and mixers, which consume a considerable amount of energy. Thus, backscatter communications systems have low power consumption and low implementation cost, and are easy to implement and deploy. It is important to note that the duty-cycle, i.e., the ratio of the pulse duration to the total period of the waveform, can significantly affect the transmission performance. In particular, when the duty-cycle is close to 50%, the backscattered signal power is maximized BIB005 . This is due to the fact that a 50% duty-cycle pulse consists only of odd-order harmonics of the fundamental frequency. Hence, signals whose duty-cycle is not close to 50% require more bandwidth, thereby limiting the capacity of the system. As a result, many works in the literature adopt a 50% duty-cycle for backscatter communications.
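The two-level power model in (2)-(5) and the threshold-based detection can be reproduced with a short simulation. The following Python sketch is purely illustrative: the sample count, the complex attenuation α, and the SNR are assumed values, and the detection threshold is derived from the known levels rather than estimated from a preamble as a real receiver would do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not from the survey).
N = 500                            # ambient samples averaged per backscattered bit
alpha = 0.4 * np.exp(1j * 0.3)     # complex attenuation of the backscatter path
snr_db = 20

def rx_samples(tag_bit):
    """y[n] = x[n] + alpha*B*x[n] + w[n]: ambient TV samples plus the tag's
    reflected copy (present only when B = 1) plus receiver noise, as in (2)."""
    x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    w *= 10 ** (-snr_db / 20)
    return x + alpha * tag_bit * x + w

tag_bits = rng.integers(0, 2, 64)
avg_power = np.array([np.mean(np.abs(rx_samples(b)) ** 2) for b in tag_bits])

# Threshold halfway between the expected levels P and |1 + alpha|^2 * P (here P = 1).
threshold = (1 + abs(1 + alpha) ** 2) / 2
decoded = (avg_power > threshold).astype(int)
print("bit errors:", int(np.sum(decoded != tag_bits)))
```

With a few hundred samples averaged per backscattered bit, the gap between the P and |1 + α|^2 P levels is easily resolved, which is why such a simple average-and-threshold receiver suffices.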
Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> RF modulated backscatter (RFMB), also known as modulated radar cross section or sigma modulation, is a RF transmission technique useful for short-range, low-data-rate applications, such as nonstop toll collection, electronic shelf tags, freight container identification and chassis identification in automobile manufacturing, that are constrained to have extremely low power requirements. The small-scale fading observed on the backscattered signal has deeper fades than the signal from a traditional one-way link of the same range in the same environment because the fading on the backscattered signal is the product of the fading on the off-board-generated carrier times the fading on the reflected signal. This paper considers the continuous wave (CW) type of RFMB, in which the interrogator transmitter and receiver antennas are different. This two-way link also doubles the path loss exponent of the one-way link. This paper presents the cumulative distribution functions for the measured small-scale fading and the measured path loss for short ranges in an indoor environment at 2.4 GHz over this type of link. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> In this paper, an overview of antenna design for passive radio frequency identification (RFID) tags is presented. We discuss various requirements of such designs, outline a generic design process including range measurement techniques and concentrate on one practical application: RFID tag for box tracking in warehouses. A loaded meander antenna design for this application is described and its various practical aspects such as sensitivity to fabrication process and box content are analyzed. Modeling and simulation results are also presented which are in good agreement with measurement data. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> Passive UHF RFID tag consists of a microchip attached directly to an antenna. Proper impedance match between the antenna and the chip is crucial in RFID tag design. It directly influences RFID system performance characteristics such as the range of a tag. It is known that an RFID microchip is a nonlinear load whose complex impedance in each state varies with the frequency and the input power. This paper illustrates a proper calculation of the tag power reflection coefficient for maximum power transfer by taking into account of the changing chip impedance versus frequency. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> This cutting-edge book serves as a comprehensive introduction to RFID, offering you a detailed understanding of design essentials and applications, and providing a thorough overview of management issues. By comparing RFID with WLAN and Bluetooth, this practical resource shows you how RFID technology can help you overcome many design challenges and limitations in the field. The book explains the design of electronic circuits, antennas, interfaces, data encoding schemes, and complete RFID systems. Starting with the basics of RF and microwave propagation, you learn about major system components including tags and readers. This hands-on reference distills the latest RFID standards, and examines RFID at work in supply chain management, intelligent buildings, intelligent transportation systems, and tracking animals. 
RFID is controversial among privacy and consumer advocates, and this book looks at every angle concerning security, ethics, and protecting consumer data. From design details... to applications... to socio-cultural implications, this authoritative volume offers the knowledge you need to create an optimal RFID system and maximize its performance. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> Backscatter radio - wireless communication by modulating signals scattered from a transponder (RF tag) - is fundamentally different from conventional radio because it involves two distinct links: the power-up link for powering passive RF tags, and the backscatter link for describing backscatter communication. Because of severe power constraints on the RF tag, a thorough knowledge of the backscatter channel is necessary to maximize backscatter-radio and radio-frequency identification (RFID) system performance. This article presents four link budgets that account for the major propagation mechanisms of the backscatter channel, along with a detailed discussion of each. Use of the link budgets is demonstrated by a practical UHF RFID portal example. The benefits of future 5.8 GHz multi-antenna backscatter-radio systems are shown. An intuitive analogy for understanding the antenna polarization of RF tag systems is presented. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> RFID technology for use in real-time object identification is being rapidly adopted in several fields such as logistic, automotive, surveillance, automation systems, etc. [1]. A radiofrequency identification (RFID) system consists of readers and tags applied to objects. The reader interrogates the tags via a wireless link to obtain the data stored on them. The cheapest RFID tags with the largest commercial potential are passive or semi-passive, and the energy necessary for tag–reader communication is harvested from the reader’s signal. Passive RFID tags are usually based on backscatter modulation, where the antenna reflection properties are changed according to information data [2]. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter's feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization's Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. 
Antenna Design <s> This study introduces a 5.8 GHz RF-powered transceiver that includes a transmitter using an IF-based quadrature backscattering (IFQB) technique. The IFQB transmitter can produce a quadrature modulated signal without active RF circuits, such as PLL, local generation, and local distribution circuits. Thus, the IFQB technique can significantly reduce the power consumption while achieving denser constellations by the quadrature modulation. The RF-powered transceiver consists of the IFQB transmitter, an OOK receiver, and a power management unit (PMU) for RF powering. The transmitter and receiver operate under a 0.6 V power supply provided by the PMU to further reduce the power consumption. We fabricated a prototype RF-powered transceiver using a 65 nm Si CMOS process to confirm the validity of the proposed technique. During the measurements, the transmitter achieved 2.5 Mb/s with a 32-QAM modulation while consuming 113 µW. In addition, a wireless temperature sensing demonstration was conducted using the prototype sensor node with the presented RF-powered transceiver. <s> BIB008 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> In this work, we show how modulated backscatter signals can be crafted to yield channelized band-pass signals akin to those transmitted by many conventional wireless devices. As a result, conventional wireless devices can receive these backscattered signals without any modification (neither hardware nor software) to the conventional wireless device. We present a proof of concept using the Bluetooth 4.0 Low Energy, or BLE, standard widely available on smart phones and mobile devices. Our prototype backscatter tag produces three-channel bandpass frequency shift keying (FSK) packets at 1 Mbps that are indistinguishable from conventional BLE advertising packets. An unmodified Apple iPad is shown to correctly receive and display these packets at a range of over 9.4 m using its existing iOS Bluetooth stack with no changes whatsoever. We create all three BLE channels by backscattering a single incident CW carrier using a novel combination of fundamentalmode and harmonic-mode backscatter subcarrier modulation, with two of the band-pass channels generated by the fundamental mode and one of the band-pass channels generated by the second harmonic mode. The backscatter modulator consumes only 28.4 pJ/bit, compared with over 10 nJ/bit for conventional BLE transmitters. The backscatter approach yields over 100X lower energy per bit than a conventional BLE transmitter, while retaining compatibility with billions of existing Bluetooth enabled smartphones and mobile devices. <s> BIB009 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Antenna Design <s> Backscatter communication promises significant power and complexity advantages for Internet of Things devices such as radio frequency identification (RFID) tags and wireless sensor nodes. One perceived disadvantage of backscatter communication has been the requirement for specialized hardware such as RFID readers to receive backscatter signals. In this paper, we show how backscatter signals can be designed for compatibility with the Bluetooth 4.0 low energy (BLE) chipsets already present in billions of smart phones and tablets. We present a prototype microcontroller-based “BLE-Backscatter” tag that produces bandpass frequency-shift keying modulation at 1 Mb/s, enabling compatibility with conventional BLE advertising channels. 
Using a +23-dBm equivalent isotropically radiated power continuous wave (CW) carrier source, we demonstrate a range of up to 13 m between the tag and an unmodified Apple iPad Mini as well as a PC with the Nordic Semiconductor nRF51822 chipset. With the tag 1 m from the receiver, we demonstrate a range of up to 30 m between the CW carrier source and the tag. In both cases, the existing Bluetooth stack was used, with no modifications whatsoever to hardware, firmware, or software. The backscatter tag consumes only 1.56 nJ/b, over $6\times $ less than the lowest power commercial Bluetooth transmitters. <s> BIB010
In a backscatter communications system, an antenna is an essential component used to receive and backscatter signals. Thus, the design of the antenna can significantly affect the performance of the backscatter communications system. The maximum practical distance between the backscatter transmitter and the RF source of the system can be calculated by the Friis equation BIB002 as follows:

d_max = (λ / 4π) √( P_t G_t(θ, ϕ) G_r(θ, ϕ) p τ / P_th ),     (6)

where λ is the wavelength, P_t is the power transmitted by the RF source, and G_t(θ, ϕ) and G_r(θ, ϕ) are the gains of the transmit antenna and the receive antenna in the direction (θ, ϕ), respectively. P_th is the minimum threshold power necessary to provide sufficient power to the chip attached to the backscatter transmitter's antenna, p is the polarization efficiency, and τ is the power transmission coefficient determined by the antenna impedance and the chip impedance of the backscatter transmitter. Accordingly, it is important to adjust and set these parameters to achieve optimal performance for the backscatter communications system. 1) Operating Frequency: In a backscatter communications system, the operating frequency of an antenna varies over a wide range depending on many factors such as local regulations, target applications, and the transmission protocols BIB003 , [27] . For example, RFID systems operate at frequencies ranging from the low-frequency band, i.e., 125 kHz - 134.2 kHz, and the high-frequency band, i.e., 13.56 MHz, to the ultra-high-frequency (UHF) band, i.e., 860 MHz - 960 MHz, and the super-high-frequency (SHF) band, i.e., 2.4 GHz - 2.5 GHz and 5.725 GHz - 5.875 GHz [27] , BIB004 . Most recent RFID systems adopt EPCglobal Class 1 Gen 2 and ISO 18000-6C as standard regulations for designs in the UHF band. However, the deployed frequency differs among regions, e.g., 866.5 MHz in Europe, 915 MHz in North America, and 953 MHz in Asia BIB003 , BIB004 . It is important to note that increasing the operating frequency results in a higher power consumption and a more complicated design for active RF circuits BIB008 . However, in a backscatter communications system, the backscatter transmitter antenna does not contain active RF circuits, and thus the power consumption increases only negligibly at higher frequencies. Therefore, several works in the literature suggest that backscatter communications systems have some benefits when operating in the SHF band, as follows: • By backscattering SHF signals, backscatter communications systems can be compatible with billions of existing Bluetooth and Wi-Fi devices BIB009 . Hence, there is high potential to capitalize on the ubiquity of conventional wireless systems to support low-cost, low-power backscatter communications systems BIB010 BIB001 BIB007 . • As the operating frequency of the backscatter transmitters increases, the half-wave dipole length, i.e., half of the wavelength, is reduced, e.g., 16 cm at 915 MHz, 6 cm at 2.45 GHz, and 2.5 cm at 5.79 GHz BIB005 . Hence, the size of the antenna can be greatly shrunk in the SHF band BIB008 . This increases the antenna gain and object immunity. As the antenna is smaller, the backscatter transmitter size becomes smaller, thereby reducing the backscatter receiver's size and making it possible to embed it in mobile and hand-held readers. • As the SHF band has more available bandwidth than the UHF band, backscatter communications systems are able to use spread spectrum techniques to increase the data rate.
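As a quick numerical illustration of the range expression in (6), the Python sketch below evaluates d_max for one plausible UHF setting. Every value here (transmit power, antenna gains, polarization efficiency, chip sensitivity) is an assumed example, not a figure taken from the surveyed systems.

```python
import math

def max_range_m(p_t_w, g_t_lin, g_r_lin, p_pol, tau, p_th_w, freq_hz):
    """Friis-based read-range estimate, following the form of (6)."""
    lam = 3e8 / freq_hz
    return (lam / (4 * math.pi)) * math.sqrt(p_t_w * g_t_lin * g_r_lin
                                             * p_pol * tau / p_th_w)

# Assumed example: 1 W conducted power into a 6 dBi reader antenna at 915 MHz,
# a 2 dBi tag antenna, 3 dB polarization loss (p = 0.5), perfect matching
# (tau = 1), and a chip sensitivity of -18 dBm.
d = max_range_m(p_t_w=1.0, g_t_lin=10 ** (6 / 10), g_r_lin=10 ** (2 / 10),
                p_pol=0.5, tau=1.0, p_th_w=10 ** ((-18 - 30) / 10), freq_hz=915e6)
print(f"estimated maximum range: {d:.1f} m")
```

Halving τ or the polarization efficiency scales this estimate by 1/√2, which is why the matching and polarization considerations in the following subsections matter for range.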
Recently, ultra-wideband (UWB) backscatter technology has been introduced BIB006 . A UWB system can operate with an instantaneous spectral occupancy of 500 MHz or a fractional bandwidth of more than 20%. The key idea of the UWB system is that the UWB signals are generated by driving the antenna with very short electrical pulses, i.e., one nanosecond or less. As such, the bandwidth of the transmitted signals can increase up to one GHz or more. Hence, UWB avoids the multi-path fading effect, thereby increasing the robustness and reliability of backscatter communications systems. Furthermore, as the UWB system operates at baseband, it is free of sine-wave carriers and does not require intermediate frequency processing. This can reduce the hardware complexity and power consumption. 2) Impedance Matching: The impedance matching (mismatching) between the chip impedance, i.e., the load impedance, and the antenna impedance is required to ensure that most of the RF signals are absorbed (reflected) in the absorbing (reflecting) state. Thus, finding suitable values of the antenna impedance and the chip impedance is critical in the antenna design. The complex chip impedance and antenna impedance are expressed as follows BIB002 :

Z_c = R_c + jX_c   and   Z_a = R_a + jX_a,

where R_c and R_a are the chip and antenna resistances, respectively, and X_c and X_a are the chip and antenna reactances, respectively. The chip impedance Z_c is hard to change due to technological limitations. This stems from the fact that Z_c is a function of the operating frequency and the power received by the chip P_c BIB003 . As a result, changing the antenna impedance is more convenient for performing the impedance matching. P_c can be represented by the power received at the antenna P_a and the power transmission coefficient τ as P_c = P_a τ. Here, τ is expressed as follows BIB002 :

τ = 4 R_c R_a / |Z_c + Z_a|^2,   0 ≤ τ ≤ 1.     (7)

The closer τ is to 1, the better the impedance matching between the backscatter transmitter chip and antenna. The impedance matching is perfect when τ = 1. Thus, based on (7) , the antenna impedance can be easily determined to achieve perfect impedance matching, i.e., τ = 1 when Z_a = Z_c^*.
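The matching condition can be checked numerically. In the Python sketch below, the chip impedance is an assumed illustrative value (real tag chips differ and vary with frequency and input power, as noted above); the code simply evaluates (7) for a conjugate-matched antenna and for a mismatched 50-ohm antenna.

```python
def power_transmission_coeff(z_chip: complex, z_ant: complex) -> float:
    """tau = 4 * R_c * R_a / |Z_c + Z_a|^2, following (7)."""
    return 4 * z_chip.real * z_ant.real / abs(z_chip + z_ant) ** 2

z_c = complex(15, -150)   # assumed example chip impedance (ohms)

print("conjugate-matched antenna:", power_transmission_coeff(z_c, z_c.conjugate()))
print("50-ohm antenna           :", round(power_transmission_coeff(z_c, 50 + 0j), 3))
```

Plugging the mismatched τ ≈ 0.11 into (6) would cut the estimated range by roughly a factor of three, which illustrates why the antenna is usually designed around the chip impedance rather than the other way around.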
Ambient Backscatter Communications: A Contemporary Survey <s> 3) Antenna Gain: <s> The most-up-to-date resource available on antenna theory and design. Expanded coverage of design procedures and equations makes meeting ABET design requirements easy and prepares readers for authentic situations in industry. New coverage of microstrip antennas exposes readers to information vital to a wide variety of practical applications.Computer programs at end of each chapter and the accompanying disk assist in problem solving, design projects and data plotting.-- Includes updated material on moment methods, radar cross section, mutual impedances, aperture and horn antennas, and antenna measurements.-- Outstanding 3-dimensional illustrations help readers visualize the entire antenna radiation pattern. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> 3) Antenna Gain: <s> This cutting-edge book serves as a comprehensive introduction to RFID, offering you a detailed understanding of design essentials and applications, and providing a thorough overview of management issues. By comparing RFID with WLAN and Bluetooth, this practical resource shows you how RFID technology can help you overcome many design challenges and limitations in the field. The book explains the design of electronic circuits, antennas, interfaces, data encoding schemes, and complete RFID systems. Starting with the basics of RF and microwave propagation, you learn about major system components including tags and readers. This hands-on reference distills the latest RFID standards, and examines RFID at work in supply chain management, intelligent buildings, intelligent transportation systems, and tracking animals. RFID is controversial among privacy and consumer advocates, and this book looks at every angle concerning security, ethics, and protecting consumer data. From design details... to applications... to socio-cultural implications, this authoritative volume offers the knowledge you need to create an optimal RFID system and maximize its performance. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> 3) Antenna Gain: <s> In this paper, we discuss antennas and propagation aspects in current passive UHF RFID systems. We consider a "reader-tag-reader" link and concentrate on each part of it: reader antennas, propagation channel, and tags. We include channel modeling equations and support our discussion with experimental measurements of tag performance in various conditions. We also provide a comprehensive literature review. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> 3) Antenna Gain: <s> The majority of RFID tags are linearly polarized dipole antennas, but a few use a planar, dual-dipole antenna that facilitates circular polarization, but requires a three-terminal IC. In this paper, we present a novel way to achieve circular polarization with a planar antenna using a two-terminal IC. We present an intuitive methodology for design, and perform experiments that validate circular polarization. The results show that the tag exhibits strong circular polarization, but the precise axial ratio of the tag remains uncertain due to lack of precision in the experimental system. 
<s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> 3) Antenna Gain: <s> Backscatter radio - wireless communication by modulating signals scattered from a transponder (RF tag) - is fundamentally different from conventional radio because it involves two distinct links: the power-up link for powering passive RF tags, and the backscatter link for describing backscatter communication. Because of severe power constraints on the RF tag, a thorough knowledge of the backscatter channel is necessary to maximize backscatter-radio and radio-frequency identification (RFID) system performance. This article presents four link budgets that account for the major propagation mechanisms of the backscatter channel, along with a detailed discussion of each. Use of the link budgets is demonstrated by a practical UHF RFID portal example. The benefits of future 5.8 GHz multi-antenna backscatter-radio systems are shown. An intuitive analogy for understanding the antenna polarization of RF tag systems is presented. <s> BIB005
Antenna gain is the ratio of the power radiated in the direction of peak radiation to that of an isotropic source [40] . In general, a higher antenna gain leads to a longer transmission range. Thus, it is important to determine the antenna gain based on the target communication distance when designing the antenna [41] . However, as a high-gain antenna is more expensive and larger than a low-gain antenna, it is not always a feasible and economical choice for implementation. In particular, for scenarios in which the backscatter transmitters are not far away from the backscatter receiver, or in which information about the direction of the incoming signals is not available, low-gain antennas are preferred [41] , [42] . Another important factor in designing the antenna is the on-object gain penalty, i.e., the gain penalty loss. This loss represents the reduction of antenna gain caused by the material to which the antenna is attached BIB002 , . The on-object gain penalty depends on several factors such as material properties, object geometry, frequency, and antenna type. Hence, it is difficult to calculate the on-object gain penalty directly. Currently, a common and effective way to determine the on-object gain penalty is through simulations and measurements . 4) Polarization: Polarization, also known as orientation, is the curve traced by the end point of the instantaneous electric field vector BIB001 . In other words, it describes how the direction and magnitude of the field vector change over time. According to the shape of the trace, the polarization is classified as linear, circular, or elliptical. The power received at the antenna is maximized when the polarization of the incident wave matches that of the antenna. Thus, the orientations of the backscatter receiver and the backscatter transmitter can significantly affect the received power and the transmission range. For example, when the antennas of the backscatter receiver and the backscatter transmitter are aligned in parallel, the received power at the antennas is maximized. In contrast, if the backscatter transmitter's antenna is displaced by 90°, i.e., complete polarization mismatch, it is unable to communicate with the backscatter receiver. This is known as the polarization mismatch problem BIB002 . The polarization mismatch problem is an important issue that needs to be carefully considered when designing the antenna, as the orientation of the backscatter transmitter is usually arbitrary . Several works have attempted to solve this problem. One effective solution is to transmit a circularly polarized wave from the reader in the monostatic system BIB003 , BIB004 . In this way, the uplink and downlink polarization mismatch losses are both equal to 3 dB . Thereby, the backscatter transmitter is able to communicate with the backscatter receiver regardless of its orientation. Griffin and Durgin BIB005 implement two linearly polarized antennas, oriented at 45° with respect to each other, on the backscatter transmitter. By doing this, the complete polarization mismatch problem can be largely avoided.
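To make the combined effect of antenna gain and polarization mismatch concrete, the following sketch (an illustrative Python example, not taken from the cited works; the 915 MHz carrier, 5 m distance, transmit power, and gain values are assumptions) estimates the power delivered to a backscatter transmitter using the Friis equation and a cos² polarization loss factor for two linearly polarized antennas.

```python
import math

def friis_received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    """Free-space received power (dBm) from the Friis transmission equation."""
    wavelength = 3e8 / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

def polarization_loss_db(mismatch_deg):
    """Polarization loss factor for two linearly polarized antennas (cos^2 rule)."""
    plf = math.cos(math.radians(mismatch_deg)) ** 2
    return -10 * math.log10(plf) if plf > 0 else float("inf")

# Assumed example values: 915 MHz carrier, 30 dBm RF source, 6 dBi source antenna, 5 m range.
for tag_gain_dbi in (0.0, 3.0):           # low-gain vs. higher-gain tag antenna
    for mismatch_deg in (0, 45, 90):      # parallel, 45 degrees, complete mismatch
        p_rx = friis_received_power_dbm(30, 6, tag_gain_dbi, 915e6, 5.0)
        p_rx -= polarization_loss_db(mismatch_deg)
        print(f"gain={tag_gain_dbi} dBi, mismatch={mismatch_deg} deg -> {p_rx:.1f} dBm")
```

With these assumed numbers, a 3 dBi tag antenna gains roughly 3 dB of received power over a 0 dBi one, while a 90° mismatch suppresses the link entirely, which is why circular polarization or dual 45°-oriented antennas are used to bound the worst case.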
Ambient Backscatter Communications: A Contemporary Survey <s> E. Channel Coding and Decoding <s> Basic Principles of Radiofrequency Identification Antennas for RFID Transponders Transponders Antennas for Interrogators Interrogators Interrogator Communication and Control The Air Communication Link Commands for Transponders <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> E. Channel Coding and Decoding <s> In this work, the use of Space-Time Codes in passive Radio Frequency IDentification (RFID) systems is explored; this is feasible whenever a tag possesses multiple antennas. Information is encoded across the multiple tag antennas and received by an RFID reader, also typically equipped with multiple antennas. The nature of passive RFID induces a unique fading channel known as the dyadic backscatter channel, which differs statistically compared to the canonical Rayleigh fading channel. We introduce a modified dyadic channel for RFID backscatter that adequately captures the space-time coding paradigm. We then propose known orthogonal space-time codes and derive an upper bound on the the pairwise error probability (PEP), leading to estimates of the (asymptotic) diversity order. Interestingly, the diversity order is shown to depend only on the number of tag antennas but not the number of receive antennas; the resultant performance trade-offs is discussed. Lastly, simulation of the symbol error rates for different channel configurations are conducted to validate the analysis. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> E. Channel Coding and Decoding <s> Encoding techniques are becoming more important in communication. Techniques such as Miller, Manchester, and FM0 encoding can be used in various applications. Each technique has different operations based on their needs. Each encoding scheme should be used without losing any of its parameters. The finite state machine can be used for all encodings, because at a time the input has given the corresponding output can be occurred due to this. So speed can be increased. The fully-reused VLSI architecture of FM0, Manchester, and Miller encoders has reduced the number of transistors and maintains the DC balance. The simulation results of Xilinx indicate successful functions. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> E. Channel Coding and Decoding <s> Communication primitives such as coding and multiple antenna processing have provided significant benefits for traditional wireless systems. Existing designs, however, consume significant power and computational resources, and hence cannot be run on low complexity, power constrained backscatter devices. This paper makes two main contributions: (1) we introduce the first multi-antenna cancellation design that operates on backscatter devices while retaining a small form factor and power footprint, (2) we introduce a novel coding mechanism that enables long range communication as well as concurrent transmissions and can be decoded on backscatter devices. We build hardware prototypes of the above designs that can be powered solely using harvested energy from TV and solar sources. The results show that our designs provide benefits for both RFID and ambient backscatter systems: they enable RFID tags to communicate directly with each other at distances of tens of meters and through multiple walls. They also increase the communication rate and range achieved by ambient backscatter systems by 100X and 40X respectively. 
We believe that this paper represents a substantial leap in the capabilities of backscatter communication. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> E. Channel Coding and Decoding <s> With low monetary cost and minimal energy consumption, communications by means of reflection and scatter radio have emerged as key enabler for low-cost, large-scale and dense ubiquitous wireless sensor network applications. This work maximizes scatter radio communication range by (a) proposing a novel coherent receiver of frequency-shift keying (FSK) modulation for the bistatic scatter radio channel (i.e., carrier emitter and receiver are dislocated) and (b) employing specific short block-length cyclic error-correcting codes. Despite the presence of three unknown channel links due to the bistatic setup and multiple unknown scatter radio-related parameters, the proposed receiver vastly improves BER performance compared to state-of-the-art bistatic scatter radio receivers. Experimental corroborating results are offered, with a commodity software-defined radio (SDR) reader, a custom scatter radio tag and omnidirectional antennas. Tag-to-reader ranges up to 150 meters are reported with as little as 20 milliWatt transmission power, offering range extension of approximately 10 additional meters compared to state-of-the-art bistatic receivers. <s> BIB005
Channel coding, i.e., coding in the baseband, is a process that matches a message and its signal representation to the characteristics of the transmission channel. The main purpose of the coding process is to ensure reliable transmission by protecting the message from interference, collisions, and unintended modification of certain signal characteristics . At the backscatter receiver, the encoded baseband signals are decoded to recover the original message and detect any transmission errors. In backscatter communications systems, many conventional coding techniques can be adopted, such as non-return-to-zero (NRZ), Manchester, Miller, and FM0 , BIB001 .
• NRZ code: Bit '1' is represented by a high level and bit '0' is represented by a low level.
• Manchester code: Bit '1' is represented by a negative transition, i.e., from a high level to a low level, in the middle of the bit period. Bit '0' is represented by a positive transition, i.e., from a low level to a high level, in the middle of the bit period.
• Miller code: Bit '1' is represented by a transition, either high-to-low or low-to-high, at the half-bit period, while bit '0' is represented by the continuance of the bit '1' level over the next bit period .
• FM0 code: The phase of the baseband signal is inverted at the beginning of every symbol. Bit '0' has an additional transition in the middle of the symbol period, whereas bit '1' has no transition during the symbol period , BIB003 .
A minimal software sketch of the Manchester and FM0 encoders is provided below. NRZ and Manchester are two simple channel coding techniques that are widely adopted in backscatter communications systems, especially in RFID systems , BIB001 . However, the NRZ code performs poorly when the transmitted data contains a long run of bits '1' or '0', and the Manchester code requires twice the signaling rate of the original data. Thus, existing backscatter communications systems, i.e., UHF Class 1 Gen 2 RFID, BBCSs, and ABCSs, usually adopt the Miller and FM0 channel coding techniques due to their advantages such as enhanced signal reliability, reduced noise sensitivity, and simplicity , BIB003 . Nonetheless, as backscatter communications systems are evolving rapidly in terms of applications, technology, and scale, the conventional channel coding techniques may not meet emerging requirements such as high data rates, long communication range, and robustness. Hence, several novel coding techniques have been proposed. Boyer and Roy BIB002 introduce an orthogonal space-time block code (OSTBC) to improve the data rate and reliability of RFID systems. The key idea of the OSTBC is to transmit data over multiple antennas, i.e., multiple-input multiple-output technology, using orthogonal code blocks. In particular, this channel coding scheme transmits several symbols simultaneously, spread into block codes over space and time. As such, the OSTBC achieves the maximum diversity order with linear decoding complexity, thereby improving the performance of the system. Durgin and Degnan highlight that the FM0 coding used in the ISO 18000-6C standard for UHF RFID tags is simple, but may not achieve the maximum throughput. The authors then propose a balanced block code to increase the throughput while maintaining the simplicity of the system. To do so, the balanced block code calculates the frequency spectrum for each of the resulting balanced codewords. Then, the codewords with the deepest spectral nulls at direct current are selected and assigned to a Gray-coded ordered set of the input bits.
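The following is a minimal sketch of the Manchester and FM0 encoders described above (an assumed, simplified Python implementation; the level convention, the starting level of FM0, and the half-bit representation are illustrative choices rather than part of any standard's reference code).

```python
def manchester_encode(bits):
    """Manchester: bit '1' -> high-to-low mid-bit transition, bit '0' -> low-to-high."""
    half_bits = []
    for b in bits:
        half_bits += [1, 0] if b == 1 else [0, 1]
    return half_bits

def fm0_encode(bits, start_level=1):
    """FM0: invert the level at every symbol boundary; bit '0' adds a mid-symbol transition."""
    level = start_level
    half_bits = []
    for b in bits:
        level ^= 1                                  # phase inversion at the symbol boundary
        first = level
        second = level if b == 1 else level ^ 1     # extra mid-symbol transition only for '0'
        half_bits += [first, second]
        level = second                              # carry the last level into the next symbol
    return half_bits

data = [1, 0, 1, 1, 0]
print("Manchester:", manchester_encode(data))
print("FM0       :", fm0_encode(data))
```

Each input bit becomes two half-bit levels, which is why Manchester (and FM0) signaling occupies twice the bandwidth of NRZ for the same data rate.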
In the balanced block code construction, if the Hamming distance between a codeword and its non-adjacent neighbor is lower than that between the codeword and its adjacent neighbor, the codeword and its adjacent neighbor are swapped. As a result, the codeword is placed next to its non-adjacent neighbor. This procedure achieves a local optimum that minimizes bit errors. The experimental results demonstrate that the balanced block code increases the throughput by 50% compared with conventional channel coding techniques, e.g., FM0. In BBCSs, to deal with the interleaving of backscatter channels, an efficient encoding technique, namely the short block-length cyclic channel code, is developed BIB005 . In particular, based on the principle of cyclic codes , this technique encodes data by associating codewords with polynomials. Thus, the short block-length cyclic channel code can be implemented efficiently with a simple shift register. The experimental results demonstrate that the proposed encoding technique can support communication ranges of up to 150 meters. Parks et al. BIB004 introduce μcode, a low-power encoding technique, to increase the communication range and enable concurrent transmissions in ABCSs. Instead of using a pseudorandom chip sequence, μcode uses a periodic signal to represent the information. In this way, the transmitted signals can be detected at the backscatter receiver without any phase synchronization, as long as the receiver knows the frequency of the periodic signal. The authors also note that the backscatter transmitter cannot transmit sine waves since it supports only two states, i.e., the absorbing and reflecting states. Hence, a periodic alternating sequence of bits "0" and "1" is adopted. Without the need for synchronization, μcode reduces the energy consumption as well as the complexity of the backscatter receiver. Through experiments, the authors demonstrate that μcode enables communication ranges 40 times longer than those of conventional ambient backscatter communications systems and also supports multiple concurrent transmissions.
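To illustrate the idea of detecting a periodic backscatter pattern without synchronization, the sketch below is a hypothetical, heavily simplified baseband model (not the actual μcode implementation; the chip length, threshold, noise level, and the assumption that the receiver operates on real-valued envelope samples are all illustrative choices). Bit '1' is encoded as an alternating reflect/absorb chip sequence and bit '0' as a constant state, and each bit is detected by correlating against the pattern and a one-chip-shifted copy so that an unknown chip offset does not matter.

```python
import numpy as np

CHIPS_PER_BIT = 64   # assumed spreading length (a real system picks this per link budget)

def ucode_encode(bits):
    """Bit '1' -> periodic alternating reflect/absorb chips, bit '0' -> constant (absorb)."""
    alternating = np.tile([1.0, -1.0], CHIPS_PER_BIT // 2)   # +-1: reflecting / absorbing
    idle = np.zeros(CHIPS_PER_BIT)
    return np.concatenate([alternating if b else idle for b in bits])

def ucode_decode(rx, threshold=0.2):
    """Detect each bit from the periodic-pattern energy, tolerating a one-chip offset."""
    pattern = np.tile([1.0, -1.0], CHIPS_PER_BIT // 2)
    bits = []
    for k in range(len(rx) // CHIPS_PER_BIT):
        chunk = rx[k * CHIPS_PER_BIT:(k + 1) * CHIPS_PER_BIT]
        # Correlate against the pattern and its shifted copy; taking the maximum magnitude
        # removes the dependence on which chip the receiver happened to start sampling on.
        c0 = abs(np.dot(chunk, pattern)) / CHIPS_PER_BIT
        c1 = abs(np.dot(chunk, np.roll(pattern, 1))) / CHIPS_PER_BIT
        bits.append(1 if max(c0, c1) > threshold else 0)
    return bits

tx = ucode_encode([1, 0, 1, 1, 0])
rx = tx + 0.3 * np.random.randn(len(tx))     # crude additive-noise channel
print(ucode_decode(rx))
```

The long spreading sequence trades data rate for correlation gain, which is the same trade-off that lets μcode extend the communication range of ambient backscatter links.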
Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> We provide an overview of our experimental system, testing in practice a sensor communicating through backscatter at a range of approximately 15 meters indoors, with 5 mW transmission power at 10 bits per second. Our system is designed for simultaneous reception of signals continuously radio-backscattered from several ultra low-cost sensors. This work highlights the idiosyncracies of the backscatter channel and presents a proof- of-concept demonstration of backscatter radio for wireless sensor networks, especially when low bit-rate, ultra low-cost sensors are required. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Backscatter radio systems, including high frequency radio frequency identification (RFID), operate in the dyadic backscatter channel - a two-way pinhole channel that has deeper small-scale fades than that of a conventional one-way channel. This paper shows that pinhole diversity is available in a rich scattering environment caused by modulating backscatter with multiple RF tag antennas - no diversity combining at the reader, channel knowledge, or signaling scheme change is required. Pinhole diversity, along with increased RF tag scattering aperture, can cause up to a 10 dB reduction in the power required to maintain a constant bit-error-rate for an RF tag with two antennas. Through examples, it is shown that this gain results in increased backscatter radio system communication reliability and up to a 78% increase in RF tag operating range. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> This work examines the idea of dislocating the carrier transmission from the tag-modulated carrier reception, i.e. bi-static rather than mono-static backscatter radio. In that way, more than one carrier transmitters can be distributed in a given geographical area and illuminate a set of RF tags/sensors that modulate and scatter the received carrier towards a single software-defined receiver. The increased number of carrier transmitters and their distributed nature assists tags to be potentially located closer to one carrier transmitter and thus, improves the power of the scattered signals towards the receiver. Specifically, this work a) carefully derives near-optimal detectors for bi-static backscatter radio and on/off keying (OOK) tag modulation (which is widely used in commercial tags), b) analytically calculates their bit error rate (BER) performance, and c) experimentally tests them in practice with a custom bistatic backscatter radio link. As a collateral dividend, it is shown that the non-linear processing of the proposed receivers requires certain attention on the utilized tag design principles, commonly overlooked in the literature, validating recently reported theoretical results on the microwave domain. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Radio Frequency Identification (RFID) systems, presently standardized under EPC Global Class-1 Gen-2, have attracted increasing interest as the next-generation technology for tagged object identification. One of the important objectives of this work is to highlight the fact that the performance of the backscatter uplink (determined by constellation choice and forward error correction) is coupled to the downlink via the power harvesting functionality. 
The concept of normalized power loss per bit is introduced for such RFID communication systems to capture the consequent trade-offs, that form the crux of the results. We explore the use of higher dimensional (4-QAM) modulation schemes in future RFID systems (beyond current binary modulation in Class-1 Gen 2) as a means to improve uplink bit rate. However, this results in significantly increased normalized power loss vis-a-vis 2-PSK, suggesting a role for FEC coding. New coded modulation schemes - based on unequal error protection - are proposed that provides additional degrees of freedom (via choice of code parameters) to trade-off spectral efficiency with normalized power loss. This is explored and quantified, resulting in design recommendations. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Dense monitoring of environmental parameters (e.g. air/soil humidity, ambient temperature) is critical in precision agriculture, urban area monitoring and environmental modeling applications. In this paper, the design of a novel wireless sensor network (WSN) is proposed, consisting of low-power and low-cost sensor nodes, deployed in a bistatic architecture (i.e. carrier emitter in a different location than the receiver) and achieving long-range backscatter communication. The tags modulate sensor information using analog frequency modulation (FM) and frequency division multiple access (FDMA) at the subcarrier frequency, even though a single carrier is assumed. In sharp contrast to prior art, the developed backscatter sensor network performs environmental monitoring over a relatively wide area. A proof-of-concept prototype WSN application has been developed for capacitive relative humidity (RH) sensing, with 1.5 mW per tag, 0.9 RMSE and range on the order of 50 m. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> For applications that require large numbers of wireless sensors spread in a field, backscatter radio can be utilized to minimize the monetary and energy cost of each sensor. Commercial backscatter systems such as those in radio frequency identification (RFID), utilize modulation designed for the bandwidth limited regime, and require medium access control (MAC) protocols for multiple access. High tag/sensor bitrate and monostatic reader architectures result in communication range reduction. In sharp contrast, sensing applications typically require the opposite: extended communication ranges that could be achieved with bitrate reduction and bistatic reader architectures. This work presents non-coherent frequency shift keying (FSK) for bistatic backscatter radio; FSK is appropriate for the power limited regime and also allows many RF tags/sensors to convey information to a central reader simultaneously with simple frequency division multiplexing (FDM). However, classic non-coherent FSK receivers are not directly applicable in bistatic backscatter radio. This work a) carefully derives the complete signal model for bistatic backscatter radio, b) describes the details of backscatter modulation with emphasis on FSK and its corresponding receiver, c) proposes techniques to overcome the difficulties introduced by the utilization of bistatic architectures, such as the carrier frequency offset (CFO), and d) presents bit error rate (BER) performance for the proposed receiver and carrier recovery techniques. 
<s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Ambient backscatter is a new communication technology that utilizes ambient radio frequency signals to enable battery-free devices to communicate with each other. In this paper, we study the problem of signal detection and bit error rate (BER) performance for this new communication system where the differential encoding is adopted to eliminate the necessity of channel estimation. We formulate a new transmission model, design the data detection approach, and derive the optimal/approximate closed-form detection thresholds. In addition, the performance at high signal-to-noise region (SNR) is also analyzed, where the lower and the upper bounds of BERs are found. Simulation results are then provided to corroborate our theoretical studies. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Backscatter radio communication has become a newly emerging technique for low-rate, low-power and large-scale wireless sensor networks. As this promising technology enables a long-range communication for sensors with low power in a distributed area, it is desirable to support wireless powered communication networks (WPCNs) that experience doubly near-far problem. In a backscatter radio based WPCN, users harvest energy from both the signal broadcast by the hybrid access point and the carrier signal transmitted by the carrier emitter in the downlink and transmit their own information in a passive way via the reflection of the carrier signal using frequency-shift keying modulation in the uplink. We characterize the energy-free condition and the signal-to-noise ratio (SNR) outage zone in a backscatter radio based WPCN. Numerical results demonstrate that the backscatter radio based WPCN achieves an increased long-range coverage and a diminished SNR outage zone compared to the active radio based WPCNs. <s> BIB008 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> In this work, we show how modulated backscatter signals can be crafted to yield channelized band-pass signals akin to those transmitted by many conventional wireless devices. As a result, conventional wireless devices can receive these backscattered signals without any modification (neither hardware nor software) to the conventional wireless device. We present a proof of concept using the Bluetooth 4.0 Low Energy, or BLE, standard widely available on smart phones and mobile devices. Our prototype backscatter tag produces three-channel bandpass frequency shift keying (FSK) packets at 1 Mbps that are indistinguishable from conventional BLE advertising packets. An unmodified Apple iPad is shown to correctly receive and display these packets at a range of over 9.4 m using its existing iOS Bluetooth stack with no changes whatsoever. We create all three BLE channels by backscattering a single incident CW carrier using a novel combination of fundamentalmode and harmonic-mode backscatter subcarrier modulation, with two of the band-pass channels generated by the fundamental mode and one of the band-pass channels generated by the second harmonic mode. The backscatter modulator consumes only 28.4 pJ/bit, compared with over 10 nJ/bit for conventional BLE transmitters. 
The backscatter approach yields over 100X lower energy per bit than a conventional BLE transmitter, while retaining compatibility with billions of existing Bluetooth enabled smartphones and mobile devices. <s> BIB009 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> We present BackFi, a novel communication system that enables high throughput, long range communication between very low power backscatter devices and WiFi APs using ambient WiFi transmissions as the excitation signal. Specifically, we show that it is possible to design devices and WiFi APs such that the WiFi AP in the process of transmitting data to normal WiFi clients can decode backscatter signals which the devices generate by modulating information on to the ambient WiFi transmission. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 m and 1 Mbps at a range of 5 meters. Such performance is an order to three orders of magnitude better than the best known prior WiFi backscatter system [27,25]. BackFi design is energy efficient, as it relies on backscattering alone and needs insignificant power, hence the energy consumed per bit is small. <s> BIB010 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> This study introduces a 5.8 GHz RF-powered transceiver that includes a transmitter using an IF-based quadrature backscattering (IFQB) technique. The IFQB transmitter can produce a quadrature modulated signal without active RF circuits, such as PLL, local generation, and local distribution circuits. Thus, the IFQB technique can significantly reduce the power consumption while achieving denser constellations by the quadrature modulation. The RF-powered transceiver consists of the IFQB transmitter, an OOK receiver, and a power management unit (PMU) for RF powering. The transmitter and receiver operate under a 0.6 V power supply provided by the PMU to further reduce the power consumption. We fabricated a prototype RF-powered transceiver using a 65 nm Si CMOS process to confirm the validity of the proposed technique. During the measurements, the transmitter achieved 2.5 Mb/s with a 32-QAM modulation while consuming 113 µW. In addition, a wireless temperature sensing demonstration was conducted using the prototype sensor node with the presented RF-powered transceiver. <s> BIB011 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Scatter radio is a promising enabling technology for ultra-low power consumption and low monetary cost, largescale wireless sensor networks. The two most prominent scatter radio architectures, namely the monostatic and the bistatic, are compared. Comparison metrics include bit error probability under maximum-likelihood detection for the single-user case and outage probability for the multi-user case (including tight bounds). This work concretely shows that the bistatic architecture improves coverage and system reliability. Utilizing this fact, a bistatic, digital scatter radio sensor network, perhaps the first of its kind, using frequency-shift keying (FSK) modulation and access, is implemented and demonstrated. <s> BIB012 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. 
Modulation and Demodulation <s> With low monetary cost and minimal energy consumption, communications by means of reflection and scatter radio have emerged as key enabler for low-cost, large-scale and dense ubiquitous wireless sensor network applications. This work maximizes scatter radio communication range by (a) proposing a novel coherent receiver of frequency-shift keying (FSK) modulation for the bistatic scatter radio channel (i.e., carrier emitter and receiver are dislocated) and (b) employing specific short block-length cyclic error-correcting codes. Despite the presence of three unknown channel links due to the bistatic setup and multiple unknown scatter radio-related parameters, the proposed receiver vastly improves BER performance compared to state-of-the-art bistatic scatter radio receivers. Experimental corroborating results are offered, with a commodity software-defined radio (SDR) reader, a custom scatter radio tag and omnidirectional antennas. Tag-to-reader ranges up to 150 meters are reported with as little as 20 milliWatt transmission power, offering range extension of approximately 10 additional meters compared to state-of-the-art bistatic receivers. <s> BIB013 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Devices that harvest power from radio-frequency (RF) signals are generally referred to as RF-powered devices. One emerging technology that enables RF-powered devices to communicate with others is ambient backscatter. In this paper, we study the problem of signal detection and analyse the uplink bit error rate (BER) performance for RF-powered devices utilizing ambient backscatter. Specifically, we build up a theoretical model for a communication system which consists of one reader and one tag. The tag employs ambient backscatter to communicate with the reader. Next we design an optimal detector which can minimize the BER and find the closed-form expression for the detection threshold. Noting that the optimal detector cannot result in equal BERs in detect “0” or “1”, therefore we design another detector that can achieve the same error probability in detecting “0” with that in detecting “1”. Moreover, we analyse the BER performance for both detectors. Finally, simulations are provided to corroborate the proposed studies. <s> BIB014 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> We present HitchHike, a low power backscatter system that can be deployed entirely using commodity WiFi infrastructure. With HitchHike, a low power tag reflects existing 802.11b transmissions from a commodity WiFi transmitter, and the backscattered signals can then be decoded as a standard WiFi packet by a commodity 802.11b receiver. Hitch-Hike's key invention is a novel technique called codeword translation, which allows a backscatter tag to embed its information on standard 802.11b packets by just translating the original transmitted 802.11b codeword to another valid 802.11b codeword. This allows any 802.11b receiver to decode the backscattered packet, thus opening the doors for widespread deployment of low-power backscatter communication using widely available WiFi infrastructure. We show experimentally that HitchHike can achieve an uplink throughput of up to 300Kbps at ranges of up to 34m and ranges of up to 54m where it achieves a throughput of around 200Kbps. <s> BIB015 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. 
Modulation and Demodulation <s> We introduce inter-technology backscatter, a novel approach that transforms wireless transmissions from one technology to another, on the air. Specifically, we show for the first time that Bluetooth transmissions can be used to create Wi-Fi and ZigBee-compatible signals using backscatter communication. Since Bluetooth, Wi-Fi and ZigBee radios are widely available, this approach enables a backscatter design that works using only commodity devices. ::: We build prototype backscatter hardware using an FPGA and experiment with various Wi-Fi, Bluetooth and ZigBee devices. Our experiments show we can create 2-11 Mbps Wi-Fi standards-compliant signals by backscattering Bluetooth transmissions. To show the generality of our approach, we also demonstrate generation of standards-complaint ZigBee signals by backscattering Bluetooth transmissions. Finally, we build proof-of-concepts for previously infeasible applications including the first contact lens form-factor antenna prototype and an implantable neural recording interface that communicate directly with commodity devices such as smartphones and watches, thus enabling the vision of Internet connected implanted devices. <s> BIB016 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> In this paper, we assess the information-theoretic performance of a point-to-point link that exploits ambient backscattering to support green Internet-of-Thing (IoT) communications. In this framework, an IoT passive device transmits its information by reusing ambient radio-frequency signals emitted by an existing or legacy multicarrier communication system. After introducing the signal model of the relevant communication links, the information-theoretic capacity of both the legacy and backscatter systems is derived. It is found that, under reasonable operative conditions, the legacy system can turn the RF interference arising from backscattering into a form of multipath diversity, which can be exploited to increase its own performance. Moreover, it is shown that, even when it employs simple single-carrier modulation techniques, the backscatter system attains significant data rates over relatively short distances. <s> BIB017 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> In this paper, we investigate a unique phase cancellation problem that occurs in backscatter-based tag-to-tag (BBTT) communication systems. These are systems wherein two or more radio-less devices (tags) communicate with each other purely by reflecting (backscattering) an external signal (whether ambient or intentionally generated). A transmitting tag modulates baseband information onto the reflected signal using backscatter modulation. At the receiving tag, the backscattered signal is superimposed to the external excitation and the resulting signal is demodulated using envelope detection techniques. The relative phase difference between the backscatter signal and the external excitation signal at the receiving tag has a large impact on the envelope of the resulting signal. This often causes a complete cancellation of the baseband information contained in the envelope, and it results in a loss of communication between the two tags. This problem is ubiquitous in all BBTT systems and greatly impacts the reliability, robustness, and communication range of such systems. 
We theoretically analyze and experimentally demonstrate this problem for devices that use both ASK and PSK backscattering. We then present a solution to the problem based on the design of a new backscatter modulator for tags that enables multiphase backscattering. We also propose a new combination method that can further enhance the detection performance of BBTT systems. We examine the performance of the proposed techniques through theoretical analysis, computer simulations, and laboratory experiments with a prototype tag that we have developed. <s> BIB018 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> The Internet-of-Things (IoT) is an emerging concept of network connectivity anytime and anywhere for billions of everyday objects, which has recently attracted tremendous attention from both the industry and academia. The rapid growth of IoT has been driven by recent advancements in consumer electronics, wireless network densification, 5G communication technologies, and cloud-computing enabled big-data analytics. One of the key challenges for IoT is the limited network lifetime due to massive IoT devices being powered by batteries with finite capacities. The low-power and low-complexity backscatter communications (BackCom), which simply relies on passive reflecting and modulation an incident radio-frequency (RF) wave, has emerged to be a promising technology for tackling this challenge. However, the contemporary BackCom has several major limitations, such as short transmission range, low data rate, and uni-directional information transmission. In this article, we present an overview of the next generation BackCom by discussing basic principles, system and network architectures and relevant techniques. Lastly, we describe the IoT application scenarios with the next generation BackCom. <s> BIB019 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Ambient backscatter is an intriguing wireless communication paradigm that allows small devices to compute and communicate by using only the power they harvest from far-field radio-frequency (RF) signals in the air. Ambient backscattering devices reflect RF signals emitted by existing or legacy communications systems, such as digital TV broadcasting, cellular, or Wi-Fi ones, which are designed for transporting information and are not intended for RF energy transfer. This paper deals with mathematical modeling and performance analysis of wireless broadband networks operating over fading channels with ambient backscatter devices. After introducing a detailed signal model of the relevant communication links, we study the influence of physical parameters on the capacity of both legacy and backscatter channels, by considering different receiver architectures. We analytically show that, under reasonable operative conditions, a legacy system—employing an orthogonal frequency-division multiplexing (OFDM) modulation scheme—can turn the RF interference arising from the backscatter process into a form of multipath diversity that can be exploited to increase its performance. Moreover, our analysis proves that a backscatter system—transmitting one symbol per OFDM symbol of the legacy system—can achieve satisfactory data rates over relatively short distances, especially when the intended recipient of the backscatter signal is co-located with the legacy transmitter, i.e., they are on the same device. 
<s> BIB020 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> Ambient backscatter, an emerging communication mechanism where battery-free devices communicate with each other via backscattering ambient radio frequency (RF) signals, has achieved much attention recently because of its desirable application prospects in the Internet of Things. In this paper, we formulate a practical transmission model for an ambient backscatter system, where a tag wishes to send some low-rate messages to a reader with the help of an ambient RF signal source, and then provide fundamental studies of noncoherent symbol detection when all channel state information of the system is unknown. For the first time, a maximum likelihood detector is derived based on the joint probability density function of received signal vectors. In order to ease availability of prior knowledge of the ambient RF signal and reduce computational complexity of the algorithm, we design a joint-energy detector and derive its corresponding detection threshold. The analytical bit error rate (BER) and BER-based outage probability are also obtained in a closed form, which helps with designing system parameters. An estimation method to obtain detection-required parameters and comparison of computational complexity of the detectors are presented as complementary discussions. Simulation results are provided to corroborate theoretical studies. <s> BIB021 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Modulation and Demodulation <s> We study a novel communication technique, ambient backscatter, that utilizes radio frequency signals transmitted from an ambient source as both energy supply and information carrier to enable communications between low-power devices. Different from existing noncoherent schemes, we here design the semi-coherent detection, where channel-related parameters can be obtained from unknown data symbols and a few pilot symbols. In order to obtain a benchmark for overall detection, we first derive a maximum likelihood detector assuming a complex Gaussian ambient source, and the closed-form bit error rate (BER) is computed. To release the dependence on prior knowledge of the ambient source, we next derive a type of robust design, called an energy detector, with the ambient signal being either complex Gaussian or phase shift keying (PSK). The closed-form detection thresholds, analytical BERs, and outage probability are provided correspondingly. Interestingly, the complex Gaussian source would cause an error floor, while the PSK source does not, which brings nontrivial indication of constellation design as opposed to popular Gaussian-embedded literatures. We also propose an effective approach to estimate detection-required parameters rather than channels themselves. Numerical simulations are finally presented to verify theoretical results. <s> BIB022
Modulation is the process of varying one or more properties of the carrier signal, i.e., its frequency, amplitude, or phase. At the backscatter receiver, the original data can be reconstructed by measuring the changes in the phase, amplitude, or frequency of the received signals, i.e., demodulation. Table III summarizes the principles, advantages, and disadvantages of popular modulation schemes in backscatter communications systems, along with references. In general, there are three basic modulation schemes corresponding to changes in the amplitude, frequency, and phase of the carrier signal, i.e., amplitude shift keying (ASK), frequency shift keying (FSK), and phase shift keying (PSK). These modulation schemes are commonly adopted in backscatter communications systems , BIB003 , BIB015 , BIB005 , BIB019 , BIB016 , and a simple mapping from bits to tag reflection states for ASK and PSK is sketched after this discussion. In BBCSs, FSK is more favorable. In particular, as several backscatter transmitters in BBCSs may communicate with the backscatter receiver simultaneously, a multiple access mechanism is needed. Hence, several works choose FSK together with frequency-division multiple access (FDMA) for BBCSs, since FSK fits naturally with FDMA BIB008 , BIB009 , BIB005 , BIB001 . Furthermore, FSK is resilient to noise and signal strength variations . In contrast, PSK is mainly adopted in ambient backscatter systems BIB019 , BIB016 , BIB010 . Specifically, PSK can support high-data-rate transmissions since it conveys data within a small number of radio frequency cycles. Darsena et al. BIB017 compare the performance of PSK and ASK for different values of the angle φ. The numerical results show that quadrature PSK (QPSK) modulation with φ = π/18 achieves the highest capacity, while 4-ASK modulation with φ = π/3 offers the lowest capacity. In BIB018 , a multi-phase backscatter technique is proposed for ASK and PSK to mitigate the phase cancellation problem. The phase cancellation problem occurs when there is a relative phase difference between the carrier and backscattered signals received at the backscatter receiver BIB018 . Through simulation and experimental results, the authors note that the performance of PSK is better than that of ASK. The reason is that phase cancellation can, in theory, be avoided completely if the difference between the phases of the two pairs of impedances takes a value between 0 and π/2. This can be easily achieved by using the PSK modulation scheme. Some other modulation schemes are also adopted in backscatter communications systems. In BIB011 , by using an n-ary quadrature amplitude modulation (QAM) scheme, i.e., 32-QAM, a passive RF-powered backscatter transmitter operating at 5.8 GHz can achieve a 2.5 Mbps data rate at a distance of ten centimeters. Nevertheless, n-QAM modulation is susceptible to noise, thereby resulting in normalized power loss BIB010 . Boyer and Roy BIB004 measure the normalized power loss by analyzing the use of higher dimensional modulation schemes, e.g., 4-QAM or 8-QAM. The numerical results show that the normalized power loss increases significantly from 2-QAM to 4-QAM. Therefore, the authors propose a novel coded modulation scheme that combines QAM with unequal error protection to minimize the normalized power loss. Unequal error protection protects bits at different levels. In particular, bits that are more susceptible to errors are given more protection, and vice versa.
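Returning to the three basic schemes, the following sketch (an illustrative Python example with assumed reflection-coefficient values, sample rate, and carrier frequency; it is not drawn from any specific tag design in the cited works) shows how a backscatter transmitter realizes binary ASK and binary PSK simply by switching its antenna reflection coefficient between two states while an incident carrier is present.

```python
import numpy as np

# Illustrative reflection-coefficient constellations (assumed values, not a specific tag design).
ASK_STATES = {0: 0.0 + 0.0j, 1: 1.0 + 0.0j}    # absorb vs. reflect: amplitude modulation
PSK_STATES = {0: 1.0 + 0.0j, 1: -1.0 + 0.0j}   # two antipodal impedance states: phase modulation

def backscatter_modulate(bits, carrier, states, samples_per_bit):
    """Multiply the incident carrier by the per-bit reflection coefficient."""
    gamma = np.repeat([states[b] for b in bits], samples_per_bit)
    return gamma * carrier[: len(gamma)]

fs, fc, spb = 1_000_000, 100_000, 50            # assumed sample rate, carrier, samples per bit
t = np.arange(10 * spb) / fs
carrier = np.exp(2j * np.pi * fc * t)            # complex representation of the incident carrier
bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
ask_wave = backscatter_modulate(bits, carrier, ASK_STATES, spb)
psk_wave = backscatter_modulate(bits, carrier, PSK_STATES, spb)
print(abs(ask_wave[:3]), np.angle(psk_wave[:3]))
```

For ASK the two states differ in reflection magnitude (absorb versus reflect), whereas for PSK they differ in phase, which is consistent with the observation above that suitably chosen impedance phases let PSK avoid the phase cancellation problem.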
Through the numerical results, Boyer and Roy demonstrate that the normalized power loss is greatly reduced by using the proposed coded modulation scheme. Vannucci et al. BIB001 introduce minimum-shift keying (MSK), i.e., a special case of FSK, to minimize interference at the backscatter receiver. The principle is that signals from different backscatter transmitters are modulated at different sub-carrier frequencies. Through experimental results, the authors demonstrate that the MSK modulation scheme can significantly reduce collisions at the backscatter receiver. At the backscatter receiver, the modulated signals from the backscatter transmitter need to be detected. Many detection mechanisms have been proposed in the literature. Among them, noncoherent detection is the most commonly adopted because of its simplicity and effectiveness BIB006 , BIB020 , BIB021 , . In particular, noncoherent detection does not need to estimate the carrier phase, thereby reducing the complexity of the backscatter receiver circuit. This detection mechanism is suitable for the ASK and FSK modulation schemes. However, noncoherent detection offers only a low bitrate . Therefore, some works adopt coherent detection to increase the bitrate BIB012 , BIB013 . Different from noncoherent detection, coherent detection requires knowledge of the carrier phase, resulting in a more complicated backscatter receiver circuit. PSK modulation usually requires coherent detection since the information is carried in the phase of the signal. It is also important to note that in ambient backscatter communications systems, as the ambient RF signals are indeterminate or even unknown, many existing works assume that the ambient RF signals follow zero-mean circularly symmetric complex Gaussian distributions. Then, maximum-likelihood (ML) detectors can be adopted to detect the modulated signals at the backscatter receiver BIB007 , BIB022 , BIB014 .
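As a concrete illustration of noncoherent detection, the sketch below is a simplified Python model (the signal model, samples per symbol, noise level, and the assumption that the backscatter path adds constructively with the direct path are all illustrative, and the closed-form thresholds derived in the cited detection works are replaced by a crude midpoint rule). It detects the tag's reflect/absorb states from the average received energy per backscatter symbol, with the ambient source modeled as complex Gaussian.

```python
import numpy as np

def energy_detect(rx, samples_per_symbol, threshold=None):
    """Noncoherent energy detector: average |y|^2 per backscatter symbol, then threshold.

    No carrier-phase estimate is needed; only the average received energy per symbol is
    compared against a threshold (here the midpoint of the observed energies).
    """
    n_sym = len(rx) // samples_per_symbol
    energies = np.array([
        np.mean(np.abs(rx[k * samples_per_symbol:(k + 1) * samples_per_symbol]) ** 2)
        for k in range(n_sym)
    ])
    if threshold is None:
        threshold = (energies.min() + energies.max()) / 2
    return (energies > threshold).astype(int)

# Toy ambient-backscatter link: Gaussian ambient source, tag reflects ('1') or absorbs ('0').
rng = np.random.default_rng(0)
spb, bits = 200, rng.integers(0, 2, 20)
n = spb * len(bits)
ambient = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
gamma = np.repeat(0.6 * bits, spb)                    # assumed backscatter channel gain per bit
rx = ambient * (1 + gamma) + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
print("bit errors:", int(np.sum(energy_detect(rx, spb) != bits)))
```

In the cited works, the detection threshold is instead derived in closed form from estimated channel-related parameters, and ML or semi-coherent detectors replace the midpoint rule when the Gaussian model of the ambient source is adopted.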
Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> A new multi-antenna or multiple-input-multiple-output (MIMO) configuration is introduced for RF modulated backscatter in a multipath environment. RF modulated backscatter is used by semipassive RF tags to transmit without a power amplifier, and is therefore appropriate for applications with extreme power constraints such as sensor communications, electronic shelf tags, electronic toll collection, and container identification. Channel information is used by the interrogator transmitter to provide transmit diversity. Multiple reflection antennas are used by the RF tags to reflect according to different data streams. Multiple interrogator receiver antennas provide multi-stream detection capability. Simulations show that range can be extended by a factor of four or more in the pure diversity configuration and that backscatter link capacity can be increased by a factor of ten or more in the spatial multiplexing configuration. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> Modulated backscatter is an RF transmission technique useful for short-range, low-data-rate applications constrained to have extremely low power requirements, such as electronic shelf tags, RF tags, and some sensor applications. The small-scale fading observed on the backscattered signal has deeper fades than a signal from a traditional one-way link of the same range in the same environment because the fading on the backscattered signal is a product of the fading on the off-board generated carrier times the fading on the reflected signal. We present the first published reports of measured cumulative distribution functions for the small-scale fading at 2.4 GHz over this type of link. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> RF modulated backscatter (RFMB), also known as modulated radar cross section or sigma modulation, is a RF transmission technique useful for short-range, low-data-rate applications, such as nonstop toll collection, electronic shelf tags, freight container identification and chassis identification in automobile manufacturing, that are constrained to have extremely low power requirements. The small-scale fading observed on the backscattered signal has deeper fades than the signal from a traditional one-way link of the same range in the same environment because the fading on the backscattered signal is the product of the fading on the off-board-generated carrier times the fading on the reflected signal. This paper considers the continuous wave (CW) type of RFMB, in which the interrogator transmitter and receiver antennas are different. This two-way link also doubles the path loss exponent of the one-way link. This paper presents the cumulative distribution functions for the measured small-scale fading and the measured path loss for short ranges in an indoor environment at 2.4 GHz over this type of link. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> Using new, analytic expressions for the envelope probability density function (PDF) of the M times L times N dyadic backscatter channel, this paper demonstrates that modulating backscatter using multiple RF tag antennas can reduce small-scale fading. 
Several practical design guidelines are presented to exploit this property of the channel and to increase the RF tag range and reliability without a change in the signaling scheme. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> High-frequency backscatter radio systems operate in the dyadic backscatter channel, a pinhole channel whose envelope probability density function and bit-error-rate performance are strongly affected by link envelope correlation - the envelope correlation between the forward and backscatter links of the dyadic backscatter channel. This paper shows that link envelope correlation is most detrimental for backscatter radio systems using co-located reader transmitter and receiver antennas and a single RF transponder antenna. It is shown that using separate reader antennas and multiple RF transponder antennas will decrease link envelope correlation effects and a near maximum bit-error-rate can be achieved with link envelope correlation less than 0.6. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> Backscatter radio systems, including high frequency radio frequency identification (RFID), operate in the dyadic backscatter channel - a two-way pinhole channel that has deeper small-scale fades than that of a conventional one-way channel. This paper shows that pinhole diversity is available in a rich scattering environment caused by modulating backscatter with multiple RF tag antennas - no diversity combining at the reader, channel knowledge, or signaling scheme change is required. Pinhole diversity, along with increased RF tag scattering aperture, can cause up to a 10 dB reduction in the power required to maintain a constant bit-error-rate for an RF tag with two antennas. Through examples, it is shown that this gain results in increased backscatter radio system communication reliability and up to a 78% increase in RF tag operating range. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> Backscatter radio - wireless communication by modulating signals scattered from a transponder (RF tag) - is fundamentally different from conventional radio because it involves two distinct links: the power-up link for powering passive RF tags, and the backscatter link for describing backscatter communication. Because of severe power constraints on the RF tag, a thorough knowledge of the backscatter channel is necessary to maximize backscatter-radio and radio-frequency identification (RFID) system performance. This article presents four link budgets that account for the major propagation mechanisms of the backscatter channel, along with a detailed discussion of each. Use of the link budgets is demonstrated by a practical UHF RFID portal example. The benefits of future 5.8 GHz multi-antenna backscatter-radio systems are shown. An intuitive analogy for understanding the antenna polarization of RF tag systems is presented. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> UHF and microwave backscatter RF-tag systems, including radio frequency identification (RFID) and sensor systems, experience multipath fading that can be more severe than that found in a conventional transmitter-to-receiver channel. 
Previous work has shown that multipath fading can be reduced on the modulated-backscatter signal received from the RF tag in a non-line-of-sight (NLOS) channel if more than one RF-tag antenna is used to modulate backscatter. This paper presents the first multipath fading measurements for backscatter tags using multiple antennas at 5.79 GHz - the center of the 5.725–5.850 GHz, unlicensed industrial, scientific, and medical (ISM) frequency band that may offer reliable operation for future, miniature RF tags. NLOS measurement results are presented as cumulative density functions (CDF) and fade margins for use in backscatter radio link budget analysis and a detailed description of the custom backscatter testbed used to take the measurements is provided. The measurements show that gains are available for multiple-antenna RF tags and results match well with gains predicted using the analytic fading distributions derived previously. <s> BIB008 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> Multipath fading can be heavy for ultra-high frequency (UHF) and microwave backscatter radio systems used in applications such as radio frequency identification (RFID). This paper presents measurements of fading on the modulated signal backscattered from a transponder for backscatter radio systems that use multiple antennas at the interrogator and transponder. Measurements were performed at 5.8 GHz and estimates of the backscatter channel envelope distributions and fade margins were calculated. Results show that multipath fading can be reduced using multiple transponder antennas, bistatic interrogators with widely separated transmitter and receiver antennas, and conventional diversity combining at the interrogator receiver. The measured envelope distribution estimates are compared to previously derived distributions and show good agreement. <s> BIB009 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> Radio Frequency Identification (RFID) is used in high scattering environments where deep fading exists. This makes diversity particularly interesting for this communication scenario. In this paper the potential of using multiple tag antennas for RFID-communication is shown. The bit error rate (BER) and packet error rate (PER) is presented, which include the backscattering answer of a RFID-Tag according to the EPC Class-1 Gen-2 protocol. The rates are regarded in combination with the relevant fading channel models for RFID communication such as the Rician- and the Dyadic Backscatter Channel. It is shown that the possible diversity gain from the signal according to EPC Class-1 Gen-2 protocol is several dB regarding the error rates. It is also shown that this diversity gain increases with the correlation of the forward and backward link and decreases with the usage of a more robust encoding scheme and a correlation between the several transmission paths. Additionally the performance of Multiple Input Single Output (MISO) system with different spatial and forward/backward correlation situations is regarded to have a detailed view on a correlated RFID transmission system using diversity. The performance of this model is verified, using simulations of this propagation system. <s> BIB010 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. 
Backscatter Communications Channels <s> The ambient backscatter technique is a communication technology that uses ambient radio frequency signals to enable battery-free devices to communicate with other device. This paper proposes the ambient backscatter technique using multiple antennas. Since the tag only plays a role of reflecting signals, a signal is transmitted with a power allocation in the case of multiple antennas. At the receiving end, the higher power signal is detected first via the received signal. Next, signal from other antenna is detected by using the first detected signal. Since the backscatter technique generally uses energy detection, it has a low data rate using a single antenna. The proposed method can obtain a higher data rate than conventional methods by using multiple antennas. Also, it can be usefully used for the Internet of Things system, which requires high data rate through the proposed backscatter method. <s> BIB011 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> In this paper, a detection scheme for the uplink transmission of Wi-Fi backscatter system is proposed. In the uplink transmission, a backscatter tag modulates signals from a Wi-Fi access point by backscattering. Since the reflection occurs, the backscattered signal experiences more attenuation than the direct Wi-Fi signals to a Wi-Fi reader. Most recent studies focus on effective detection techniques to increase the coverage of the backscatter tag in the poor signal to interference plus noise ratio environment. This paper proposes an improved detection scheme using multiple antennas at the reader. At the reader, the backscattered signals are received by multiple antennas and thresholds are determined appropriately for improvement of detection performance. Typically the methods using multiple antennas require channel information. However the channel estimation for the uplink of backscatter system is difficult since the backscattered signals are weak. The detection method of this paper does not require the channel information for the uplink. Simulation results show improved performance and possibility for the increase of coverage. <s> BIB012 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Backscatter Communications Channels <s> Ambient backscatter, an emerging communication mechanism where battery-free devices communicate with each other via backscattering ambient radio frequency (RF) signals, has achieved much attention recently because of its desirable application prospects in the Internet of Things. In this paper, we formulate a practical transmission model for an ambient backscatter system, where a tag wishes to send some low-rate messages to a reader with the help of an ambient RF signal source, and then provide fundamental studies of noncoherent symbol detection when all channel state information of the system is unknown. For the first time, a maximum likelihood detector is derived based on the joint probability density function of received signal vectors. In order to ease availability of prior knowledge of the ambient RF signal and reduce computational complexity of the algorithm, we design a joint-energy detector and derive its corresponding detection threshold. The analytical bit error rate (BER) and BER-based outage probability are also obtained in a closed form, which helps with designing system parameters. 
An estimation method to obtain detection-required parameters and comparison of computational complexity of the detectors are presented as complementary discussions. Simulation results are provided to corroborate theoretical studies. <s> BIB013
In the following, we describe general models of backscatter communication channels. Then, theoretical analyses and experimental measurements for the backscatter channels are discussed. 1) Backscatter Communications Channels: a) Basic backscatter channel: A general system model of a backscatter communications system consists of three main components: (i) an RF source, (ii) a backscatter receiver, and (iii) a backscatter transmitter, as shown in Fig. 4(a). Note that the RF source and the backscatter receiver can be in the same device, i.e., a reader, in monostatic systems, or in different devices in BBCSs and ABCSs. To transmit signals to the backscatter receiver, the backscatter transmitter modulates the carrier signals, which are transmitted from the RF source through the forward link. Then, the modulated signals are transmitted to the backscatter receiver through the backscatter link. The modulated signals received at the backscatter receiver are expressed as y(t) = h_b(τ_b; t) * {s(t)[h_f(τ_f; t) * x(t)]} + n(t), where * denotes convolution, y(t) is the received baseband signal, h_b(τ_b; t) is the baseband channel impulse response of the backscatter link, i.e., the link between the backscatter transmitter and the backscatter receiver, h_f(τ_f; t) is the baseband channel impulse response of the forward link, i.e., the link between the RF source and the backscatter transmitter, s(t) is the information signal transmitted from the backscatter transmitter, x(t) is the carrier signal transmitted from the RF source, and n(t) is the noise. b) Dyadic backscatter channel: Recently, a dyadic backscatter channel model was derived to characterize multiple-antenna backscatter channels BIB007 , BIB006 . As shown in Fig. 4(b), multiple antennas are employed, i.e., M antennas at the RF source, L antennas at the backscatter transmitter, and N antennas at the backscatter receiver. Hence, the dyadic backscatter channel is also known as the M × L × N backscatter channel. Similar to the basic backscatter channel, the received signals at the backscatter receiver are expressed as y(t) = H_b(τ_b; t) * {S(t)[H_f(τ_f; t) * x(t)]} + n(t) BIB007 , BIB006 , where y(t) is an N × 1 vector of received complex baseband signals, H_b(τ_b; t) is the N × L complex baseband channel impulse response matrix of the backscatter link, and H_f(τ_f; t) is the L × M complex baseband channel impulse response matrix of the forward link. S(t) is the backscatter transmitter's narrowband L × L signaling matrix, x(t) is an M × 1 vector of the signals transmitted from the RF source antennas, and n(t) is an N × 1 vector of noise components. The term dyadic represents the two-fold nature of the two-way channel and the matrix form of the modulated signals. In BIB001 , this channel is investigated in the context of semi-passive backscatter transmitters to achieve diversity and spatial multiplexing. The authors demonstrate that by using multiple antennas at both the backscatter transmitter and the backscatter receiver, the communication range is significantly extended. The reason is that in the M × L × N backscatter channel, small-scale fading effects can be reduced BIB001 , BIB004 , thereby improving the performance of backscatter communications systems BIB005 .
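To make the dyadic channel model concrete, the short sketch below simulates the M × L × N received signal under flat (frequency-nonselective) fading, in which the convolutions above reduce to matrix products. It is an illustrative toy model only: the antenna counts, the Rayleigh fading statistics, and the diagonal BPSK signaling matrix are assumptions, not parameters taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

M, L, N = 1, 2, 2          # antennas at RF source, backscatter transmitter, receiver
num_symbols = 10_000
snr_db = 15.0

# Flat Rayleigh fading: forward link H_f (L x M), backscatter link H_b (N x L)
H_f = (rng.normal(size=(L, M)) + 1j * rng.normal(size=(L, M))) / np.sqrt(2)
H_b = (rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L))) / np.sqrt(2)

# Unmodulated carrier from the RF source (unit amplitude on each antenna)
x = np.ones((M, num_symbols), dtype=complex)

# Tag signaling: each tag antenna toggles its reflection coefficient (BPSK),
# i.e., a diagonal L x L signaling matrix S(t) per symbol (an assumption).
bits = rng.integers(0, 2, size=(L, num_symbols))
s = 2 * bits - 1                                   # +/-1 reflection states

# y(t) = H_b S(t) H_f x(t) + n(t)  in matrix form for flat fading
incident = H_f @ x                                 # L x num_symbols
backscattered = s * incident                       # element-wise = diagonal S(t)
noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
noise = noise_std * (rng.normal(size=(N, num_symbols))
                     + 1j * rng.normal(size=(N, num_symbols)))
y = H_b @ backscattered + noise                    # N x num_symbols

print("Average received power per antenna:", np.mean(np.abs(y) ** 2, axis=1))
```

With L = 2, the two tag antennas see partly independent forward and backscatter fades, which is the source of the diversity gains discussed above.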
c) Link budgets for backscatter channels: In a backscatter communications system, there are two major link budgets, i.e., the forward link budget and the backscatter link budget, that affect the performance of the system (Fig. 4). In particular, the forward link budget is defined as the amount of power received by the backscatter transmitter, and the backscatter link budget is the amount of power received by the backscatter receiver. The forward link budget is calculated as P_t = (P_T G_T G_t λ^2 X τ) / ((4π r_f)^2 Θ B F_p), where P_t is the power coupled into the backscatter transmitter, P_T is the transmit power of the RF source, G_T and G_t are the antenna gains of the RF source and the backscatter transmitter, respectively, λ is the carrier wavelength, X is the polarization mismatch, τ is the power transmission coefficient, r_f is the distance between the RF source and the backscatter transmitter, Θ is the backscatter transmitter antenna's on-object gain penalty, B is the path blockage loss, and F_p is the forward link fade margin. The backscatter link budget is calculated as P_r = (P_T G_T G_t^2 G_r λ^4 M X_f X_b) / ((4π)^4 r_f^2 r_b^2 Θ^2 B_f B_b F), where P_r is the power received by the backscatter receiver, G_r is the antenna gain of the backscatter receiver, M is the modulation factor, r_b is the distance between the backscatter transmitter and the backscatter receiver, X_f and X_b are the forward link and backscatter link polarization mismatches, respectively, B_f and B_b are the forward link and backscatter link path blockage losses, respectively, and F is the backscatter link fade margin. The link budgets take different forms depending on the configuration, i.e., MBCSs, BBCSs, or ABCSs; however, the details are beyond the scope of this survey and can be found in the cited literature.
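As a quick numerical illustration, the sketch below evaluates both budgets for one set of example parameters. All numerical values (frequency, antenna gains, distances, and loss factors) are illustrative assumptions rather than measurements from the cited works, and the functions implement the expressions as reconstructed above.

```python
import math

def forward_link_budget_w(P_T, G_T, G_t, wavelength, X, tau, r_f, theta, B, F_p):
    """Power coupled into the backscatter transmitter (watts)."""
    return (P_T * G_T * G_t * wavelength**2 * X * tau) / ((4 * math.pi * r_f)**2 * theta * B * F_p)

def backscatter_link_budget_w(P_T, G_T, G_t, G_r, wavelength, M, X_f, X_b,
                              r_f, r_b, theta, B_f, B_b, F):
    """Power received by the backscatter receiver (watts)."""
    num = P_T * G_T * G_t**2 * G_r * wavelength**4 * M * X_f * X_b
    den = (4 * math.pi)**4 * r_f**2 * r_b**2 * theta**2 * B_f * B_b * F
    return num / den

# Example (illustrative) parameters: 868 MHz carrier, 13 dBm emitter power,
# unity-gain antennas, no blockage, modest fade margins.
c = 3e8
wavelength = c / 868e6
P_T = 10 ** ((13 - 30) / 10)          # 13 dBm -> watts
params = dict(P_T=P_T, G_T=1.0, G_t=1.0, wavelength=wavelength)

P_t = forward_link_budget_w(**params, X=1.0, tau=0.5, r_f=3.0, theta=1.0, B=1.0, F_p=2.0)
P_r = backscatter_link_budget_w(**params, G_r=1.0, M=0.25, X_f=1.0, X_b=1.0,
                                r_f=3.0, r_b=100.0, theta=1.0, B_f=1.0, B_b=1.0, F=2.0)

print(f"Forward link: {10*math.log10(P_t*1e3):.1f} dBm coupled into the tag")
print(f"Backscatter link: {10*math.log10(P_r*1e3):.1f} dBm at the receiver")
```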
2) Theoretical Analyses and Experimental Measurements: Based on the above models, many works focus on measuring and evaluating the performance of backscatter channels. In BIB007 , by adopting different antenna materials, e.g., cardboard sheet, aluminum slab, or pine plywood, under the three configurations, the performance of the backscatter communications system is measured in terms of the link budgets. In particular, the authors demonstrate that reducing the antenna impedance results in a small power transmission coefficient that may prevent the backscatter transmitter from turning on. It is also shown that object attachment and multi-path fading may have significant effects on the performance of the system in terms of the communication range and bitrate between the backscatter transmitter and the backscatter receiver. The authors suggest that using multiple antennas operating at high frequencies provides many benefits, such as increasing antenna gain and object immunity and reducing small-scale fading, to facilitate backscatter propagation. In BIB003 and BIB002 , the path loss and small-scale fading of backscatter communications systems are extensively investigated in an indoor environment. The authors demonstrate that the small-scale fading of the backscatter channel can be modeled as two uncorrelated traditional one-way fades, and that the path loss of the backscatter channel is twice that of the one-way channel. Multiple-antenna backscatter channels are investigated in BIB005 and [100]-BIB008 . By using cumulative distribution functions to determine the multi-path fading of backscatter channels, Griffin and Durgin BIB009 , BIB008 demonstrate that multi-path fading on the modulated backscatter signals can be up to 20 dB and 40 dB for line-of-sight and non-line-of-sight backscatter channels, respectively. However, this multi-path fading can be significantly reduced by using multiple antennas at the backscatter transmitter to modulate data BIB008 . Furthermore, Griffin and Durgin suggest that the dyadic backscatter channel with two antennas at the backscatter transmitter can improve the reliability of the system and increase the communication range by 78% at a bit-error rate (BER) of 10^-4 compared with basic backscatter channels. Another factor that needs to be considered in the link budget is the link envelope correlation. In particular, the link envelope correlation may have negative effects on the performance of the system by coupling fading in the forward and backscatter links even if fading in each link is uncorrelated. Griffin and Durgin BIB005 adopt probability density functions to analyze the link envelope correlation of the dyadic backscatter channel. The theoretical results show that using multiple antennas at the backscatter transmitter can reduce the link envelope correlation effect, especially for systems in which the RF source and the backscatter receiver are separated, i.e., BBCSs and ABCSs. Different from all the aforementioned works, some other works focus on measuring and analyzing the BER of backscatter communications BIB011 . Table IV shows a summary of BER versus SNR in different system setups. Obviously, many factors can affect the BER performance, such as antenna configurations, detectors, channel coding, and modulation schemes. In general, using multiple antennas at the backscatter transmitter to modulate data can significantly improve the BER performance. For example, in BIB012 , by using 8 antennas at the backscatter transmitter, a BER of 10^-5 can be achieved at 50 dB of SNR. However, this may increase the complexity of the backscatter transmitter. Thus, the BER at the backscatter receiver can also be reduced by using novel channel coding, modulation, and detection schemes. For example, in BIB013 , by adopting 8-PSK modulation and a noncoherent detector, the authors can reduce the BER to 10^-4 at 20 dB of SNR. BIB010 achieves a BER of 10^-4 at 5.3 dB of SNR. In this section, we have provided the principles of modulated backscatter with regard to the fundamentals, antenna design, channel coding and modulation schemes, as well as backscatter channel models. In the following, we review various designs and techniques developed for BBCSs and ABCSs.
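Before turning to those designs, the sketch below shows how BER-versus-SNR figures of the kind summarized in Table IV can be generated by Monte Carlo simulation for a deliberately simple scheme: on-off keying recovered by a noncoherent energy detector with a naive mid-point threshold. The scheme, threshold rule, and AWGN-only channel are illustrative assumptions and do not reproduce any specific detector from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_energy_detector(snr_db, num_bits=200_000):
    """Monte Carlo BER of OOK with a noncoherent energy detector (toy AWGN model)."""
    bits = rng.integers(0, 2, num_bits)
    amplitude = np.sqrt(2 * 10 ** (snr_db / 10))      # average symbol SNR set by snr_db
    noise = (rng.normal(size=num_bits) + 1j * rng.normal(size=num_bits)) / np.sqrt(2)
    received = bits * amplitude + noise
    energy = np.abs(received) ** 2
    threshold = (amplitude ** 2) / 2                   # naive mid-point energy threshold
    detected = (energy > threshold).astype(int)
    return np.mean(detected != bits)

for snr_db in range(0, 21, 5):
    print(f"SNR = {snr_db:2d} dB  ->  BER ~ {ber_energy_detector(snr_db):.2e}")
```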
Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> This work examines the idea of dislocating the carrier transmission from the tag-modulated carrier reception, i.e. bi-static rather than mono-static backscatter radio. In that way, more than one carrier transmitters can be distributed in a given geographical area and illuminate a set of RF tags/sensors that modulate and scatter the received carrier towards a single software-defined receiver. The increased number of carrier transmitters and their distributed nature assists tags to be potentially located closer to one carrier transmitter and thus, improves the power of the scattered signals towards the receiver. Specifically, this work a) carefully derives near-optimal detectors for bi-static backscatter radio and on/off keying (OOK) tag modulation (which is widely used in commercial tags), b) analytically calculates their bit error rate (BER) performance, and c) experimentally tests them in practice with a custom bistatic backscatter radio link. As a collateral dividend, it is shown that the non-linear processing of the proposed receivers requires certain attention on the utilized tag design principles, commonly overlooked in the literature, validating recently reported theoretical results on the microwave domain. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> For applications that require large numbers of wireless sensors spread in a field, backscatter radio can be utilized to minimize the monetary and energy cost of each sensor. Commercial backscatter systems such as those in radio frequency identification (RFID), utilize modulation designed for the bandwidth limited regime, and require medium access control (MAC) protocols for multiple access. High tag/sensor bitrate and monostatic reader architectures result in communication range reduction. In sharp contrast, sensing applications typically require the opposite: extended communication ranges that could be achieved with bitrate reduction and bistatic reader architectures. This work presents non-coherent frequency shift keying (FSK) for bistatic backscatter radio; FSK is appropriate for the power limited regime and also allows many RF tags/sensors to convey information to a central reader simultaneously with simple frequency division multiplexing (FDM). However, classic non-coherent FSK receivers are not directly applicable in bistatic backscatter radio. This work a) carefully derives the complete signal model for bistatic backscatter radio, b) describes the details of backscatter modulation with emphasis on FSK and its corresponding receiver, c) proposes techniques to overcome the difficulties introduced by the utilization of bistatic architectures, such as the carrier frequency offset (CFO), and d) presents bit error rate (BER) performance for the proposed receiver and carrier recovery techniques. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> Environmental variables monitoring with wireless sensor networks (WSNs) is invaluable for precision agriculture applications. 
However, the effectiveness of existing low-power, conventional (e.g., ZigBee-type) radios in large-scale deployments is limited by power consumption, cost, and complexity constraints, while the existing WSN solutions employing nonconventional, scatter-radio principles have been restricted to communication ranges of up to a few meters. In this paper, the development of a novel analog scatter-radio WSN is presented, that employs semipassive sensor/tags in bistatic topology (i.e., carrier emitter placed in a different location from the reader), consuming <;1 mW of power, with communication range exceeding 100 m. The experimental results indicate that the multipoint surface fitting calibration, in conjunction with the employed two-phase filtering process, both provide a mean absolute error of 1.9% environmental relative humidity for a temperature range of 10 °C-50 °C. In addition, the energy consumption per measurement of the proposed environmental monitoring approach can be lower than that of conventional radio WSNs. Finally, the proposed approach operational characteristics are presented through a real-world network deployment in a tomato greenhouse. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> Scatter radio achieves communication by reflection and requires low-cost and low-power RF front-ends. However, its use in wireless sensor networks (WSNs) is limited, since commercial scatter radio (e.g. RFID) offers short ranges of a few tens of meters. This work redesigns scatter radio systems and maximizes range through non-classic bistatic architectures: the carrier emitter is detached from the reader. It is shown that conventional radio receivers may show a potential 3dB performance loss, since they do not exploit the correct signal model for scatter radio links. Receivers for on-off-keying (OOK) and frequency-shift keying (FSK) that overcome the frequency offset between the carrier emitter and the reader are presented. Additionally, non-coherent designs are also offered. This work emphasizes that sensor tag design should accompany receiver design. Impact of important parameters such as the antenna structural mode are presented through bit error rate (BER) results. Experimental measurements corroborate the long-range ability of bistatic radio; ranges of up to 130 meters with 20 milliwatts of carrier power are experimentally demonstrated, with commodity software radio and no directional antennas. Therefore, bistatic scatter radio may be viewed as a key enabling technology for large-scale, low-cost and low-power WSNs. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> Backscatter radio communication has become a newly emerging technique for low-rate, low-power and large-scale wireless sensor networks. As this promising technology enables a long-range communication for sensors with low power in a distributed area, it is desirable to support wireless powered communication networks (WPCNs) that experience doubly near-far problem. In a backscatter radio based WPCN, users harvest energy from both the signal broadcast by the hybrid access point and the carrier signal transmitted by the carrier emitter in the downlink and transmit their own information in a passive way via the reflection of the carrier signal using frequency-shift keying modulation in the uplink. 
We characterize the energy-free condition and the signal-to-noise ratio (SNR) outage zone in a backscatter radio based WPCN. Numerical results demonstrate that the backscatter radio based WPCN achieves an increased long-range coverage and a diminished SNR outage zone compared to the active radio based WPCNs. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> With low monetary cost and minimal energy consumption, communications by means of reflection and scatter radio have emerged as key enabler for low-cost, large-scale and dense ubiquitous wireless sensor network applications. This work maximizes scatter radio communication range by (a) proposing a novel coherent receiver of frequency-shift keying (FSK) modulation for the bistatic scatter radio channel (i.e., carrier emitter and receiver are dislocated) and (b) employing specific short block-length cyclic error-correcting codes. Despite the presence of three unknown channel links due to the bistatic setup and multiple unknown scatter radio-related parameters, the proposed receiver vastly improves BER performance compared to state-of-the-art bistatic scatter radio receivers. Experimental corroborating results are offered, with a commodity software-defined radio (SDR) reader, a custom scatter radio tag and omnidirectional antennas. Tag-to-reader ranges up to 150 meters are reported with as little as 20 milliWatt transmission power, offering range extension of approximately 10 additional meters compared to state-of-the-art bistatic receivers. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> Electric potential (EP) signals are produced in plants through intracellular processes, in response to external stimuli (e.g., watering, mechanical stress, light, and acquisition of nutrients). However, wireless transmission of a massive amount of biologic EP signals (from one or multiple plants) is hindered by existing battery-operated wireless technology and increased associated monetary cost. In this paper, a self-powered batteryless EP wireless sensor is presented that harvests near-maximum energy from the plant itself and transmits the EP signal tens of meters away with a single switch, based on inherently low-cost and low-power bistatic scatter radio principles. The experimental results confirm the ability of the proposed wireless plant sensor to achieve a fully autonomous operation by harvesting the energy generated by the plant itself. In addition, EP signals experimentally acquired by the proposed wireless sensor from multiple plants have been processed using nonnegative matrix factorization, demonstrating a strong correlation with environmental light irradiation intensity and plant watering. The proposed low-cost batteryless plant-as-sensor-and-battery instrumentation approach is a first but solid step toward large-scale electrophysiology studies of important socioeconomic impact in ecology, plant biology, and precision agriculture. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> This paper studies whether increased ranges of bistatic scatter radio communication are possible, especially when low-cost, embedded receivers, originally designed for conventional radio (and not for scatter radio) are employed. 
Wireless power transmission and bistatic scatter radio are closely related, and thus, this work aims to highlight a new exciting, potentially interesting, key-enabling research direction. It is found that for 13 dBm emitter transmission power, 246 meters scatter radio tag-to-reader distance is possible, with packet error rate (PER) less than 1%, while 268 meters are possible at the expense of increased PER, in the order of 10%. <s> BIB008 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Bistatic Backscatter Communications Systems <s> This paper studies the data scheduling and admission control problem for a backscatter sensor network (BSN). In the network, instead of initiating their own transmissions, the sensors can send their data to the gateway just by switching their antenna impedance and reflecting the received RF signals. As such, we can reduce remarkably the complexity, the power consumption, and the implementation cost of sensor nodes. Different sensors may have different functions, and data collected from each sensor may also have a different status, e.g., urgent or normal, and thus we need to take these factors into account. Therefore, in this paper, we first introduce a system model together with a mechanism in order to address the data collection and scheduling problem in the BSN. We then propose an optimization solution using the Markov decision process framework and a reinforcement learning algorithm based on the linear function approximation method, with the aim of finding the optimal data collection policy for the gateway. Through simulation results, we not only show the efficiency of the proposed solution compared with other baseline policies, but also present the analysis for data admission control policy under different classes of sensors as well as different types of data. <s> BIB009
BBCSs have been introduced for low-cost, low-power, and large-scale wireless networks. Owing to these characteristics, BBCSs have been adopted in many applications such as wireless sensor networks, IoT, and smart agriculture BIB001 , BIB003 , BIB007 . As shown in Fig. 5, there are three major components in the BBCS architecture: (i) backscatter transmitters, (ii) a backscatter receiver, and (iii) a carrier emitter, i.e., an RF source. Unlike the monostatic configuration, e.g., RFID, where the RF source and the backscatter receiver reside on the same device, i.e., the reader BIB002 , in bistatic systems the carrier emitter and the backscatter receiver are physically separated. To transmit data to the backscatter receiver, the carrier emitter first transmits RF signals, which are produced by the RF oscillator, to a backscatter transmitter through the emitter's antenna, which is connected to the power amplifier as shown in Fig. 5. Then, the backscatter transmitter harvests energy from the received signals to support its internal operation functions, such as data sensing and processing. After that, under the instruction of the backscatter transmitter's controller, the carrier signals are modulated and reflected by switching the antenna impedance with different backscatter rates BIB005 through the RF impedance switch (a waveform-level sketch of this impedance-switching operation is given after the list of advantages below). The carrier emitter's signals and the transmitter's reflected signals are received at the antenna of the backscatter receiver and processed by the RF interface. First, the received signals are passed to the filters to recover the reflected signals from the backscatter transmitter. Then, the signals are demodulated by the demodulator and converted to bits by the converter to extract useful data. The extracted data is collected and processed by the micro-controller unit (MCU) inside the backscatter receiver. References BIB005 and BIB001 provide more comprehensive details of the standard models of bistatic systems. BBCSs have many advantages compared with conventional wireless communications systems, as follows: • Low power consumption: As the backscatter transmitters do not need to generate active RF signals, they have lower power consumption than conventional wireless systems. For example, a low-power backscatter transmitter consisting of an HMC190BMS8 RF switch [108] as the front-end and an MSP430 [109] as the controller has been introduced; the power consumptions of the HMC190BMS8 and the MSP430 are as little as 0.3 μW and 7.2 mW, respectively. Moreover, the backscatter transmitter used in BIB007 consumes 10.6 μW for its operations. In BIB008 , a Silicon Laboratories SI1064 ultra-low-power MCU with an integrated transceiver is used as the backscatter receiver, and another SI1064 is configured as the carrier emitter. The SI1064 draws less than 10.7 mA in RX mode and 18 mA in TX mode at 10 dBm of output power. These power consumptions are significantly lower than those of conventional wireless systems' components. For example, a typical commercial RFID reader, i.e., the Speedway Revolution R420 from Impinj, consumes 15 W of power for its operations. Furthermore, in wireless sensor networks, an active RF sensor, namely the N6841A [112] , may need as much as 30 W of power to operate. • Low cost: The components of a BBCS can be very inexpensive. In contrast, the price of an N6841A sensor [112] , which is a typical active RF sensor, is about $18. In RFID systems, an active RFID tag usually costs from $25 up to $100, and a passive 96-bit EPC tag costs only 7 to 15 cents (USD) .
In addition, by using off-the-shelf devices, the prices of the carrier emitter and the backscatter receiver are significantly reduced. In one reported design, a CC2420 radio chip is used as the carrier emitter and a Texas Instruments CC2500 is utilized as the backscatter receiver. Both the CC2420 and the CC2500 can operate at frequencies in the UHF range. The CC2420 and CC2500 cost $3.95 [117] and $1.19, respectively, while an RFID reader costs under $100 for low-frequency models, from $200 to $300 for high-frequency models, and from $500 to $2000 for UHF models [119] . • Scalability: In bistatic systems, as the carrier emitters are separated from the backscatter receiver and deployed close to the backscatter transmitters, the emitter-to-transmitter path loss can be significantly reduced. Therefore, with more carrier emitters in the field and the backscatter transmitters placed around them, the transmission coverage of the system can be extended BIB004 . In addition, as bistatic backscatter radio is suitable for low-bitrate sensing applications BIB006 , each backscatter transmitter occupies a narrow bandwidth. Thus, the number of backscatter transmitters in the system can be increased in the frequency domain BIB002 . Compared with monostatic systems, e.g., RFID, the communication ranges and transmission rates of bistatic systems are usually greater BIB009 . However, their performance is still limited, especially compared with active radio communications systems. This is due to the fact that the backscatter transmitters are battery-less and hardware-constrained devices. Furthermore, important issues such as multiple access and energy management need to be addressed. Therefore, in the following, we review solutions to address the major challenges in bistatic systems.
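The sketch below illustrates, at the waveform level, the impedance-switching operation referenced above: the tag toggles its reflection coefficient with a square wave at a subcarrier frequency, so the backscattered spectrum appears offset from the carrier by that subcarrier, and two different subcarriers can encode FSK data bits. The carrier frequency (shown at a low intermediate frequency for readability), the subcarrier values, and the two reflection states are illustrative assumptions.

```python
import numpy as np

fs = 10e6                     # simulation sample rate (Hz)
f_carrier = 1e6               # carrier shown at a low IF for readability (assumption)
f0, f1 = 50e3, 80e3           # subcarrier frequencies encoding bit 0 / bit 1 (assumption)
bit_duration = 1e-3           # seconds per bit

def backscatter_waveform(bits):
    """Carrier multiplied by a two-state reflection coefficient toggled at f0 or f1."""
    samples_per_bit = int(fs * bit_duration)
    t = np.arange(len(bits) * samples_per_bit) / fs
    carrier = np.cos(2 * np.pi * f_carrier * t)
    # Square wave between the two reflection states (e.g., 0.1 and 0.9)
    sub_freq = np.repeat([f1 if b else f0 for b in bits], samples_per_bit)
    square = np.where(np.sin(2 * np.pi * sub_freq * t) >= 0, 0.9, 0.1)
    return t, carrier * square

t, y = backscatter_waveform([0, 1, 1, 0])
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)
for f in (f_carrier - f1, f_carrier - f0, f_carrier, f_carrier + f0, f_carrier + f1):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f/1e3:8.1f} kHz : relative magnitude {spectrum[idx]:.4f}")
```

The printed lines show energy at the carrier and at carrier ± subcarrier offsets, which is what allows the receiver to filter out the reflected signal from the much stronger carrier.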
Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> This work examines the idea of dislocating the carrier transmission from the tag-modulated carrier reception, i.e. bi-static rather than mono-static backscatter radio. In that way, more than one carrier transmitters can be distributed in a given geographical area and illuminate a set of RF tags/sensors that modulate and scatter the received carrier towards a single software-defined receiver. The increased number of carrier transmitters and their distributed nature assists tags to be potentially located closer to one carrier transmitter and thus, improves the power of the scattered signals towards the receiver. Specifically, this work a) carefully derives near-optimal detectors for bi-static backscatter radio and on/off keying (OOK) tag modulation (which is widely used in commercial tags), b) analytically calculates their bit error rate (BER) performance, and c) experimentally tests them in practice with a custom bistatic backscatter radio link. As a collateral dividend, it is shown that the non-linear processing of the proposed receivers requires certain attention on the utilized tag design principles, commonly overlooked in the literature, validating recently reported theoretical results on the microwave domain. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> For applications that require large numbers of wireless sensors spread in a field, backscatter radio can be utilized to minimize the monetary and energy cost of each sensor. Commercial backscatter systems such as those in radio frequency identification (RFID), utilize modulation designed for the bandwidth limited regime, and require medium access control (MAC) protocols for multiple access. High tag/sensor bitrate and monostatic reader architectures result in communication range reduction. In sharp contrast, sensing applications typically require the opposite: extended communication ranges that could be achieved with bitrate reduction and bistatic reader architectures. This work presents non-coherent frequency shift keying (FSK) for bistatic backscatter radio; FSK is appropriate for the power limited regime and also allows many RF tags/sensors to convey information to a central reader simultaneously with simple frequency division multiplexing (FDM). However, classic non-coherent FSK receivers are not directly applicable in bistatic backscatter radio. This work a) carefully derives the complete signal model for bistatic backscatter radio, b) describes the details of backscatter modulation with emphasis on FSK and its corresponding receiver, c) proposes techniques to overcome the difficulties introduced by the utilization of bistatic architectures, such as the carrier frequency offset (CFO), and d) presents bit error rate (BER) performance for the proposed receiver and carrier recovery techniques. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> Environmental variables monitoring with wireless sensor networks (WSNs) is invaluable for precision agriculture applications. 
However, the effectiveness of existing low-power, conventional (e.g., ZigBee-type) radios in large-scale deployments is limited by power consumption, cost, and complexity constraints, while the existing WSN solutions employing nonconventional, scatter-radio principles have been restricted to communication ranges of up to a few meters. In this paper, the development of a novel analog scatter-radio WSN is presented, that employs semipassive sensor/tags in bistatic topology (i.e., carrier emitter placed in a different location from the reader), consuming <;1 mW of power, with communication range exceeding 100 m. The experimental results indicate that the multipoint surface fitting calibration, in conjunction with the employed two-phase filtering process, both provide a mean absolute error of 1.9% environmental relative humidity for a temperature range of 10 °C-50 °C. In addition, the energy consumption per measurement of the proposed environmental monitoring approach can be lower than that of conventional radio WSNs. Finally, the proposed approach operational characteristics are presented through a real-world network deployment in a tomato greenhouse. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> Scatter radio achieves communication by reflection and requires low-cost and low-power RF front-ends. However, its use in wireless sensor networks (WSNs) is limited, since commercial scatter radio (e.g. RFID) offers short ranges of a few tens of meters. This work redesigns scatter radio systems and maximizes range through non-classic bistatic architectures: the carrier emitter is detached from the reader. It is shown that conventional radio receivers may show a potential 3dB performance loss, since they do not exploit the correct signal model for scatter radio links. Receivers for on-off-keying (OOK) and frequency-shift keying (FSK) that overcome the frequency offset between the carrier emitter and the reader are presented. Additionally, non-coherent designs are also offered. This work emphasizes that sensor tag design should accompany receiver design. Impact of important parameters such as the antenna structural mode are presented through bit error rate (BER) results. Experimental measurements corroborate the long-range ability of bistatic radio; ranges of up to 130 meters with 20 milliwatts of carrier power are experimentally demonstrated, with commodity software radio and no directional antennas. Therefore, bistatic scatter radio may be viewed as a key enabling technology for large-scale, low-cost and low-power WSNs. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> This work offers concrete, low-complexity (small codeword length) channel coding for the bistatic scatter radio channel, complementing the uncoded setup of recent work. The theoretical design is experimentally validated with a commodity software-defined radio (SDR) reader; tag-to-reader ranges up to 134 meters are demonstrated with 13 dBm emitter power, while bit error rate (BER) is reduced or range is increased, on the order of 10 additional meters (or more) compared to the uncoded case, with linear encoding at the tag/sensor and simple decoding at the reader. 
Even though designing low-complexity channel coding schemes is a challenging problem, this work offers a concrete solution that could accelerate the adoption of scatter radio for large-scale wireless sensor networks, i.e. backscatter sensor networks. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> The measurement of the soil moisture is critical in agriculture. In this work a joint analog design of wireless transmitter with scatter radio and capacitive sensor for soil moisture is presented. The design is based on a custom microstrip capacitor, exploits bistatic analog scatter radio principles and is able to wirelessly convey soil moisture percentage by mass (% MP) with RMS error of 1.9%, power consumption and communication range on the order of 100 uWatts and 100 meters, respectively. It is tailored for ultra-low cost (5 Euro per sensor) agricultural sensor network applications for soil moisture. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> With low monetary cost and minimal energy consumption, communications by means of reflection and scatter radio have emerged as key enabler for low-cost, large-scale and dense ubiquitous wireless sensor network applications. This work maximizes scatter radio communication range by (a) proposing a novel coherent receiver of frequency-shift keying (FSK) modulation for the bistatic scatter radio channel (i.e., carrier emitter and receiver are dislocated) and (b) employing specific short block-length cyclic error-correcting codes. Despite the presence of three unknown channel links due to the bistatic setup and multiple unknown scatter radio-related parameters, the proposed receiver vastly improves BER performance compared to state-of-the-art bistatic scatter radio receivers. Experimental corroborating results are offered, with a commodity software-defined radio (SDR) reader, a custom scatter radio tag and omnidirectional antennas. Tag-to-reader ranges up to 150 meters are reported with as little as 20 milliWatt transmission power, offering range extension of approximately 10 additional meters compared to state-of-the-art bistatic receivers. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> Backscatter radio communication has become a newly emerging technique for low-rate, low-power and large-scale wireless sensor networks. As this promising technology enables a long-range communication for sensors with low power in a distributed area, it is desirable to support wireless powered communication networks (WPCNs) that experience doubly near-far problem. In a backscatter radio based WPCN, users harvest energy from both the signal broadcast by the hybrid access point and the carrier signal transmitted by the carrier emitter in the downlink and transmit their own information in a passive way via the reflection of the carrier signal using frequency-shift keying modulation in the uplink. We characterize the energy-free condition and the signal-to-noise ratio (SNR) outage zone in a backscatter radio based WPCN. Numerical results demonstrate that the backscatter radio based WPCN achieves an increased long-range coverage and a diminished SNR outage zone compared to the active radio based WPCNs. 
<s> BIB008 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> Scatter radio is a promising enabling technology for ultra-low power consumption and low monetary cost, largescale wireless sensor networks. The two most prominent scatter radio architectures, namely the monostatic and the bistatic, are compared. Comparison metrics include bit error probability under maximum-likelihood detection for the single-user case and outage probability for the multi-user case (including tight bounds). This work concretely shows that the bistatic architecture improves coverage and system reliability. Utilizing this fact, a bistatic, digital scatter radio sensor network, perhaps the first of its kind, using frequency-shift keying (FSK) modulation and access, is implemented and demonstrated. <s> BIB009 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> A low-cost (6 Euro per sensor), low-power (in the order of $200~\mu \text{W}$ per sensor), with high communication range (on the order of 250 m), scatter radio sensor network is presented, for soil moisture monitoring at multiple locations. The proposed network utilizes analog frequency modulation in a bistatic network architecture (i.e., the emitter and reader are not colocated), while the sensors operate simultaneously, using frequency-division multiple access. In contrast to prior art, this paper utilizes an ultralow-cost software-defined radio reader and offers custom microstrip capacitive sensing with simple calibration, as well as modulation pulses for each scatter radio sensor with 50% duty cycle; the latter is necessary for scalable network designs. The overall root mean squared error below 1% is observed, even for the range of 250 m. This is another small (but concrete) step for the adoption of scatter radio technology as a key enabling technology for scalable, large-scale, low-power, and cost environmental sensor networking. <s> BIB010 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> This paper studies whether increased ranges of bistatic scatter radio communication are possible, especially when low-cost, embedded receivers, originally designed for conventional radio (and not for scatter radio) are employed. Wireless power transmission and bistatic scatter radio are closely related, and thus, this work aims to highlight a new exciting, potentially interesting, key-enabling research direction. It is found that for 13 dBm emitter transmission power, 246 meters scatter radio tag-to-reader distance is possible, with packet error rate (PER) less than 1%, while 268 meters are possible at the expense of increased PER, in the order of 10%. <s> BIB011 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Bistatic Backscatter Communications Systems <s> Scatter radio, i.e., communication by means of reflection, has been recently proposed as a promising technology for low-power wireless sensor networks (WSNs). Specifically, this paper offers noncoherent receivers in scatter radio frequency-shift keying, for either channel-coded or uncoded scatter radio reception, in order to eliminate the need for training bits of coherent schemes (for channel estimation) at the packet preamble. 
Noncoherent symbol-by-symbol and sequence detectors based on hybrid composite hypothesis test (HCHT) and generalized likelihood-ratio test, for the uncoded case and noncoherent decoders based on HCHT, for small block-length channel codes, are derived. Performance comparison under Rician, Rayleigh, or no fading, taking into account fixed energy budget per packet is presented. It is shown that the performance gap between coherent and noncoherent reception depends on whether channel codes are employed, the fading conditions (e.g., Rayleigh versus Rician versus no fading), as well as the utilized coding interleaving depth; the choice of one coding scheme over the other depends on the wireless fading parameters and the design choice for extra diversity versus extra power gain. Finally, experimental outdoor results at 13-dBm transmission power corroborate the practicality of the proposed noncoherent detection and decoding techniques for scatter radio WSNs. <s> BIB012
1) Communication Improvement: As mentioned above, bistatic backscatter offers a longer communication range and a higher transmission rate than MBCSs. Nevertheless, the performance needs to be further improved to meet the requirements of future wireless systems and their applications. Kimionis et al. BIB001 propose a backscatter receiver design to increase the communication range of BBCSs. One of the most important findings in this work is that the time-varying carrier frequency offset (CFO) between a carrier emitter and a backscatter receiver can significantly reduce the communication range. The CFO often occurs when the local oscillator at the backscatter receiver does not synchronize with the carrier signals, i.e., oscillator inaccuracy. Thus, the authors eliminate the CFO by passing the received signals through an absolute (magnitude) operator at the backscatter receiver. This operator separates the received signals into their noiseless component and noise. The backscatter receiver then observes the amplitude of the noiseless signals, which takes two distinct values according to the binary modulation performed by the backscatter transmitter, and thus the CFO is removed. Furthermore, near-optimal detectors are adopted in order to improve the BER performance, which also increases the transmitter-to-receiver distance. The experimental results show that the proposed backscatter receiver design can extend the communication range up to 60 meters at 1 kbps and 30 dBm emitter power in an outdoor environment. Kampianakis et al. BIB003 propose a data smoothing technique consisting of two filtering phases to increase the communication range. The first phase adopts a histogram filtering process that calculates the histogram of the collected data and retains the data with the highest occurrence within a certain range. Then, a Savitzky-Golay filtering process is implemented in the second phase to apply least-squares data smoothing to the measurements. The proposed two-phase filtering can significantly reduce errors that may occur in the transmission, and thus the signal-to-noise ratio (SNR) at the backscatter receiver can be increased. The experimental results show that the proposed technique can extend the communication range up to 100 meters at 868 MHz and a 100 kbps bitrate, with the sensor tags consuming less than 1 mW of power. The study in BIB004 introduces a system model for BBCSs taking into account important microwave parameters, such as the CFO, BER, and SNR, which impact the transmitter-to-receiver communication performance. Kimionis et al. BIB004 then design a non-conventional backscatter radio system architecture with a CFO compensation block and noncoherent detectors. It is shown that the proposed architecture can increase the communication range up to 130 meters at 13 dBm emitter power by using the FSK modulation scheme at a 1 kbps bitrate. Alevizos et al. BIB005 indicate that employing channel coding can increase the communication range and the reliability of BBCSs. To do so, the code needs to be simple enough for the power-limited backscatter transmitter and backscatter receiver to process. Thus, the Reed-Muller code is adopted because of its small length and sufficient error correction capability.
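To illustrate how light-weight such encoding can be at the tag, the sketch below implements a first-order Reed-Muller code RM(1,3), which maps 4 information bits to an 8-bit codeword. The specific code parameters used in BIB005 are not stated here; RM(1,3) is chosen purely as a small example of the kind of linear encoding a power-constrained transmitter could perform.

```python
import numpy as np

# Generator matrix of RM(1,3): rows are the all-ones vector and the three
# coordinate functions x1, x2, x3 evaluated over the 8 positions of {0,1}^3.
G = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
], dtype=np.uint8)

def rm13_encode(info_bits):
    """Encode 4 information bits into an 8-bit RM(1,3) codeword (mod-2 arithmetic)."""
    u = np.asarray(info_bits, dtype=np.uint8)
    return (u @ G) % 2

print(rm13_encode([1, 0, 1, 1]))   # -> [1 0 0 1 1 0 0 1]
```

With minimum distance 4, this toy code can correct a single bit error per codeword; combining such short codes with the interleaving discussed next spreads burst errors across many codewords.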
Moreover, the authors show that BBCSs can suffer from deep fades on the carrier-to-transmitter and transmitter-to-receiver links, which cause burst errors and reduce the communication range. To solve this problem, an interleaving technique is employed in conjunction with the block codes. The key idea is that the backscatter transmitter stores a block of codewords and transmits the bits of the block in an interleaved sequence. As a result, burst errors affect bits of different codewords rather than bits of the same codeword. From the experimental results, the transmitter-to-receiver communication range can be extended up to 134 meters with 13 dBm emitter power and a 1 kbps bitrate. However, the interleaving technique incurs delay and requires additional memory at both the backscatter transmitter and the backscatter receiver. Therefore, Fasarakis-Hilliard et al. BIB007 propose a more sophisticated method based on short block-length cyclic channel codes, referred to as interleaved codes, to reduce the memory requirements. Specifically, the authors extend the work in BIB004 by developing coherent detectors that estimate unknown channel and microwave parameters such as the CFO. Both the simulation and experimental results show that the proposed solution can achieve a communication range of 150 meters with 20 mW emitter power and a 1 kbps bitrate. Varshney et al. introduce a backscatter receiver, named LOREA, to increase the communication range of BBCSs. To achieve this, LOREA decouples the backscatter receiver from the carrier emitter in the frequency and space domains by (i) using different frequencies for the carrier emitter and the backscatter receiver and (ii) locating them in different devices. Therefore, the self-interference can be significantly reduced. In addition, LOREA uses the 2.4 GHz Industrial, Scientific, and Medical (ISM) band for transmissions, which enables it to utilize the signals from other devices, such as sensor nodes and Wi-Fi devices, as carrier signals. By applying LOREA, the communication range can be extended to 225 meters at 26 dBm emitter power and a 2.6 kbps bitrate in line-of-sight (LOS) scenarios. Daskalakis et al. BIB010 indicate that the backscattered power at the backscatter transmitter is increased when the duty cycle of the switching waveform approaches 50% BIB006 . The authors also note that square waves with a duty cycle different from 50% occupy additional bandwidth. To achieve a 50% duty cycle, a backscatter sensor transmitter with an analog switch and a resistor (R_2) in the circuit is proposed, giving a duty cycle of R_2/(2R_2) = 50%. The experiments demonstrate that the communication range is significantly extended, up to 250 meters, with a sampling rate of 1 MHz and 13 dBm carrier emitter power. Vougioukas et al. BIB011 introduce a backscatter transmitter circuit based on the Arduino development board, which uses a bit vector to form a packet consisting of an 8-byte preamble, a 4-byte sync word, and 6 bytes of data. This packet is modulated with BFSK and sent to the backscatter receiver. By selecting a long stream of preamble/sync bytes, the authors can minimize the effects of noise at the backscatter receiver. This leads to a reduction of the packet error rate, and thus the communication range can be increased. To extract the data in the packets from the backscatter transmitters, the authors deploy a Silicon Laboratories SI1064 ultra-low-power MCU and an embedded TI 1101 at the backscatter receiver. The SI1064 MCU is integrated with a transceiver and configured to receive the BFSK-modulated signals reflected from the backscatter transmitters, and the TI 1101 is used to verify the reception of these signals.
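The sketch below mirrors the packet structure just described (8-byte preamble, 4-byte sync word, 6-byte payload) and maps the resulting bit vector to BFSK subcarrier frequencies used to toggle the RF switch. The preamble/sync byte values and the two subcarrier frequencies are assumptions for illustration; only the field sizes and the 1.2 kbps bitrate come from the text.

```python
import numpy as np

PREAMBLE = bytes([0xAA] * 8)             # 8-byte preamble, alternating bits (assumed value)
SYNC = bytes([0x2D, 0xD4, 0x2D, 0xD4])   # 4-byte sync word (assumed value)
F0, F1 = 125e3, 250e3                    # BFSK subcarrier frequencies (assumed)
BITRATE = 1200                           # 1.2 kbps, as in the described prototype

def build_packet(payload: bytes) -> np.ndarray:
    """Return the packet as a bit vector: preamble | sync | 6-byte payload."""
    if len(payload) != 6:
        raise ValueError("payload must be exactly 6 bytes")
    frame = PREAMBLE + SYNC + payload
    return np.unpackbits(np.frombuffer(frame, dtype=np.uint8))

def bfsk_schedule(bits: np.ndarray):
    """Map each bit to the (subcarrier frequency, duration) driving the RF switch."""
    return [(F1 if b else F0, 1.0 / BITRATE) for b in bits]

bits = build_packet(b"SENSOR")
schedule = bfsk_schedule(bits)
print(f"{len(bits)} bits, first 5 symbols: {schedule[:5]}")
```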
Then, a prototype is implemented based on the topology shown in Fig. 6(a). The carrier emitter produces RF signals at 868 MHz with 13 dBm of power, and the emitter-to-transmitter distance d_et is 3 meters. The backscatter transmitter modulates data at 1.2 kbps using FSK modulation. The experimental results show that the communication range d_tr between the backscatter transmitter and the backscatter receiver can be extended up to 268 meters. Fig. 6. Experiment setup for measuring (a) communication range BIB011 , and (b) multiple access BIB003 . 2) Multiple Access: In bistatic systems, the backscatter receiver may receive reflected signals from multiple backscatter transmitters simultaneously. Therefore, controlling the interference/collision among the received signals is a challenge. There are several solutions in the literature to deal with this problem. FSK and on-off keying (OOK) are commonly used modulation schemes in BBCSs. Although FSK requires extra processing for CFO estimation compared with OOK, it outperforms OOK in terms of BER performance BIB004 . Furthermore, FSK and frequency-division multiplexing (FDM) are well suited to BBCSs. With FDM, since the bandwidth reserved for each backscatter transmitter is narrow, many backscatter transmitters can operate simultaneously within a given frequency band. As the sub-carrier frequency reserved for each backscatter transmitter is unique, collisions among the backscatter transmitters are eliminated BIB004 . As a result, a majority of the designs in BIB002 , BIB004 , BIB012 , BIB007 , BIB003 , and BIB010 use FDM as a multiple access scheme. Kampianakis et al. BIB003 introduce an expression to estimate the operating sub-carrier frequency of a backscatter sensor transmitter for environmental (humidity) monitoring applications. This expression is calculated using the values of the resistors and capacitors in the backscatter transmitter circuitry, i.e., a resistor-capacitor network. All backscatter sensor transmitters have different resistor-capacitor networks; therefore, the center frequency and the spectrum band of each backscatter sensor transmitter are unique. By applying this expression, appropriate values of the resistor-capacitor components can be chosen and the frequency for each individual transmitter can be calculated. To demonstrate the efficiency of the FDM scheme, the authors deploy the bistatic backscatter sensor system in a greenhouse based on the topology shown in Fig. 6(b). The system consists of 10 environmental relative humidity backscatter sensor transmitters, all of which utilize different resistor-capacitor components in order to apply the FDM scheme as discussed above. To extend the coverage of the system, two carrier emitters with 20 mW emitter power are used. The experimental results show that the backscatter sensor transmitters are able to communicate with the backscatter receiver in a collision-free manner. Similar to BIB003 , Daskalakis et al. BIB010 also define an expression to estimate the sub-carrier frequency for the backscatter sensor transmitters. However, the authors indicate that outdoor temperature variations affect the circuit operation of the backscatter sensor transmitters, and thus the sub-carrier frequency reserved for each backscatter sensor transmitter may drift. Consequently, in practice, the bandwidth reserved for each backscatter sensor transmitter must be increased, and the number of backscatter sensor transmitters that can operate in a given spectrum band is reduced. It is also noted that the trade-off between scalability, i.e., the number of simultaneously operating backscatter sensor transmitters, and the environmental parameters should also be analyzed for the FDM scheme.
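A back-of-the-envelope sketch of this FDM dimensioning follows: given a usable subcarrier band, a per-tag data bandwidth, and a guard band that grows with the expected temperature-induced frequency drift, it assigns unique subcarriers and counts how many tags fit. All numerical values are illustrative assumptions rather than figures from the cited deployments.

```python
def fdm_plan(band_start_hz, band_stop_hz, tag_bandwidth_hz, drift_hz):
    """Assign unique subcarrier centre frequencies with drift-dependent guard bands."""
    spacing = tag_bandwidth_hz + 2 * drift_hz      # guard band on both sides of each subcarrier
    centres = []
    f = band_start_hz + spacing / 2
    while f + spacing / 2 <= band_stop_hz:
        centres.append(f)
        f += spacing
    return centres

# Illustrative numbers: 20-45 kHz subcarrier band, 200 Hz per-tag bandwidth.
for drift in (0.0, 100.0, 500.0):                  # expected subcarrier drift in Hz
    centres = fdm_plan(20e3, 45e3, 200.0, drift)
    print(f"drift = {drift:5.0f} Hz -> {len(centres):3d} tags, "
          f"first subcarriers (kHz): {[round(c/1e3, 2) for c in centres[:3]]}")
```

Running the sketch shows how quickly the number of collision-free subcarriers shrinks as the assumed drift, and hence the guard band, grows.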
In some cases, multiple BBCSs operate simultaneously at the same location, which can cause serious interference and reduce the performance of the whole network. To address this issue, the authors in BIB008 , BIB009 , and BIB003 adopt time-division multiplexing (TDM) to ensure that in each time frame there is only one active carrier emitter. In a single time frame, the active carrier emitter transmits the carrier signals, based on which a certain backscatter transmitter backscatters its data to the backscatter receiver. Hence, the interference among the emitters and the transmitters in the network is eliminated.
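As a minimal illustration of this TDM coordination, the sketch below builds a round-robin super-frame in which exactly one carrier emitter (and one of its associated tags) is active per time slot; the emitter/tag names and the slot length are hypothetical.

```python
from itertools import cycle

def tdm_schedule(emitters, tags_per_emitter, slot_ms=100, num_slots=6):
    """Round-robin TDM: one active carrier emitter (and one of its tags) per slot."""
    emitter_cycle = cycle(emitters)
    tag_cycles = {e: cycle(tags_per_emitter[e]) for e in emitters}
    schedule = []
    for slot in range(num_slots):
        emitter = next(emitter_cycle)
        tag = next(tag_cycles[emitter])
        schedule.append((slot * slot_ms, emitter, tag))
    return schedule

emitters = ["CE-1", "CE-2"]                        # hypothetical carrier emitters
tags = {"CE-1": ["tag-A", "tag-B"], "CE-2": ["tag-C"]}
for start_ms, emitter, tag in tdm_schedule(emitters, tags):
    print(f"t = {start_ms:4d} ms : {emitter} illuminates {tag}")
```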
Ambient Backscatter Communications: A Contemporary Survey <s> 3) Energy Consumption Reduction: <s> We introduce inter-technology backscatter, a novel approach that transforms wireless transmissions from one technology to another, on the air. Specifically, we show for the first time that Bluetooth transmissions can be used to create Wi-Fi and ZigBee-compatible signals using backscatter communication. Since Bluetooth, Wi-Fi and ZigBee radios are widely available, this approach enables a backscatter design that works using only commodity devices. ::: We build prototype backscatter hardware using an FPGA and experiment with various Wi-Fi, Bluetooth and ZigBee devices. Our experiments show we can create 2-11 Mbps Wi-Fi standards-compliant signals by backscattering Bluetooth transmissions. To show the generality of our approach, we also demonstrate generation of standards-complaint ZigBee signals by backscattering Bluetooth transmissions. Finally, we build proof-of-concepts for previously infeasible applications including the first contact lens form-factor antenna prototype and an implantable neural recording interface that communicate directly with commodity devices such as smartphones and watches, thus enabling the vision of Internet connected implanted devices. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> 3) Energy Consumption Reduction: <s> Electric potential (EP) signals are produced in plants through intracellular processes, in response to external stimuli (e.g., watering, mechanical stress, light, and acquisition of nutrients). However, wireless transmission of a massive amount of biologic EP signals (from one or multiple plants) is hindered by existing battery-operated wireless technology and increased associated monetary cost. In this paper, a self-powered batteryless EP wireless sensor is presented that harvests near-maximum energy from the plant itself and transmits the EP signal tens of meters away with a single switch, based on inherently low-cost and low-power bistatic scatter radio principles. The experimental results confirm the ability of the proposed wireless plant sensor to achieve a fully autonomous operation by harvesting the energy generated by the plant itself. In addition, EP signals experimentally acquired by the proposed wireless sensor from multiple plants have been processed using nonnegative matrix factorization, demonstrating a strong correlation with environmental light irradiation intensity and plant watering. The proposed low-cost batteryless plant-as-sensor-and-battery instrumentation approach is a first but solid step toward large-scale electrophysiology studies of important socioeconomic impact in ecology, plant biology, and precision agriculture. <s> BIB002
The backscatter transmitters in BBCSs use energy harvested from the surrounding environment for their internal operations, such as modulating and transmitting. However, the amount of harvested energy is typically small. Therefore, several designs have been proposed to use energy efficiently in BBCSs. Iyer et al. BIB001 design a backscatter transmitter in order to reduce the power consumption of BBCSs. The key idea is to use 65 nm low-power complementary metal-oxide-semiconductor (CMOS) technology, which enables the backscatter transmitter to consume a very small amount of energy in the idle state. The experimental results demonstrate that the power consumption of the backscatter transmitter is as little as 28 μW. Similarly, Varshney et al. introduce a backscatter transmitter design which consists of several low-power components, such as an MSP430 [109] for generating baseband signals and an HMC190BMS8 RF switch [108] for the backscatter front-end. Under this design, the backscatter transmitter consumes only 7.2 mW of power, as shown in the experimental results. Konstantopoulos et al. BIB002 propose a low-power backscatter sensor transmitter which can harvest energy from both the carrier emitter and the plant in the field. The authors note that the plant's power-voltage characteristic varies in the range of 0.52-0.67 V, depending on the solar radiation and the ambient environmental temperature. This potential energy can be used to support the internal operations of the backscatter sensor transmitter. Thus, an energy storage capacitor is employed to accumulate the biologic energy of the plant through a charging/discharging process. During the charging period, the operations of the backscatter sensor transmitter are suspended, and the biologic energy is harvested and stored in the capacitor. After the capacitor accumulates sufficient energy, the backscatter sensor transmitter is reactivated during the discharging period. The charging/discharging process is repeated constantly based on the transmission time interval of the backscatter sensor transmitter, which is controlled by a power management unit. The experimental results show that the proposed backscatter sensor transmitter consumes around 10.6 μW of power, and the energy harvested from the plant and the carrier emitter is sufficient for its operations. The authors also demonstrate that the capacitor of the backscatter sensor transmitter can be charged to almost 0.7 V by the plant's biologic energy after 1200 seconds of charging.
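The charging/discharging operation described above can be reasoned about with a simple energy budget: given the harvested power, the active-mode power, and the size of the storage capacitor, one can estimate the required charging time and the sustainable duty cycle. The sketch below does this with illustrative numbers; only the 10.6 μW active-mode consumption is taken from the text, while the harvested power, capacitor value, and voltage levels are assumptions chosen purely for illustration.

```python
def charge_time_s(capacitance_f, v_start, v_target, harvested_power_w):
    """Time to raise a storage capacitor from v_start to v_target at constant harvested power."""
    delta_energy = 0.5 * capacitance_f * (v_target**2 - v_start**2)
    return delta_energy / harvested_power_w

def duty_cycle(harvested_power_w, active_power_w):
    """Fraction of time the transmitter can stay active under energy neutrality."""
    return min(1.0, harvested_power_w / active_power_w)

harvested = 0.2e-6      # 0.2 uW harvested from the plant/carrier (assumed)
active = 10.6e-6        # 10.6 uW active-mode consumption (from the text)
cap = 1e-3              # 1 mF storage capacitor (assumed)

t = charge_time_s(cap, v_start=0.0, v_target=0.7, harvested_power_w=harvested)
print(f"Charging 0.0 V -> 0.7 V takes about {t:.0f} s")
print(f"Sustainable duty cycle: {duty_cycle(harvested, active)*100:.1f}% active time")
```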
Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> This paper presents the wireless identification and sensing platform (WISP), which is a programmable battery-free sensing and computational platform designed to explore sensor-enhanced radio frequency identification (RFID) applications. WISP uses a 16-bit ultralow-power microcontroller to perform sensing and computation while exclusively operating from harvested RF energy. Sensors that have successfully been integrated into the WISP platform to date include temperature, ambient light, rectified voltage, and orientation. The microcontroller encodes measurements into an electronic product code (EPC) Class 1 Generation 1 compliant ID and dynamically computes the required 16-bit cyclical redundancy checking (CRC). Finally, WISP emulates the EPC protocol to communicate the ID to the RFID reader. To the authors' knowledge, WISP is the first fully programmable computing platform that can operate using power transmitted from a long-range (UHF) RFID reader and communicate arbitrary multibit data in a single response packet. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> Radio frequency (RF) energy transfer and harvesting techniques have recently become alternative methods to power the next generation wireless networks. As this emerging technology enables proactive energy replenishment of wireless devices, it is advantageous in supporting applications with quality of service (QoS) requirement. In this paper, we present an extensive literature review on the research progresses in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of the RF-EHNs including system architecture, RF energy harvesting techniques and existing applications. Then, we present the background in circuit design as well as the state-of-the-art circuitry implementations, and review the communication protocols specially designed for RF-EHNs. We also explore various key design issues in the development of RF-EHNs according to the network types, i.e., single-hop network, multi-antenna network, relay network and cognitive radio network. Finally, we envision some open research directions. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> We present BackFi, a novel communication system that enables high throughput, long range communication between very low power backscatter devices and WiFi APs using ambient WiFi transmissions as the excitation signal. Specifically, we show that it is possible to design devices and WiFi APs such that the WiFi AP in the process of transmitting data to normal WiFi clients can decode backscatter signals which the devices generate by modulating information on to the ambient WiFi transmission. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 m and 1 Mbps at a range of 5 meters. Such performance is an order to three orders of magnitude better than the best known prior WiFi backscatter system [27,25]. BackFi design is energy efficient, as it relies on backscattering alone and needs insignificant power, hence the energy consumed per bit is small. 
<s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> Ambient backscatter communication, where energy and wireless carrier are extracted from existing radio signals, are very attractive to the Internet of Things. This technology is emerging as an enabler for battery-less sensor nodes that can operate unattended for extended periods of time. Their capacity to operate without maintenance make them attractive for operation in situations where nodes might not be easily accessible. My research will help turn this vision into reality by advancing key areas that remain unexplored in this field. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> In this paper, we assess the information-theoretic performance of a point-to-point link that exploits ambient backscattering to support green Internet-of-Thing (IoT) communications. In this framework, an IoT passive device transmits its information by reusing ambient radio-frequency signals emitted by an existing or legacy multicarrier communication system. After introducing the signal model of the relevant communication links, the information-theoretic capacity of both the legacy and backscatter systems is derived. It is found that, under reasonable operative conditions, the legacy system can turn the RF interference arising from backscattering into a form of multipath diversity, which can be exploited to increase its own performance. Moreover, it is shown that, even when it employs simple single-carrier modulation techniques, the backscatter system attains significant data rates over relatively short distances. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> Ambient backscatter technology that utilizes the ambient radio frequency signals to enable the communications of battery-free devices has attracted much attention recently. In this paper, we study the problem of signal detection for an ambient backscatter communication system that adopts the differential encoding to eliminate the necessity of channel estimation. Specifically, we formulate a new transmission model, design the data detection algorithm, and derive two closed-form detection thresholds. One threshold is used to approximately achieve the minimum sum bit error rate (BER), while the other yields balanced error probabilities for “0” bit and “1” bit. The corresponding BER expressions are derived to fully characterize the detection performance. In addition, the lower and the upper bounds of BER at high signal-to-noise ratio regions are also examined to simplify a performance analysis. Simulation results are then provided to corroborate the theoretical studies. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> This paper enables connectivity on everyday objects by transforming them into FM radio stations. To do this, we show for the first time that ambient FM radio signals can be used as a signal source for backscatter communication. Our design creates backscatter transmissions that can be decoded on any FM receiver including those in cars and smartphones. 
This enables us to achieve a previously infeasible capability: backscattering information to cars and smartphones in outdoor environments. Our key innovation is a modulation technique that transforms backscatter, which is a multiplication operation on RF signals, into an addition operation on the audio signals output by FM receivers. This enables us to embed both digital data as well as arbitrary audio into ambient analog FM radio signals. We build prototype hardware of our design and successfully embed audio transmissions over ambient FM signals. Further, we achieve data rates of up to 3.2 kbps and ranges of 5-60 feet, while consuming as little as 11.07 μW of power. To demonstrate the potential of our design, we also fabricate our prototype on a cotton t-shirt by machine sewing patterns of a conductive thread to create a smart fabric that can transmit data to a smartphone. We also embed FM antennas into posters and billboards and show that they can communicate with FM receivers in cars and smartphones. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Overview of Ambient Backscatter Communications Systems 1) Definition and Architecture: <s> This paper introduces a new solution to improve the performance for secondary systems in radio frequency (RF) powered cognitive radio networks (CRNs). In a conventional RF-powered CRN, the secondary system works based on the harvest-then-transmit protocol. That is, the secondary transmitter (ST) harvests energy from primary signals and then uses the harvested energy to transmit data to its secondary receiver (SR). However, with this protocol, the performance of the secondary system is much dependent on the amount of harvested energy as well as the primary channel activity, e.g., idle and busy periods. Recently, ambient backscatter communication has been introduced, which enables the ST to transmit data to the SR by backscattering ambient signals. Therefore, it has the potential to be adopted in the RF-powered CRN. We investigate the performance of RF-powered CRNs with ambient backscatter communication over two scenarios, i.e., overlay and underlay CRNs. For each scenario, we formulate and solve the optimization problem to maximize the overall transmission rate of the secondary system. Numerical results show that by incorporating such two techniques, the performance of the secondary system can be improved significantly compared with the case when the ST performs either harvest-then-transmit or ambient backscatter technique. <s> BIB008
The first ABCS is introduced in , and it has quickly become an effective communication solution that can be adopted in many wireless applications and systems. Unlike BBCSs, ABCSs allow backscatter transmitters to communicate by using signals from ambient RF sources, e.g., TV towers, cellular and FM base stations, and Wi-Fi APs. As an enabler for device-to-device (D2D) communications, ABCSs have received a lot of attention from both academia and industry , BIB004 . As shown in Fig. 7 , a general ABCS architecture consists of three major components: (i) RF sources, (ii) ambient backscatter transmitters, and (iii) ambient backscatter receivers. The ambient backscatter transmitter and receiver can be co-located, in which case they are known as a transceiver. The ambient RF sources can be divided into two types, i.e., static and dynamic ambient RF sources BIB002 . Table VII summarizes the transmit power and RF source-to-transmitter distance of some RF sources. • Static ambient RF sources: Static ambient RF sources are the sources which transmit RF signals constantly, e.g., TV towers and FM base stations. The transmit powers of these RF sources are usually high, e.g., up to 1 MW for TV towers BIB002 . The transmitter-to-RF source distance can vary from several hundred meters to several kilometers , BIB007 . • Dynamic ambient RF sources: Dynamic ambient RF sources are the sources which operate periodically or randomly with typically lower transmit power, e.g., Wi-Fi APs. The transmitter-to-RF source distance is often very short, e.g., 1-5 meters BIB003 . 2) Ambient Backscatter Design: Liu et al. design an ambient backscatter transmitter which can act as a transceiver as shown in Fig. 7 . A transceiver consists of three main components: (i) the harvester, (ii) the backscatter transmitter, and (iii) the backscatter receiver. The components are all connected to the same antenna. To transmit data, the harvester extracts energy from ambient RF signals to supply energy for backscatter transceiver A. Then, by modulating and reflecting the ambient RF signals, backscatter transceiver A can send data to backscatter transceiver B. To do so, backscatter transceiver A uses a switch which consists of a transistor connected to the antenna. The input of backscatter transceiver A is a stream of one and zero bits. When the input bit is zero, the transistor is off, and thus backscatter transceiver A is in the non-reflecting state. When the input bit is one, the transistor is on, and thus backscatter transceiver A is in the reflecting state. As such, backscatter transceiver A is able to transfer bits to backscatter transceiver B. Clearly, backscatter transceiver B can also send data to backscatter transceiver A in the same way. In ABCSs, to extract data transferred from the ambient backscatter transmitter, an averaging mechanism is adopted at the ambient backscatter receiver. The main idea of the averaging mechanism is that the backscatter receiver can separate the ambient RF signals and the backscattered signals if the bitrates of these signals are significantly different. Therefore, the backscatter transmitter transmits the backscattered signals at a much lower rate than that of the ambient RF signals, and hence adjacent samples of the ambient RF signals are more likely to be uncorrelated than adjacent samples of the backscattered signals. As such, the backscatter receiver can remove the variations in the ambient RF signals while the variations in the backscattered signals remain.
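As a rough illustration of the transceiver design just described, the following Python sketch maps a tag's bit stream to reflecting/absorbing switch states that are each held for many samples of the ambient signal; the rates, channel gain, and baseband signal model are illustrative assumptions rather than values from the cited design.

import numpy as np

# Sketch of the transmitter side of ambient backscatter: each data bit drives
# the antenna switch (transistor on = reflect, off = absorb), and the state is
# held for many samples of the much faster ambient RF signal.
rng = np.random.default_rng(0)
samples_per_bit = 100                          # backscatter rate << ambient signal rate
tag_bits = rng.integers(0, 2, size=20)         # bits transceiver A wants to send

n = tag_bits.size * samples_per_bit
ambient = rng.normal(size=n) + 1j * rng.normal(size=n)   # ambient RF signal (baseband model)
switch = np.repeat(tag_bits, samples_per_bit)            # 1 = reflecting, 0 = non-reflecting
reflect_gain = 0.5                                       # assumed extra path gain via the tag

# Signal observed at transceiver B: the direct ambient signal plus the
# reflected copy whenever the switch is in the reflecting state; 'received'
# therefore carries the tag's slow on/off pattern on top of the fast ambient signal.
received = ambient * (1.0 + reflect_gain * switch)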
The backscatter receiver can then decode the data carried by the backscattered signals by distinguishing the two average power levels that correspond to the reflecting and non-reflecting states. It is important to note that the inputs of the averaging mechanism are digital samples. Hence, another challenge when designing the backscatter receiver is how to decode backscattered data without using an analog-to-digital converter, which consumes a significant amount of energy. Liu et al. thus design a demodulator as shown in Fig. 3(b) . First, at the receiver, the received signals are smoothed by an envelope circuit. Then, a threshold between the voltage levels of zero and one bits is computed by a compute-threshold circuit. After that, the comparator compares the averaged envelope signals with this threshold to generate the output bits. 3) Advantages and Limitations: In ABCSs, as the backscatter transmitters can be designed with low-cost and low-power components, the system costs as well as the system power consumption can be significantly lowered . For example, the ambient backscatter transceivers in include several analog components such as MSP430 [109] as a micro-controller and ADG902 as an RF switch. The power consumption of the analog components of this transceiver is as low as 0.25 μW for TX and 0.54 μW for RX, while the analog components of a traditional backscatter system, i.e., the Wireless Identification and Sensing Platform (WISP) BIB001 , consume 2.32 μW for TX and 18 μW for RX. Furthermore, by using ambient RF signals, there is virtually no cost for deploying and maintaining RF sources, e.g., carrier emitters in BBCSs and readers in RFID systems. ABCSs also enable ubiquitous computing and allow direct D2D and multi-hop communications BIB005 , BIB006 . Moreover, backscatter transmitters in ABCSs only modulate and reflect existing signals rather than actively transmit signals in the licensed spectrum. Consequently, their interference to the licensed users is almost negligible. Therefore, ABCSs can be considered to be legal under current spectrum usage policies , and they do not require dedicated frequency spectrum to operate, thereby saving system cost further. Nevertheless, ABCSs have some limitations. ABCSs can be affected by strong direct interference from the ambient RF sources to the backscatter receivers, since the same RF spectrum is shared. Furthermore, backscatter transmitters use ambient RF signals for circuit operation and data transmission, and thus it is typically not possible for them to control the RF sources in terms of quality-of-service parameters such as transmit power, scheduling, and frequency. In addition, ABCSs may potentially face several security issues since the backscatter transmitters are simple devices and the RF sources are not controllable. Moreover, as the harvested energy from the ambient RF signals is usually small BIB008 , and these signals can be affected by fading and noise on the communication channels, the bitrate and communication range between the backscatter transmitters of ABCSs are limited. In the following, we review existing solutions to address the aforementioned limitations of ABCSs.
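Before turning to the performance-improvement techniques, the receive chain described above (envelope averaging per bit period, threshold computation, and comparator) can be summarized with the following Python sketch; the simulated signal and all parameter values are illustrative assumptions, not measurements from the cited design.

import numpy as np

# Sketch of the averaging-based demodulator: average the received power over
# each (slow) backscatter bit period, place a threshold between the two
# resulting levels, and compare to recover the tag's bits.
rng = np.random.default_rng(1)
samples_per_bit, reflect_gain = 100, 0.5
true_bits = rng.integers(0, 2, size=50)

n = true_bits.size * samples_per_bit
ambient = rng.normal(size=n) + 1j * rng.normal(size=n)
received = ambient * (1.0 + reflect_gain * np.repeat(true_bits, samples_per_bit))
received += 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))   # receiver noise

# 1) Envelope/averaging stage: mean power per backscatter bit period.
bit_levels = (np.abs(received) ** 2).reshape(-1, samples_per_bit).mean(axis=1)

# 2) Compute-threshold stage: midpoint between the estimated '0' and '1'
#    levels (crude two-cluster estimate; assumes both bit values occur).
order = np.sort(bit_levels)
low, high = order[: order.size // 2].mean(), order[order.size // 2 :].mean()
threshold = 0.5 * (low + high)

# 3) Comparator stage: one decision per bit period.
decoded = (bit_levels > threshold).astype(int)
print("bit errors:", int(np.sum(decoded != true_bits)))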
Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> This paper provides analyses of three types of diversity combining systems in practical use. These are: selection diversity, maximal-ratio diversity, and equal-gain diversity systems. Quantitative measures of the relative performance (under realistic conditions) of the three systems are provided. The effects of various departures from ideal conditions, such as non-Rayleigh fading and partially coherent signal or noise voltages, are considered. Some discussion is also included of the relative merits of predetection and postdetection combining and of the problems in determining and using long-term distributions. The principal results are given in graphs and tables, useful in system design. It is seen that the simplest possible combiner, the equal-gain system, will generally yield performance essentially equivalent to the maximum obtainable from any quasi-linear system. The principal application of the results is to diversity communication systems and the discussion is set in that context, but many of the results are also applicable to certain radar and navigation systems. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Passive Bistatic Radar exploits the illumination of a scene by a local communications transmitter in order to perform radar processing without dedicated transmitter hardware. The direct transmission and strong stationary clutter are often present in the surveillance signal, reducing dynamic range and masking returns from targets. In this study various adaptive filters are used to estimate the direct path and clutter components and cancel them from the signal. Performance metrics particular to radar processing are defined, and the investigated filters are evaluated by application to Passive Bistatic Radar with real DVB-T data. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> This paper provides a detailed overview of the Digital Video Broadcasting Terrestrial (DVB-T) signal structure and the implications for passive radar systems that use these signals as illuminators of opportunity. In particular, we analyze the ambiguity function and explain its delay and Doppler properties in terms of the underlying structure of the DVB-T signal. Of particular concern for radar range-Doppler processing are ambiguities consistent in range and Doppler with targets of interest. In this paper we adopt a mismatched filtering approach for range-Doppler processing. We also recognize that while the structure of the DVB-T signal introduces ambiguities, the structure can also be exploited to better estimate the transmitted signal and channel, as well as any mismatch between transmitter and receiver (e.g., clock offsets). This study presents a scheme for pre-processing both the reference and surveillance signals obtained by the passive radar to mitigate the effects of the ambiguities and the clutter in range-Doppler processing. The effectiveness of our proposed scheme in enhancing target detection is demonstrated using real-world data from an (Australian) 8k-mode DVB-T system. 
A 29 dB reduction in residual ambiguity levels over existing techniques is observed, and a 36 dB reduction over standard matched filtering; with only a 1 dB reduction in the zero-delay, zero-Doppler peak. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> A method of communications is presented based on the backscatter modulation of terrestrial digital television signals by low-complexity tags. The modulation is sensed by receivers implementing passive coherent detection algorithms similar to those used in passive radar. This method enables shared use of the UHF television band for low-data-rate applications. Analyses and experiments suggest the feasibility of this technique but also highlight the unique challenges for designing such a system. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Communication primitives such as coding and multiple antenna processing have provided significant benefits for traditional wireless systems. Existing designs, however, consume significant power and computational resources, and hence cannot be run on low complexity, power constrained backscatter devices. This paper makes two main contributions: (1) we introduce the first multi-antenna cancellation design that operates on backscatter devices while retaining a small form factor and power footprint, (2) we introduce a novel coding mechanism that enables long range communication as well as concurrent transmissions and can be decoded on backscatter devices. We build hardware prototypes of the above designs that can be powered solely using harvested energy from TV and solar sources. The results show that our designs provide benefits for both RFID and ambient backscatter systems: they enable RFID tags to communicate directly with each other at distances of tens of meters and through multiple walls. They also increase the communication rate and range achieved by ambient backscatter systems by 100X and 40X respectively. We believe that this paper represents a substantial leap in the capabilities of backscatter communication. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> This paper introduces the first design that enables full-duplex communication on battery-free backscatter devices. Specifically, it gives receivers a way to provide low-rate feedback to the transmitter on the same frequency as that of the backscatter transmissions, using neither multiple antennas nor power-consuming cancellation hardware. Our design achieves this goal using only fully-passive analog components that consume near-zero power. We integrate our design with the backscatter network stack and demonstrate that it can minimize energy wastes that occur due to collisions and also correct for errors and changes in channel conditions at a granularity smaller than that of a packet. To show the feasibility of our design, we build a hardware prototype using off-the-shelf analog components. Our evaluation shows that our design cancels the self-interference down to the noise floor, while consuming only 0.25 μW and 0.54 μW of transmit and receive power, respectively. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. 
Performance Improvement for Ambient Backscatter Communications Systems <s> We present BackFi, a novel communication system that enables high throughput, long range communication between very low power backscatter devices and WiFi APs using ambient WiFi transmissions as the excitation signal. Specifically, we show that it is possible to design devices and WiFi APs such that the WiFi AP in the process of transmitting data to normal WiFi clients can decode backscatter signals which the devices generate by modulating information on to the ambient WiFi transmission. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 m and 1 Mbps at a range of 5 meters. Such performance is an order to three orders of magnitude better than the best known prior WiFi backscatter system [27,25]. BackFi design is energy efficient, as it relies on backscattering alone and needs insignificant power, hence the energy consumed per bit is small. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> In this paper, we look at making backscatter practical for ultra-low power on-body sensors by leveraging radios on existing smartphones and wearables (e.g. WiFi and Bluetooth). The difficulty lies in the fact that in order to extract the weak backscattered signal, the system needs to deal with self-interference from the wireless carrier (WiFi or Bluetooth) without relying on built-in capability to cancel or reject the carrier interference. Frequency-shifted backscatter (or FS-Backscatter) is based on a novel idea: the backscatter tag shifts the carrier signal to an adjacent non-overlapping frequency band (i.e. adjacent WiFi or Bluetooth band) and isolates the spectrum of the backscattered signal from the spectrum of the primary signal to enable more robust decoding. We show that this enables communication of up to 4.8 meters using commercial WiFi and Bluetooth radios as the carrier generator and receiver. We also show that we can support a range of bitrates using packet-level and bit-level decoding methods. We build on this idea and show that we can also leverage multiple radios typically present on mobile and wearable devices to construct multi-carrier or multi-receiver scenarios to improve robustness. Finally, we also address the problem of designing an ultra-low power tag that can frequency shift by 20MHz while consuming tens of micro-watts. Our results show that FS-Backscatter is practical in typical mobile and static on-body sensing scenarios while only using commodity radios and antennas. <s> BIB008 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Backscatter wireless communications have exceptionally stringent power constraints. This is particularly true for ambient backscatter systems, where energy and wireless carrier are both extracted from weak existing radio signals. The tight power constraints make it difficult to implement advanced coding techniques like spread spectrum, even though such techniques are effective in increasing the communication range and robustness in this type of systems. We draw inspiration from μcode, a previous backscatter coding approach, where data bits are encoded in single-data-bit chip sequences of considerable length to gain robustness.
We introduce a new coding technique that encodes several bits in a single symbol in order to increase the data rate of ambient backscatter, while maintaining an acceptable compromise with robustness. We study the proposed technique by means of simulations and characterize the bit error rate and data rate dependencies. A comparison with μcode is drawn and the benefits of each approach are analyzed in search for the best strategy for increasing data rate while maintaining robustness to noise. <s> BIB009 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Future Internet-of-Things (IoT) is expected to wirelessly connect billions of low-complexity devices. For wireless information transfer (WIT) in IoT, high density of IoT devices and their ad hoc communication result in strong interference which acts as a bottleneck on WIT. Furthermore, battery replacement for the massive number of IoT devices is difficult if not infeasible, making wireless energy transfer (WET) desirable. This motivates: (i) the design of full-duplex WIT to reduce latency and enable efficient spectrum utilization, and (ii) the implementation of passive IoT devices using backscatter antennas that enable WET from one device (reader) to another (tag). However, the resultant increase in the density of simultaneous links exacerbates the interference issue. This issue is addressed in this paper by proposing the design of full-duplex backscatter communication (BackCom) networks, where a novel multiple-access scheme based on time-hopping spread-spectrum (TH-SS) is designed to enable both one-way WET and two-way WIT in coexisting backscatter reader-tag links. Comprehensive performance analysis of BackCom networks is presented in this paper, including forward/backward bit-error rates and WET efficiency and outage probabilities, which accounts for energy harvesting at tags, non-coherent and coherent detection at tags and readers, respectively, and the effects of asynchronous transmissions. <s> BIB010 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> In this paper, we investigate a unique phase cancellation problem that occurs in backscatter-based tag-to-tag (BBTT) communication systems. These are systems wherein two or more radio-less devices (tags) communicate with each other purely by reflecting (backscattering) an external signal (whether ambient or intentionally generated). A transmitting tag modulates baseband information onto the reflected signal using backscatter modulation. At the receiving tag, the backscattered signal is superimposed to the external excitation and the resulting signal is demodulated using envelope detection techniques. The relative phase difference between the backscatter signal and the external excitation signal at the receiving tag has a large impact on the envelope of the resulting signal. This often causes a complete cancellation of the baseband information contained in the envelope, and it results in a loss of communication between the two tags. This problem is ubiquitous in all BBTT systems and greatly impacts the reliability, robustness, and communication range of such systems. We theoretically analyze and experimentally demonstrate this problem for devices that use both ASK and PSK backscattering.
We then present a solution to the problem based on the design of a new backscatter modulator for tags that enables multiphase backscattering. We also propose a new combination method that can further enhance the detection performance of BBTT systems. We examine the performance of the proposed techniques through theoretical analysis, computer simulations, and laboratory experiments with a prototype tag that we have developed. <s> BIB011 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter technology that utilizes the ambient radio frequency signals to enable the communications of battery-free devices has attracted much attention recently. In this paper, we study the problem of signal detection for an ambient backscatter communication system that adopts the differential encoding to eliminate the necessity of channel estimation. Specifically, we formulate a new transmission model, design the data detection algorithm, and derive two closed-form detection thresholds. One threshold is used to approximately achieve the minimum sum bit error rate (BER), while the other yields balanced error probabilities for “0” bit and “1” bit. The corresponding BER expressions are derived to fully characterize the detection performance. In addition, the lower and the upper bounds of BER at high signal-to-noise ratio regions are also examined to simplify a performance analysis. Simulation results are then provided to corroborate the theoretical studies. <s> BIB012 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter communications (AmBC) enables radio-frequency (RF) powered devices (e.g., tags, sensors) to modulate their information bits over ambient RF carriers in an over-the-air manner. This system, called ``modulation in the air'', thus has emerged as a promising technology for green communications and future Internet-of-Things. This paper studies the AmBC system over ambient orthogonal frequency division multiplexing (OFDM) carriers in the air. We first establish the system model for such AmBC system from spread-spectrum perspective, from which a novel joint design for tag waveform and reader detector is proposed. We construct the test statistic that cancels out the direct-link interference by exploiting the repeating structure of the ambient OFDM signals due to the use of cyclic prefix. The maximum-likelihood detector is proposed to recover the tag bits, for which the optimal threshold is obtained with closed-form expression. Also, we analyze the effect of various system parameters on the transmission rate and detection performance. Finally, extensive numerical results show that the proposed transceiver design outperforms the conventional design. <s> BIB013 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Nowadays, the explosive growth of Internet-of-Things-related applications has required the design of low-cost and low-power wireless sensors. Although backscatter radio communication is a mature technology used in radio frequency (RF) identification applications, ambient backscattering is a novel approach taking advantage of ambient signals to simplify wireless system topologies to just a sensor node and a receiver (RX) circuit eliminating the need for a dedicated carrier source. 
This paper introduces a novel wireless tag and RX system that utilizes broadcast frequency modulated (FM) signals for backscatter communication. The proposed proof-of-concept tag comprises an ultralow-power microcontroller (MCU) and an RF front-end for wireless communication. The MCU can accumulate data from multiple sensors through an analog-to-digital converter, while it transmits the information back to the RX through the front-end by means of backscattering. The front-end uses ON–OFF keying modulation and FM0 encoding on ambient FM station signals. The RX consists of a commercial low-cost software-defined radio which downconverts the received signal to baseband and decodes it using a suitable signal processing algorithm. A theoretical analysis of the error rate performance of the system is provided and compared to bit-error-rate measurements on a fixed transmitter-tag-RX laboratory setup with good agreement. The prototype tag was also tested in a real-time indoor laboratory deployment. Operation over a 5-m tag-reader distance was demonstrated by backscattering information at 2.5 kb/s featuring an energy per packet of 36.9 μJ. <s> BIB014 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter is a promising solution for future Internet of Things (IoT) composed of massive low-power wireless devices without external power sources. It is challenging to accommodate these devices with high throughput and long range because of the low efficiency inherent in ambient backscatter. To tackle the challenging issue, we propose an optimum modulation and coding scheme (MCS) for ambient backscatter communication networks (ABCN). We model and analyze the ABCN embedded by semi-passive tags with M-ary modulation and full-duplex WiFi access points as reader. The coverage and network capacity are derived, by which the effect of various system parameters including reflection coefficient, code rate, and cluster size is investigated. It is shown that the optimum MCS for the ABCN will provide highly increased throughput and range for future IoT networks and smart home/office. <s> BIB015 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter technology has attracted much attention recently, because it enables battery-free devices, such as tags or sensors, to communicate through wireless energy harvesting and radio backscattering. Existing studies about ambient backscatter assume that the tags have only two states: backscatter or non-backscatter. Actually, some references have shown that the tags can readily realize three states: positive and negative phase backscatter, and non-backscatter. In this paper, we propose a new coding scheme for these tags with three states to improve the throughput of the ambient backscatter communication system. We then design a maximum a posteriori (MAP) detector for the reader to extract binary information from ternary coded signals. We also analyze the detection performance in terms of closed-form bit error rate (BER) expressions. It is found that the proposed coding scheme can improve the throughput of an ambient backscatter system, and there exists an error floor for the BER curve. Finally, simulation results are provided to corroborate our theoretical studies. <s> BIB016 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B.
Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter, an emerging communication mechanism where battery-free devices communicate with each other via backscattering ambient radio frequency (RF) signals, has achieved much attention recently because of its desirable application prospects in the Internet of Things. In this paper, we formulate a practical transmission model for an ambient backscatter system, where a tag wishes to send some low-rate messages to a reader with the help of an ambient RF signal source, and then provide fundamental studies of noncoherent symbol detection when all channel state information of the system is unknown. For the first time, a maximum likelihood detector is derived based on the joint probability density function of received signal vectors. In order to ease availability of prior knowledge of the ambient RF signal and reduce computational complexity of the algorithm, we design a joint-energy detector and derive its corresponding detection threshold. The analytical bit error rate (BER) and BER-based outage probability are also obtained in a closed form, which helps with designing system parameters. An estimation method to obtain detection-required parameters and comparison of computational complexity of the detectors are presented as complementary discussions. Simulation results are provided to corroborate theoretical studies. <s> BIB017 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter is a new technology that uses ambient signals to enable communication between battery-free tags (or sensors) and readers. Previous studies on ambient backscatter focused on scenarios with only one tag. In this paper, we investigate the ambient backscatter communication system with multiple tags and analyze the performance of tag selection. Specifically, we formulate a tag selection scheme and design the corresponding detection method. Our results indicate that obtaining the closed-form bit-error rate (BER) in such cases is challenging; hence, we provide a solution for deriving an approximate BER. The simulation results show that the approximated BER curves are in agreement with the exact curves. <s> BIB018 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Wi-Fi has traditionally been considered a power-consuming communication system and has not been widely adopted in the sensor network and Internet of Things (IoT) space. We introduce Passive Wi-Fi that demonstrates that one can generate 802.11b transmissions using backscatter communication, while consuming 3-4 orders of magnitude lower power than existing Wi-Fi chipsets. Passive Wi-Fi transmissions can be decoded on any Wi-Fi device including routers, mobile phones and tablets. Our experimental evaluation shows that passive Wi-Fi transmissions can be decoded on off-the-shelf smartphones and Wi-Fi chipsets over distances of up to 100 feet. We also design a passive Wi-Fi IC that shows that 1 and 11 Mbps transmissions consume 14.5 and 59.2 μW respectively. This translates to 10000x lower power than existing Wi-Fi chipsets and 1000x lower power than Bluetooth LE and ZigBee. <s> BIB019 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B.
Performance Improvement for Ambient Backscatter Communications Systems <s> This paper enables connectivity on everyday objects by transforming them into FM radio stations. To do this, we show for the first time that ambient FM radio signals can be used as a signal source for backscatter communication. Our design creates backscatter transmissions that can be decoded on any FM receiver including those in cars and smartphones. This enables us to achieve a previously infeasible capability: backscattering information to cars and smartphones in outdoor environments. Our key innovation is a modulation technique that transforms backscatter, which is a multiplication operation on RF signals, into an addition operation on the audio signals output by FM receivers. This enables us to embed both digital data as well as arbitrary audio into ambient analog FM radio signals. We build prototype hardware of our design and successfully embed audio transmissions over ambient FM signals. Further, we achieve data rates of up to 3.2 kbps and ranges of 5-60 feet, while consuming as little as 11.07 μW of power. To demonstrate the potential of our design, we also fabricate our prototype on a cotton t-shirt by machine sewing patterns of a conductive thread to create a smart fabric that can transmit data to a smartphone. We also embed FM antennas into posters and billboards and show that they can communicate with FM receivers in cars and smartphones. <s> BIB020 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter communication (AmBC) enables radio-frequency (RF) powered backscatter devices (BDs) (e.g., sensors, tags) to modulate their information bits over ambient RF carriers in an over-the-air manner. This technology, also called "modulation in the air", has thus emerged as a promising solution to achieve green communications for future Internet-of-Things. This paper studies an AmBC system by leveraging the ambient orthogonal frequency division multiplexing (OFDM) modulated signals in the air. We first model such AmBC system from a spread-spectrum communication perspective, upon which a novel joint design for BD waveform and receiver detector is proposed. The BD symbol period is designed to be in general an integer multiplication of the OFDM symbol period, and the waveform for BD bit '0' maintains the same state within a BD symbol period, while the waveform for BD bit '1' has a state transition in the middle of each OFDM symbol period within a BD symbol period. In the receiver detector design, we construct the test statistic that cancels out the direct-link interference by exploiting the repeating structure of the ambient OFDM signals due to the use of cyclic prefix. For the system with a single-antenna receiver, the maximum-likelihood detector is proposed to recover the BD bits, for which the optimal threshold is obtained in closed-form expression. For the system with a multi-antenna receiver, we propose a new test statistic, and derive the optimal detector. Moreover, practical timing synchronization algorithms are proposed, and we also analyze the effect of various system parameters on the system performance. Finally, extensive numerical results are provided to verify that the proposed transceiver design can improve the system bit-error-rate (BER) performance and the operating range significantly, and achieve much higher data rate, as compared to the conventional design.
<s> BIB021 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> Ambient backscatter communication (AmBC) enables a passive backscatter device to transmit information to a reader using ambient RF signals, and has emerged as a promising solution to green Internet-of-Things (IoT). Conventional AmBC receivers are interested in recovering the information from the ambient backscatter device (A-BD) only. In this paper, we propose a cooperative AmBC (CABC) system in which the reader recovers information not only from the A-BD, but also from the RF source. We first establish the system model for the CABC system from spread spectrum and spectrum sharing perspectives. Then, for flat fading channels, we derive the optimal maximum-likelihood (ML) detector, suboptimal linear detectors as well as successive interference-cancellation (SIC) based detectors. For frequency-selective fading channels, the system model for the CABC system over ambient orthogonal frequency division multiplexing (OFDM) carriers is proposed, upon which a low-complexity optimal ML detector is derived. For both kinds of channels, the bit-error-rate (BER) expressions for the proposed detectors are derived in closed forms. Finally, extensive numerical results have shown that, when the A-BD signal and the RF-source signal have equal symbol period, the proposed SIC-based detectors can achieve near-ML detection performance for typical application scenarios, and when the A-BD symbol period is longer than the RF-source symbol period, the existence of backscattered signal in the CABC system can enhance the ML detection performance of the RF-source signal, thanks to the beneficial effect of the backscatter link when the A-BD transmits at a lower rate than the RF source. <s> BIB022 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. Performance Improvement for Ambient Backscatter Communications Systems <s> In this paper, we consider a novel ambient backscatter multiple-access system, where a receiver (Rx) simultaneously detects the signals transmitted from an active transmitter (Tx) and a backscatter Tag. Specifically, the information-carrying signal sent by the Tx arrives at the Rx through two wireless channels: the direct channel from the Tx to the Rx, and the backscatter channel from the Tx to the Tag and then to the Rx. The received signal from the backscatter channel also carries the Tag's information because of the multiplicative backscatter operation at the Tag. This multiple-access system introduces a new channel model, referred to as multiplicative multiple-access channel (M-MAC). We analyze the achievable rate region of the M-MAC, and prove that its region is strictly larger than that of the conventional time-division multiple-access scheme in many cases, including, e.g., the high SNR regime and the case when the direct channel is much stronger than the backscatter channel. Hence, the multiplicative multiple-access scheme is an attractive technique to improve the throughput for ambient backscatter communication systems. Moreover, we analyze the detection error rates for coherent and noncoherent modulation schemes adopted by the Tx and the Tag, respectively, in both synchronous and asynchronous scenarios, which further bring interesting insights for practical system design. <s> BIB023
1) Communication Improvement: Although ABCSs possess many advantages as mentioned above, their communication ranges and bitrates are very limited. In particular, for the first ABCS introduced in , to achieve a target BER of 10 −2 , the backscatter receiver can receive at a rate of 1 kbps at distances of up to 2.5 feet for an outdoor environment and up to 1.5 feet for an indoor environment. Thus, solutions to improve the communication efficiency of ABCSs need to be developed. Barott BIB004 first indicates that the received signals s(t) at the backscatter receiver consist of the ambient RF signals r(t), the reflected signals A(t), and noise n(t). Thus, it is possible to recover A(t) from s(t) by cross-correlating s(t) with r(t), where the cross-correlation measures the similarity between the two signals. To do so, the passive coherent processing is proposed with four stages as shown in Fig. 8 , which depicts the block diagram of the signal processing for the passive backscatter receiver BIB004 . First, the received signals are passed to the remodulation stage to recover r(t). In particular, the remodulation stage isolates a copy of r(t) from the received signals s(t) by using modulators and a demodulator. In general, the remodulation process generates two output waveforms. The first waveform is a clutter-free and noise-free reference signal used in adaptive clutter cancellation in the ambient RF signals, i.e., the direct path interference (DPI) cancellation. The second waveform is a mismatched reference signal used in cross-correlation, i.e., the mismatched processing. The principles of the remodulation can be found in BIB003 . Then, the DPI is eliminated in the DPI cancellation stage by using Wiener-Hopf filtering BIB002 and the extensive cancellation algorithm . Finally, the originally transmitted signals are recovered from the cleaned signals in the correlation processing and time-frequency analysis stages. Then, the data sent by a backscatter transmitter is extracted through a demodulator. Theoretical analyses demonstrate that the passive coherent processing can achieve a bitrate of 1 kbps at a range of 100 meters with a TV tower operating at 626-632 MHz. Daskalakis et al. BIB014 introduce an ambient backscatter communications system that utilizes broadcast FM signals. In particular, the backscatter transmitter adopts OOK modulation and FM0 encoding on the ambient signals from an FM station to transmit data. At the backscatter receiver, an algorithm is employed to recover the original data sent from the backscatter transmitter. The key idea of this algorithm is to reduce the frequency difference between the backscatter transmitter and the backscatter receiver, i.e., carrier frequency offset (CFO) correction. Then, a matched filter and a downsampling component are applied to remove noise and interference from the received signals to improve the system performance. From the experimental results, the proposed backscatter system can achieve a bitrate of 2.5 kbps over a distance of 5 meters between the backscatter transmitter and the backscatter receiver.
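The DPI-cancellation and correlation stages of the passive coherent receiver described above can be sketched in Python as follows; the least-squares projection is only a crude stand-in for the adaptive Wiener-Hopf/extensive-cancellation filters, the reference signal is assumed to be perfectly remodulated, and all numerical values are illustrative assumptions.

import numpy as np

# Toy version of the passive coherent chain: (i) cancel the direct-path copy
# of the ambient signal with a least-squares projection onto the reference,
# then (ii) correlate the residual with the reference in each bit interval
# and threshold the result.
rng = np.random.default_rng(2)
n_bits, spb = 40, 200                         # number of bits and samples per bit
bits = rng.integers(0, 2, size=n_bits)
r = rng.normal(size=n_bits * spb) + 1j * rng.normal(size=n_bits * spb)   # remodulated reference r(t)

h_direct, h_back = 1.0, 0.05                  # strong direct path, weak backscatter path (assumed)
s = h_direct * r + h_back * np.repeat(bits, spb) * r
s += 0.05 * (rng.normal(size=s.size) + 1j * rng.normal(size=s.size))     # receiver noise

# (i) direct-path interference cancellation (least-squares fit of s onto r)
h_hat = np.vdot(r, s) / np.vdot(r, r)
residual = s - h_hat * r

# (ii) per-bit cross-correlation with the reference, then a midpoint threshold
corr = np.array([np.real(np.vdot(r[k * spb:(k + 1) * spb],
                                 residual[k * spb:(k + 1) * spb]))
                 for k in range(n_bits)])
threshold = 0.5 * (corr.min() + corr.max())
decoded = (corr > threshold).astype(int)
print("bit errors:", int(np.sum(decoded != bits)))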
Bharadia et al. BIB007 introduce BackFi, which offers high-bitrate and long-range communication between backscatter sensor transmitters. Unlike , BackFi uses a Wi-Fi AP as an ambient RF source as well as a backscatter receiver. Thus, the backscatter sensor transmitters are able to not only communicate with each other, but also connect to the Wi-Fi AP. BackFi is different from RFID systems since it reuses ambient signals from the Wi-Fi AP, which is already deployed for standard wireless networks. Bharadia et al. BIB007 focus on improving the performance of the uplink transmission from the backscatter sensor transmitter to the Wi-Fi AP, i.e., the BackFi AP. An important finding is that self-interference at the backscatter receiver, i.e., the BackFi AP, can significantly reduce the communication range and transmission rate of the system. The self-interference arises from two sources: (i) signals from the Wi-Fi AP and (ii) reflected signals from non-transmitter objects in the environment. Then, a self-interference cancellation technique is proposed for the backscatter receiver as shown in Fig. 9 , which depicts the architecture of the backscatter receiver used in BackFi BIB007 . The Wi-Fi signals x, which are sent to a client, e.g., a laptop, are reflected by the environment and by a backscatter sensor transmitter. First, the signals reflected by the environment, i.e., those arriving through the environment channel h_env, are removed from the received signals by using digital and analog finite impulse response filters, i.e., cancellation filters. The remaining signals after cancellation are used to estimate the backward and forward channels, i.e., h_b and h_f, respectively. However, h_b and h_f are in a cascaded form, i.e., h_b * h_f, where * represents the convolution of two signals. Therefore, the authors use the maximal-ratio combining (MRC) technique BIB001 to recover the data from the backscatter sensor transmitter, i.e., θ(t), from the h_b * h_f signals. Then, a Viterbi decoder is adopted to extract the useful information. The authors then implement an experiment in an indoor environment with multi-path reflections. The experimental results show that BackFi can achieve a throughput of 5 Mbps at a range of 1 meter and a throughput of 1 Mbps at a range of 5 meters with a 2.4 GHz Wi-Fi AP. Another technique aiming to reduce self-interference at the backscatter receiver, named frequency-shifted backscatter, is introduced in BIB008 . The key idea of this technique is that the backscatter transmitters shift the ambient RF signals, i.e., Wi-Fi signals, to an adjacent frequency band before reflecting. As such, the backscatter receiver can decode data from the reflected signals without self-interference. To do so, the authors use an oscillator at the backscatter transmitter to shift the RF signals by 20 MHz. The experimental results demonstrate that frequency-shifted backscatter can achieve a bitrate of 50 kbps at a range of 3.6 meters with a BER of 10 −3 . Parks et al. BIB005 design a multi-antenna backscatter transmitter, i.e., μmo, and a low-power coding mechanism, i.e., μcode, to improve the communication performance in terms of data rates and transmission ranges. By using multiple antennas, the backscatter receiver can eliminate the interference from the ambient RF signals, e.g., TV signals, thereby increasing the bitrate between the backscatter transmitters. The main design principle of μmo is shown in Fig. 10 . Let s(t) be the RF signal from a TV tower, and suppose that Bob transmits data by reflecting and absorbing s(t) to convey bit '1' and bit '0', respectively. The received signals at the two antennas of Alice can then be expressed as y_1(t) = (h_rf + B h_b)s(t) and y_2(t) = (h'_rf + B h'_b)s(t), (12) where h_rf, h'_rf and h_b, h'_b are the channels from the TV tower and from Bob to the two antennas of Alice, respectively. In addition, B takes a value of '0' or '1' depending on the non-reflecting or reflecting state, respectively.
By dividing the two received signals in (12), we have the following fraction: y_1(t)/y_2(t) = (h_rf + B h_b)/(h'_rf + B h'_b). (13) From (13), this fraction is independent of the TV signal, i.e., s(t). Since the value of B is either '0' or '1', the fraction results in two levels, i.e., h_rf/h'_rf and (h_rf + h_b)/(h'_rf + h'_b), corresponding to the non-reflecting and reflecting states, respectively. Therefore, Alice can decode the data sent from Bob without estimating the channel parameters. Moreover, a low-power coding scheme based on CDMA, i.e., μcode, is proposed to increase the communication range between the backscatter transceivers. In this scheme, the backscatter transmitter encodes bits '0' and '1' into different chip sequences, and the backscatter receiver correlates the received signals with the chip sequence patterns to decode the data. Longer chip sequences for encoding can be used at both the backscatter transmitter and the backscatter receiver to increase the SNR. The authors then implement μmo and μcode on a circuit board to evaluate their performance. The experimental results show that μmo increases the bitrate up to 1 Mbps at distances from 4 feet to 7 feet, and μcode increases the communication range up to 80 feet at 1 kbps, by backscattering signals from a TV tower operating at 539 MHz. Pérez-Penichet et al. BIB009 propose a solution to improve the bitrate for ABCSs by modifying the encoding technique μcode. Pérez-Penichet et al. BIB009 highlight that encoding multiple bits per symbol can significantly increase the bitrate. However, this encoding scheme may make the transmission more sensitive to noise and interference. Thus, the authors use simulations to investigate the trade-off between the bitrate and the robustness of the proposed encoding scheme. Note that the proposed scheme encodes two bits per symbol, whereas μcode encodes only one bit per symbol. The simulation results show that the energy per chip over noise spectral density (E_c/N_0) of the proposed scheme is higher than that of μcode with the same number of chips per symbol. This means that applying multiple bits per symbol can increase the bitrate. However, with the same value of E_c/N_0, μcode shows better robustness than the scheme from BIB009 . In other words, the transmission is more likely to be corrupted by noise and interference when the number of bits per symbol is increased. The authors then conclude that (i) longer chip sequences are more robust and increase the communication range, but result in lower bitrates, and (ii) encoding more bits per symbol increases the bitrate, but reduces the robustness. Liu et al. BIB006 introduce a full-duplex technique to improve the performance of ABCSs. In this technique, after receiving the reflected signals, the backscatter receiver can send feedback to the backscatter transmitter to report any errors. The authors indicate that the challenge when designing the full-duplex system is that the amplitudes of the received signals at the backscatter receiver can change considerably when the receiver backscatters to send feedback signals. This issue arises due to the fact that the backscatter receiver uses the same antenna to transmit and receive signals. Therefore, the authors change the impedance of the antenna at the backscatter receiver to create phase shifts in the received signals, and thus the amplitudes of the received signals at the backscatter receiver are maintained. The authors then introduce a protocol with two steps for the feedback channel.
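A minimal Python sketch of the two-antenna ratio detection in (12)-(13) is given below; the channel gains, rates, and noise level are illustrative assumptions, and the per-bit median is used simply to keep the sample-wise ratio robust when the TV signal is momentarily close to zero.

import numpy as np

# Sketch of μmo-style detection: the ratio of the signals on Alice's two
# antennas removes the unknown TV signal s(t) and leaves one of two levels,
# depending on whether Bob is reflecting (B = 1) or not (B = 0).
rng = np.random.default_rng(3)
n_bits, spb = 30, 200
bob_bits = rng.integers(0, 2, size=n_bits)
s = rng.normal(size=n_bits * spb) + 1j * rng.normal(size=n_bits * spb)   # TV signal

h_rf, h_rf2 = 0.9, 0.6            # TV tower -> antenna 1 and antenna 2 (assumed)
h_b, h_b2 = 0.30, 0.05            # Bob -> antenna 1 and antenna 2 (assumed)
B = np.repeat(bob_bits, spb)

noise = lambda: 0.01 * (rng.normal(size=s.size) + 1j * rng.normal(size=s.size))
y1 = (h_rf + B * h_b) * s + noise()
y2 = (h_rf2 + B * h_b2) * s + noise()

# Per-bit ratio level: roughly |h_rf|/|h_rf2| when B = 0 and
# |h_rf + h_b|/|h_rf2 + h_b2| when B = 1, independent of s(t).
ratio = np.median((np.abs(y1) / np.abs(y2)).reshape(n_bits, spb), axis=1)
threshold = 0.5 * (ratio.min() + ratio.max())
decoded = (ratio > threshold).astype(int)
print("bit errors:", int(np.sum(decoded != bob_bits)))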
First, as soon as the backscatter receiver receives signals sent from the backscatter transmitter, it begins to transmit preamble bits on the feedback channel. It then divides the received signals into chunks of b bits and computes a c-bit checksum for each chunk. Second, the backscatter receiver transmits the checksums back to the backscatter transmitter. The values of b and c are determined by the ratio between the transmission rate of the data channel, i.e., the transmitter-to-receiver channel, and that of the feedback channel, i.e., the receiver-to-transmitter channel. In this way, the transmission times of the data and the feedback are approximately equal. By using the feedback data, the backscatter transmitter can detect errors and collisions, and is able to adjust its bitrate based on the channel condition. Additionally, by calculating a c-bit checksum for each chunk of b bits, the full-duplex technique allows the backscatter transmitter to re-transmit a subset of the bits rather than the whole chunk when an error is detected.

Liu et al. BIB010 observe that the above full-duplex technique focuses on mixed transmissions of data and feedback signals, and thus only supports asymmetric rates in the two directions. This is not well suited to future wireless applications, e.g., IoT, which require communication links among a tremendous number of devices. Therefore, Liu et al. BIB010 propose a novel multiple-access scheme, namely time-hopping full-duplex backscatter communication (BackCom), to simultaneously mitigate interference and enable symmetric full-duplex communications. In particular, the proposed scheme includes two components, i.e., a sequence-switch modulation and full-duplex BackCom. The key idea of the sequence-switch modulation is that bits are transmitted by switching between a pair of time-hopping spread-spectrum sequences with different nonzero chips to represent bits '0' and '1'. By doing this, the interference produced by time-hopping spread-spectrum is reduced. The numerical and simulation results demonstrate that the proposed full-duplex BackCom achieves higher performance in terms of BER and energy-transfer rates, and supports symmetric full-duplex data rates. However, this system occupies a large spectrum bandwidth since it adopts time-hopping spread-spectrum.

In BIB011 , a multi-phase backscatter modulator is introduced to circumvent the phase cancellation problem at the backscatter transmitters. The authors indicate that the phase difference between the ambient RF signals and the reflected signals at the backscatter receiver can significantly impact the amplitude of the received signals. Thus, during a cancellation phase, the backscatter receiver cannot extract data from the received signals. To address this problem, the authors propose a modulator for the backscatter transmitter which enables multi-phase backscattering. Under this scheme, the backscatter transmitter backscatters its data in two successive intervals with different phases. Thus, if there is a cancellation phase during one of the intervals, the other interval, which operates at a different phase, will be immune to the cancellation. To further improve the transmission performance, the authors propose a hybrid scheme which combines the backscattered signals from these intervals.
With an envelope detector, the backscatter receiver can identify four amplitude differences, which helps to differentiate between the amplitudes of the ambient RF signals and the reflected signals more accurately. The simulation and experimental results show that the proposed solutions successfully avoid the phase cancellation problem, and thus improve the communication range and robustness of ABCSs.

Kim et al. BIB015 propose an optimum modulation and coding scheme to maximize the network capacity of ABCSs. This scheme finds optimal values of the reflection coefficient α and the code rate ρ. The authors then formulate a joint optimization problem over α and ρ and use line search algorithms, such as the golden-section method, to find the solution. The simulation results demonstrate that the network capacity can be improved by up to 90% compared to conventional modulations, e.g., BPSK. The authors also note that there is a trade-off in selecting the variables α and ρ. For small α, the backscatter transmitter harvests more energy and reflects fewer signals to the backscatter receiver. Consequently, this may lead to an information outage. For large α, the backscatter transmitter harvests less energy and reflects more signals, which may result in a power outage. Likewise, for large ρ, the bitrate is increased but the reliability of the transmission deteriorates. In contrast, for small ρ, the reliability increases, while the bitrate is reduced.

Different from all aforementioned schemes, several works focus on signal detection techniques to improve the BER performance of ABCSs. Wang et al. BIB012 introduce an ML detector to minimize the BER without requiring channel state information. The authors indicate that the probability density functions of the conditional random variables vary across transmission slots. Additionally, as the channel state information is unknown, the backscatter receiver cannot distinguish which energy level corresponds to which state. Thus, it is difficult to detect and extract data at the backscatter receiver. Therefore, the ML detector uses an approximate threshold to measure the difference between two adjacent energy levels. If there is a significant change between two successive energy levels, the detector can decode the binary symbols sent from the backscatter transmitter. The simulation results show that the proposed ML detector can achieve BERs of around 10^-1 and 10^-2 at transmit SNRs of 5 dB and 30 dB, respectively.

Yang and Liang BIB013 propose a backscatter transceiver design to cancel the direct-link interference for backscatter communications over an ambient orthogonal frequency division multiplexing (OFDM) carrier without increasing hardware complexity. This is a novel joint design of the backscatter transmitter waveform and the detector of the backscatter receiver. The time duration of each backscatter transmitter symbol is set to one OFDM symbol period. Thus, for bit '1', there is an additional state transition in the middle of each OFDM symbol period within one backscatter transmitter symbol duration. Therefore, the designed waveform can be easily implemented in low-cost backscatter devices since it has similar characteristics to an FM waveform. Additionally, a cyclic prefix is added at the beginning of the OFDM signals to create a repeating structure. With this design, the backscatter receiver can remove the direct-link interference by using an ML detector which exploits the repeating structure of the ambient OFDM signals.
The simulation results show that the proposed solution can achieve a BER of 9 × 10^-4 at 24 dB of transmit SNR. Yang et al. BIB021 extend the work in BIB013 by considering a transceiver design for multi-antenna backscatter receivers. The authors propose an optimal detector to decode bits from the backscatter transmitter by using a linear combination of the received signals at each antenna. Similar to BIB013 , this detector also exploits the repeating structure of ambient OFDM signals to obtain interference-free signals. The simulation results demonstrate that the multi-antenna backscatter receiver can achieve better BER performance than the single-antenna backscatter receiver. In particular, the BER decreases quickly as the number of the backscatter receiver's antennas increases, i.e., from 0.5 × 10^-2 to about 10^-6 at an SNR of 9 dB when the number of antennas increases from 1 to 6.

Yang et al. BIB022 introduce a successive interference-cancellation (SIC) based detector to improve the BER performance of ABCSs. The key idea of this detector is to exploit the structural property of the system model. In particular, the backscatter receiver first detects the RF signals, then subtracts the resulting direct-link interference from the received signals, and recovers the backscattered signals. Finally, the RF signals are also estimated based on the backscattered signals. The numerical results show that when the backscattered signals and the RF signals have equal symbol periods, the proposed SIC-based detectors can achieve near-ML detection performance for typical application scenarios.

Although ML detectors can achieve a decent detection performance, their computational complexity may not be suitable for low-power backscatter receivers. Hence, Qian et al. BIB017 introduce a low-complexity detector which is able to maintain considerable detection performance. Similar to the ML detector in BIB012 , this detector also uses a detection threshold to determine '0' and '1' bits. Nevertheless, the threshold is computed using the statistical variances of the received signals, which are easy to derive. Thus, the computational complexity of the proposed detector is significantly reduced. Intuitively, with M transmitted symbols and N samples of the received signal per symbol, the ML detector requires at least (18M + N) complex multiplier and adder (CMA) units and 4M exponent arithmetic units to calculate the two probability density functions. Instead, the proposed detector needs just 4 CMAs. The simulation results also demonstrate that the BER performance of the proposed detector is as good as that of the ML detector in BIB012 .

Liu et al. BIB016 introduce a coding scheme to increase the throughput of backscatter communication. In the proposed coding scheme, three states, i.e., reflecting, non-reflecting, and negative-reflecting, are used. The reflecting and non-reflecting states are the same as in conventional ABCSs. In the negative-reflecting state, the backscatter transmitter adjusts its antenna impedance to reflect the RF signals with an inverted phase. With these three states, there are nine points in the signal constellation, and each point represents a three-bit symbol as shown in Fig. 11, where L is the unit distance between two adjacent constellation points.
Fig. 11. Signal constellation and coding scheme BIB016 .
Coding theory shows that it is possible to evaluate the BER performance of coding schemes by using the average Euclidean distance.
Thus, a coding scheme that excludes point (0,0), which lies closest to the other constellation points, achieves a larger average Euclidean distance and hence a lower BER. As such, the proposed coding scheme removes point (0,0) from the signal constellation to minimize the BER. Based on this coding scheme, the authors then design a maximum a posteriori (MAP) detector to detect signals at the backscatter receiver. Both the simulation and theoretical results demonstrate that the proposed solutions can reduce the BER to 10^-3 at 15 dB of transmit SNR and increase the throughput up to 10^-1 bits/s/Hz at 20 dB of transmit SNR.

2) Power Reduction: As the amount of energy harvested from ambient RF signals is usually small, backscatter transmitters may not have sufficient power for their operations. Thus, several solutions are proposed to deal with this problem. In fact, these solutions share the same idea as those in BBCSs, i.e., using low-power components in backscatter transmitter circuits. Kellogg et al. BIB019 introduce a passive Wi-Fi backscatter transmitter which harvests energy from a Wi-Fi AP. The backscatter transmitter is designed using low-power analog devices to reduce its energy consumption. The backscatter modulator of the backscatter transmitter consists of an HMC190BMS8 RF switch [108] to modulate data by adjusting the antenna impedance. Additionally, for baseband processing, the authors use a 65 nm LP CMOS node to save power. The authors then implement a prototype on a DE1 Cyclone II FPGA development board by Altera [150] to measure the power consumption of the backscatter transmitter. The experimental results demonstrate that the passive Wi-Fi backscatter transmitter consumes as little as 14.5 μW at 1 Mbps. Similarly, Wang et al. BIB020 design a low-power backscatter transmitter which uses off-the-shelf components such as a Tektronix 3252 arbitrary waveform generator as a modulator, an ADG902 as an RF switch, and a 65 nm LP CMOS node for baseband processing. With this design, the backscatter transmitter consumes only 11.7 μW of power. In BIB006 , a low-power backscatter transmitter is implemented on a four-layer printed circuit board using off-the-shelf components such as an ADG919 RF switch [152] connected directly to the antenna of the backscatter transmitter and the STMicroelectronics TS881 [153] as an ultra-low-power comparator. Furthermore, with a technique that re-transmits a subset of bits rather than the whole packet when an error occurs, the backscatter transmitter can save a significant amount of energy. The authors demonstrate that the proposed backscatter transmitter consumes around 0.25 μW for TX and 0.54 μW for RX.

3) Multiple Access: In ABCSs, there can be several backscatter transmitters operating simultaneously. Therefore, multiple access schemes are needed to achieve optimal network performance. Zhou et al. BIB018 propose a backscatter transmitter selection technique which allows K backscatter transmitters to communicate with a backscatter receiver. The transmission process is divided into slots, and each slot consists of three sub-slots as shown in Fig. 12.
Fig. 12. Slotted structure of the communication process between the backscatter receiver and K backscatter transmitters BIB018 .
The first sub-slot contains (K + 1)N_0 symbols. Note that the value of N_0 is not fixed. In the first N_0 symbols, the backscatter transmitters do not backscatter RF signals. In the following KN_0 symbols, each backscatter transmitter backscatters RF signals sequentially.
In other words, in the k-th of the following K groups of N_0 symbols, only the k-th backscatter transmitter backscatters RF signals for its own data transmission. In the second sub-slot, the backscatter receiver selects the backscatter transmitter with the best transmission condition based on the energy levels of the received signals in the first sub-slot. In the third sub-slot, the selected backscatter transmitter is able to transmit data to the backscatter receiver while the other backscatter transmitters remain silent. In this way, the backscatter receiver can handle transmissions from the backscatter transmitters without any interference. The simulation results demonstrate that the backscatter transmitter selection technique allows the backscatter receiver to successfully receive data from 8 backscatter transmitters.

Liu et al. BIB023 introduce a multi-access scheme to reduce the direct-link interference at the backscatter receiver. Their work considers an ambient backscatter multi-access system, e.g., for smart-home applications, which allows the backscatter receiver to detect both the signals sent from the RF source and those from the backscatter transmitters, instead of adopting cancellation techniques as in most existing works. Specifically, this multi-access system is different from conventional linear additive multi-access systems since the backscatter transmitters adopt multiplicative operations, and thus a multiplicative multi-access channel (M-MAC) model is adopted. The numerical results show that the achievable rate region of the M-MAC is larger than that of the conventional TDMA scheme. Moreover, the rate performance of the system in the range of 0-30 dB of the direct-link SNR is significantly improved.
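To make the slotted selection step described above concrete, the following Python sketch emulates how a receiver could pick the best of K transmitters from the probing sub-slot. The energy model, noise level, and channel gains below are illustrative assumptions rather than values from BIB018 .

```python
import numpy as np

def select_transmitter(rx_energy_probe, k_transmitters, n0):
    """Illustrative selection step of the slotted scheme: the receiver compares
    the average received energy observed while each transmitter backscattered in
    turn (first sub-slot) against the ambient-only level, and picks the
    transmitter whose reflection deviates the most from that level."""
    # rx_energy_probe: samples of length (k_transmitters + 1) * n0
    ambient = rx_energy_probe[:n0].mean()            # first N0 symbols: nobody backscatters
    scores = []
    for k in range(k_transmitters):                  # next K*N0 symbols: one transmitter at a time
        seg = rx_energy_probe[(k + 1) * n0:(k + 2) * n0]
        scores.append(abs(seg.mean() - ambient))     # deviation from the ambient-only level
    return int(np.argmax(scores))                    # transmitter with the best condition

# Toy example: 4 transmitters, N0 = 100 probe symbols each (all values hypothetical)
rng = np.random.default_rng(0)
n0, K = 100, 4
ambient_level = 1.0
gains = [0.05, 0.30, 0.10, 0.20]                     # hypothetical backscatter channel strengths
probe = [rng.normal(ambient_level, 0.02, n0)]
for g in gains:
    probe.append(rng.normal(ambient_level + g, 0.02, n0))
probe = np.concatenate(probe)
print("selected transmitter:", select_transmitter(probe, K, n0))  # expected: 1
```

In this toy run, the transmitter with the strongest reflection (index 1) wins the third sub-slot, mirroring the energy-level comparison performed by the receiver in the second sub-slot.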
Ambient Backscatter Communications: A Contemporary Survey <s> C. Potential Applications <s> This paper argues for a clean-slate redesign of wireless sensor systems to take advantage of the extremely low power consumption of backscatter communication and emerging ultra-low power sensor modalities. We make the case that existing sensing architectures incur substantial overhead for a variety of computational blocks between the sensor and RF front end - while these overheads were negligible on platforms where communication was expensive, they become the bottleneck on backscatter-based systems and increase power consumption while limiting throughput. We present a radically new design that is minimalist, yet efficient, and designed to operate end-to-end at tens of μWs while enabling high-data rate backscatter at rates upwards of many hundreds of Kbps. In addition, we demonstrate a complex reader-driven MAC layer that jointly considers energy, channel conditions, data utility, and platform constraints to enable network-wide throughput optimizations. We instantiate this architecture on a custom FPGA-based platform connected to microphones, and show that the platform consumes 73x lower power and has 12.5x higher throughput than existing backscatter-based sensing platforms. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Potential Applications <s> Recent years have witnessed the prevalence of wearable devices. Wearable devices are intelligent and multifunctional, but they rely heavily on batteries. This greatly limits their application scope, where replacement of battery or recharging is challenging or inconvenient. We note that wearable devices have the opportunity to harvest energy from human motion, as they are worn by the people as long as being functioning. In this study, we propose a battery-free sensing platform for wearable devices in the form-factor of shoes. It harvests the kinetic energy from walking or running to supply devices with power for sensing, processing and wireless communication, covering all the functionalities of commercial wearable devices. We achieve this goal by enabling the whole system running on the harvested energy from two feet. Each foot performs separate tasks and two feet are coordinated by ambient backscatter communication. We instantiate this idea by building a prototype, containing energy harvesting insoles, power management circuits and ambient backscatter module. Evaluation results demonstrate that the system can wake up shortly after several seconds' walk and have sufficient Bluetooth throughput for supporting many applications. We believe that our framework can stir a lot of useful applications that were infeasible previously. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Potential Applications <s> The Internet-of-Things (IoT) is an emerging concept of network connectivity anytime and anywhere for billions of everyday objects, which has recently attracted tremendous attention from both the industry and academia. The rapid growth of IoT has been driven by recent advancements in consumer electronics, wireless network densification, 5G communication technologies, and cloud-computing enabled big-data analytics. One of the key challenges for IoT is the limited network lifetime due to massive IoT devices being powered by batteries with finite capacities. 
The low-power and low-complexity backscatter communications (BackCom), which simply relies on passive reflecting and modulation an incident radio-frequency (RF) wave, has emerged to be a promising technology for tackling this challenge. However, the contemporary BackCom has several major limitations, such as short transmission range, low data rate, and uni-directional information transmission. In this article, we present an overview of the next generation BackCom by discussing basic principles, system and network architectures and relevant techniques. Lastly, we describe the IoT application scenarios with the next generation BackCom. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Potential Applications <s> This paper enables connectivity on everyday objects by transforming them into FM radio stations. To do this, we show for the first time that ambient FM radio signals can be used as a signal source for backscatter communication. Our design creates backscatter transmissions that can be decoded on any FM receiver including those in cars and smartphones. This enables us to achieve a previously infeasible capability: backscattering information to cars and smartphones in outdoor environments. ::: Our key innovation is a modulation technique that transforms backscatter, which is a multiplication operation on RF signals, into an addition operation on the audio signals output by FM receivers. This enables us to embed both digital data as well as arbitrary audio into ambient analog FM radio signals. We build prototype hardware of our design and successfully embed audio transmissions over ambient FM signals. Further, we achieve data rates of up to 3.2 kbps and ranges of 5-60 feet, while consuming as little as 11.07{\mu}W of power. To demonstrate the potential of our design, we also fabricate our prototype on a cotton t-shirt by machine sewing patterns of a conductive thread to create a smart fabric that can transmit data to a smartphone. We also embed FM antennas into posters and billboards and show that they can communicate with FM receivers in cars and smartphones. <s> BIB004
The development of ambient backscatter techniques opens considerable opportunities for D2D communications. Thus, ABCSs can be adopted in many applications such as smart life, logistics, and medical biology , BIB003 , BIB004 , , BIB002 , . ABCSs allow devices, i.e., backscatter transmitters, to operate independently with minimal human intervention.

1) Smart World: ABCSs can be deployed in many areas to improve quality of life. For example, in a smart home, a large number of passive backscatter sensor transmitters can be placed at flexible locations, e.g., inside walls, ceilings, and furniture BIB003 . These backscatter sensor transmitters can operate for a long period of time without additional power sources and maintenance. The applications include detection of toxic gases, e.g., smoke and CO, monitoring of movements, and surveillance. Zhang et al. BIB001 propose EkhoNet, a high-speed, ultra-low-power backscatter platform for next-generation sensors. EkhoNet is more efficient than conventional sensing platforms, e.g., WISP 5.0 or Moo, in terms of power consumption and throughput. In particular, the Ekho system consumes 35 μW of power for sampling an accelerometer at a sampling rate of 400 Hz and 37 μW of power for sampling an audio sensor at a rate of 44 kHz. Furthermore, backscatter transmitters can be embedded inside a variety of objects. A proof-of-concept of ABCSs, i.e., a smart card application, was first introduced in . The authors implement a simple scenario by letting a smart card transmit the text "Hello World" to another smart card. The experimental results show that the text can be transmitted at a bitrate of 1 kbps over a range of 4 inches with a success ratio of 94%. Wang et al. BIB004 deploy a backscatter transmitter inside a poster. By using ambient signals from a local FM station operating at 94.9 MHz, the poster can transmit text and audio data to its receiver, e.g., a smart phone, to show its supplementary contents. The experimental results show that the prototype can achieve a bitrate of 100 bps at distances of up to ten feet with an ambient signal power of −35 dBm to −40 dBm.

2) Biomedical Applications: Biomedical applications such as wearable and implantable health monitoring require small and long-lasting communication devices. Ambient backscatter transmitters can meet these requirements, and some biomedical prototypes have been implemented. For example, Huang et al. BIB002 design a battery-free platform for wearable devices, e.g., smart shoes, through backscattering ambient RF signals. A pair of shoes is implemented with sensors and ambient backscatter modules. The sensors in each shoe perform separate tasks, e.g., counting steps and measuring heart rate, and the two shoes are coordinated by using the ambient backscatter modules. The experimental results demonstrate that the system can wake up after 5-9 seconds of walking and transmit data with throughputs of 60 bytes every 48 seconds while jogging (1-2 Hz per foot) and 48 bytes per minute while walking (1 Hz per foot). Furthermore, by using ultra-low-power components, the proposed platform consumes a small amount of energy, e.g., the micro-controller draws only 0.9 μA in sleep mode and 180 μA in active mode. However, the bitrate may be significantly reduced when the moving speed is high. Another interesting application is introduced in [75], i.e., smart fabric. The authors embed a backscatter module inside a shirt to monitor vital signs such as heart rate and breathing rate.
The bitrate between the backscatter module and its receiver, i.e., a smart phone, is set to either 100 bps or 1.6 kbps. The experimental results show that with an ambient signal of −35 dBm to −40 dBm, the BER is roughly 0.02 at a bitrate of 1.6 kbps. However, at the low bitrate, i.e., 100 bps, the BER is less than 0.005.

3) Logistics: ABCSs can also be adopted in logistics applications because of their low cost. In , an ABCS is implemented to detect when an item is out of place in a grocery store. Each item is equipped with a backscatter transmitter and has a specific identification number. The backscatter transmitter broadcasts its identification number at an interval of 5 seconds. Furthermore, all backscatter transmitters in the network periodically listen for and store the identification numbers of their neighboring backscatter transmitters. In this way, a backscatter transmitter can indicate whether it is out of place by comparing its identification number with those of its neighbors. The experimental results show that the backscatter transmitter needs less than 20 seconds to successfully detect whether it is out of place or not.
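As a rough sketch of the neighbor-comparison idea just described, the snippet below flags an item whose identifier belongs to a different product group than the majority of the identifiers it overhears. The ID format, the group mapping, and the majority rule are assumptions for illustration only, not details from the cited implementation.

```python
from collections import Counter

def is_out_of_place(my_id, neighbor_ids, group_of):
    """Hypothetical check: an item is flagged when its product group differs
    from the majority group among the neighbor IDs it has overheard."""
    if not neighbor_ids:
        return False                      # nothing overheard yet, cannot decide
    groups = Counter(group_of(n) for n in neighbor_ids)
    majority_group, _ = groups.most_common(1)[0]
    return group_of(my_id) != majority_group

# Toy run: IDs are "group-serial" strings; this mapping is an assumption.
group_of = lambda tag_id: tag_id.split("-")[0]
print(is_out_of_place("soap-17", ["soap-03", "soap-11", "soap-42"], group_of))  # False
print(is_out_of_place("soda-08", ["soap-03", "soap-11", "soap-42"], group_of))  # True
```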
Ambient Backscatter Communications: A Contemporary Survey <s> A. Tag-to-Tag Communication RFID Systems <s> In this paper we propose and discuss an optimized link state routing protocol, named OLSR, for mobile wireless networks. The protocol is based on the link state algorithm and it is proactive (or table-driven) in nature. It employs periodic exchange of messages to maintain topology information of the network at each node. OLSR is an optimization over a pure link state protocol as it compacts the size of information sent in the messages, and furthermore, reduces the number of retransmissions to flood these messages in an entire network. For this purpose, the protocol uses the multipoint relaying technique to efficiently and economically flood its control messages. It provides optimal routes in terms of number of hops, which are immediately available when needed. The proposed protocol is best suitable for large and dense ad hoc networks. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Tag-to-Tag Communication RFID Systems <s> Radio-frequency identification (RFID) blind spots are the regions within the maximum operating range of the RFID system where the RFID reader fails to read the RFID tag. The existence of blind spots have troubled supply chain management and RFID system engineers because any failed or omitted reading of RFID tag would slow down the inventory tracking process. This paper studies the potential locations of blind spots as well as the effectiveness of several blind spots remedy methods such as frequency diversity, spatial diversity, polarization diversity, and antenna beam steering. Using the blind spots creation approach introduced in this paper, the locations of the blind spots can be calculated and visualized. From our simulation results, spatial diversity and polarization diversity are found to be better than all other aforementioned approaches. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Tag-to-Tag Communication RFID Systems <s> In this paper, we describe a novel passive RFID system capable of direct tag-to-tag communication in the presence of external radio frequency field. Tags talk by modulating the external field and thus backscattering the commands to each other. We present the system concept and show its hardware implementation based on TI MSP430 microcontroller. We also provide the theoretical model for modulation depth vs. distance which agrees with experimental results (maximum tag-to-tag communication distance). Finally, we discuss possible applications and outline future work. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Tag-to-Tag Communication RFID Systems <s> A routing protocol for Passive RFID tag-to-tag networks is not reported in contrast with routing protocols for battery powered mobile wireless ad hoc and sensor networks. In this paper, we provide a cross-layer approach for passive RFID tag-to-tag communication. In the data link layer, we designed the medium access control (MAC) protocol which is suitable for passive tag-to-tag communication. In the network layer, we developed the optimal link cost multipath routing (OLCMR) protocol by using modulation depth as the link cost. Simulation results verify that proposed routing protocol consumes less energy when it is compared to optimum link state routing (OLSR). 
Additionally, when compared to the single path routing, the proposed multipath routing protocol increases the delivery ratio significantly. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> A. Tag-to-Tag Communication RFID Systems <s> Battery-free sensors, such as RFIDs, are annually attached to billions of items including pharmaceutical drugs, clothes, and manufacturing parts. The fundamental challenge with battery-free sensors is that they are only reliable at short distances of tens of centimeters to few meters. As a result, today's systems for communicating with and localizing battery-free sensors are crippled by the limited range. To overcome this challenge, this paper presents RFly, a system that leverages drones as relays for battery-free networks. RFly delivers two key innovations. It introduces the first full-duplex relay for battery-free networks. The relay can seamlessly integrate with a deployed RFID infrastructure, and it preserves phase and timing characteristics of the forwarded packets. RFly also develops the first RF-localization algorithm that can operate through a mobile relay. We built a hardware prototype of RFly's relay into a custom PCB circuit and mounted it on a Parrot Bebop drone. Our experimental evaluation demonstrates that RFly enables communication with commercial RFIDs at over 50 m. Moreover, its through-relay localization algorithm has a median accuracy of 19 centimeters. These results demonstrate that RFly provides powerful primitives for communication and localization in battery-free networks. <s> BIB005
RFID systems are mainly used for tracking and identifying objects, e.g., in supply chain applications. However, RFID systems are only reliable at a range of a few meters as the tags rely on RFID readers to backscatter their data. Studies have shown that even if a dense infrastructure of RFID readers is deployed, 20-80% of RFID tags may still be located in blind spots BIB002 . This stems from the fact that the communication between the tags and the reader can be adversely affected by interference or orientation misalignment BIB002 . Consequently, RFID readers need to be carefully deployed around the areas, e.g., in warehouses, to collect information from the tags. This is a major challenge for Amazon and Walmart nowadays BIB005 . Therefore, novel RFID systems are proposed to deal with this problem.

Nikitin et al. BIB003 propose a passive RFID system in which tags can communicate with each other directly. To do so, the tags backscatter signals from an RF source. If these signals are strong, the tags can be completely passive. Otherwise, the tags are equipped with batteries, i.e., semi-passive, but they still communicate with each other by backscattering and require no active RF transmitter. Nikitin et al. BIB003 then introduce a proof-of-concept passive tag-to-tag communication system as shown in Fig. 13.
Fig. 13. Block diagram of the proof-of-concept passive tag-to-tag communication system BIB003 .
This system works in a master-slave mode, which is compatible with existing Gen2 tags . The master tag backscatters commands, i.e., queries, to the slave tags around it, then receives and decodes the tag identification numbers and other data simultaneously. The slave tags are simple Gen2 tags which respond to the master tag's commands through their RN16 messages . The RN16 is a 16-bit random number generated by the tag and used for tag identification. An RF signal analyzer is deployed to ensure that the Gen2 tags correctly respond to the master tag. The experimental results demonstrate that the proposed tag-to-tag communication system is feasible. However, the authors note that the maximum reliable tag-to-tag communication distance, i.e., the distance between the master and the slave tag, is below 1 inch, since the communication is very sensitive to the positions of the tags.

Inspired by BIB003 , Niu and Jagannathan BIB004 propose a cross-layer design to improve the performance of tag-to-tag communication systems. This approach consists of two protocols, i.e., a multiple access protocol in the data link layer and a routing protocol in the network layer. For the multiple access protocol, similar to Ethernet or 802.11, a carrier sense multiple access (CSMA) scheme based on the network allocation vector is adopted. However, this scheme requires timers to run precisely and consistently, and thus it needs more computational resources as well as memory at the tags. Therefore, the authors propose a Dual-ACK virtual carrier-sensing method. The key idea is that the network allocation vector table is updated only when request-to-send, clear-to-send, and acknowledgment messages are detected. In addition, to deal with the hidden-node problem, two acknowledgment messages are used to ensure that the states of the network allocation vector table are correct. As such, the tag does not need to rely on its timers, and the access to the transmission medium is reduced accordingly.
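The following sketch gives one possible reading of the Dual-ACK virtual carrier-sensing idea described above: a tag marks the medium busy when it overhears an RTS or CTS and clears the entry only after both acknowledgments of that exchange are observed, so no countdown timer is needed. The message names, fields, and two-ACK bookkeeping here are assumptions for illustration, not the protocol specification in BIB004 .

```python
def update_nav(nav, msg):
    """nav: dict mapping exchange_id -> number of ACKs still expected."""
    if msg["type"] in ("RTS", "CTS"):
        nav[msg["exchange_id"]] = 2          # medium reserved until two ACKs are seen
    elif msg["type"] == "ACK":
        if msg["exchange_id"] in nav:
            nav[msg["exchange_id"]] -= 1
            if nav[msg["exchange_id"]] <= 0:
                del nav[msg["exchange_id"]]  # exchange finished, medium free again
    return nav

def medium_idle(nav):
    return len(nav) == 0

nav = {}
for m in [{"type": "RTS", "exchange_id": 7},
          {"type": "ACK", "exchange_id": 7},
          {"type": "ACK", "exchange_id": 7}]:
    nav = update_nav(nav, m)
print(medium_idle(nav))  # True: both ACKs observed, the tag may contend for the medium
```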
For the routing protocol, the authors design an optimal link cost multi-path routing (OLCMR) protocol based on the modulation depth, i.e., the ratio of the higher voltage level to the lower voltage level of the demodulated signals. Similar to the Optimized Link State Routing (OLSR) protocol BIB001 , OLCMR also constructs a routing table which includes the destination address, the next-hop address, the number of hops, and the cost to that destination. However, unlike OLSR, the cost in OLCMR is computed by using the modulation depth. The bigger the modulation depth is, the more energy the tags can harvest. The simulation results demonstrate that the proposed cross-layer design improves the performance of tag-to-tag communication networks in terms of end-to-end (E2E) delay, E2E cost, i.e., the modulation depth, and packet delivery ratio. In particular, the E2E delay is around 15 ms when the number of tags is 160 in the case without collisions and around 70 ms with 140 tags in the case with collisions. Moreover, the delivery ratio significantly increases, up to 98% with 150 tags in the field. However, it is noted that there is a trade-off among the delivery ratio, the number of E2E hops, and the E2E cost. In particular, a higher delivery ratio requires more hops and incurs a higher cost.
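The following minimal sketch illustrates how a modulation-depth-based link cost could drive route selection. It is a single-path simplification of OLCMR, which is multi-path, and the inverse-depth cost, the topology, and the depth values are assumptions chosen only to show the mechanism.

```python
import heapq

def best_route(links, src, dst):
    """Dijkstra over a tag-to-tag graph where, as an assumption, the link cost
    is the inverse of the measured modulation depth: a deeper modulation
    (stronger link) yields a cheaper edge."""
    graph = {}
    for (a, b), depth in links.items():
        cost = 1.0 / depth
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                                  # stale heap entry
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Toy network: hypothetical modulation depths per tag-to-tag link
links = {("T1", "T2"): 4.0, ("T2", "T4"): 3.5, ("T1", "T3"): 1.2, ("T3", "T4"): 1.1}
print(best_route(links, "T1", "T4"))  # prefers the T1-T2-T4 path with deeper modulation
```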
Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> This paper studies the newly emerging wireless powered communication network in which one hybrid access point (H-AP) with constant power supply coordinates the wireless energy/information transmissions to/from a set of distributed users that do not have other energy sources. A "harvest-then-transmit" protocol is proposed where all users first harvest the wireless energy broadcast by the H-AP in the downlink (DL) and then send their independent information to the H-AP in the uplink (UL) by time-division-multiple-access (TDMA). First, we study the sum-throughput maximization of all users by jointly optimizing the time allocation for the DL wireless power transfer versus the users' UL information transmissions given a total time constraint based on the users' DL and UL channels as well as their average harvested energy values. By applying convex optimization techniques, we obtain the closed-form expressions for the optimal time allocations to maximize the sum-throughput. Our solution reveals an interesting "doubly near-far" phenomenon due to both the DL and UL distance-dependent signal attenuation, where a far user from the H-AP, which receives less wireless energy than a nearer user in the DL, has to transmit with more power in the UL for reliable information transmission. As a result, the maximum sum-throughput is shown to be achieved by allocating substantially more time to the near users than the far users, thus resulting in unfair rate allocation among different users. 
To overcome this problem, we furthermore propose a new performance metric so-called common-throughput with the additional constraint that all users should be allocated with an equal rate regardless of their distances to the H-AP. We present an efficient algorithm to solve the common-throughput maximization problem. Simulation results demonstrate the effectiveness of the common-throughput approach for solving the new doubly near-far problem in wireless powered communication networks. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> Backscatter radio communication has become a newly emerging technique for low-rate, low-power and large-scale wireless sensor networks. As this promising technology enables a long-range communication for sensors with low power in a distributed area, it is desirable to support wireless powered communication networks (WPCNs) that experience doubly near-far problem. In a backscatter radio based WPCN, users harvest energy from both the signal broadcast by the hybrid access point and the carrier signal transmitted by the carrier emitter in the downlink and transmit their own information in a passive way via the reflection of the carrier signal using frequency-shift keying modulation in the uplink. We characterize the energy-free condition and the signal-to-noise ratio (SNR) outage zone in a backscatter radio based WPCN. Numerical results demonstrate that the backscatter radio based WPCN achieves an increased long-range coverage and a diminished SNR outage zone compared to the active radio based WPCNs. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> In this paper, we introduce a new model for RF-powered cognitive radio networks with the aim to improve the performance for secondary systems. In our proposed model, when the primary channel is busy, the secondary transmitter is able either to backscatter the primary signals to transmit data to the secondary receiver or to harvest RF energy from the channel. The harvested energy then will be used to transmit data to the receiver when the channel becomes idle. We first analyze the tradeoff between backscatter communication and harvest-then-transmit protocol in the network. To maximize the overall transmission rate of the secondary network, we formulate an optimization problem to find time ratio between taking backscatter and harvest-then-transmit modes. Through numerical results, we show that under the proposed model can achieve the overall transmission rate higher than using either the backscatter communication or the harvest-then-transmit protocol. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> In this paper, we study an overlay RF-powered cognitive radio network with ambient backscatter communications. In the network, when the channel is occupied, the secondary transmitter (ST) can perform either energy harvesting or data transmission using ambient backscattering technique to a gateway. We consider the case that the gateway charges the ST a certain price if the ST transmits information. This leads to questions of how to determine the best price for the gateway and how to find the optimal backscatter time. 
To address this problem, we propose a Stackelberg game in which the gateway is the leader adapting the price to maximize its profit in the first stage. Meanwhile, the ST chooses its backscatter time to maximize its utility in the second stage. To analyze the game, we apply the backward induction technique. We show that the game always has a unique subgame perfect Nash equilibrium. Additionally, our results provide insights on the impact of the competition on the players' profit and utility. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> In this paper, we propose a novel network model for RF-powered cognitive radio networks and ambient backscatter communications. In the network under consideration, each secondary transmitter is able to backscatter primary signals to the gateway for data transfer or to harvest energy from the primary signals and then use that energy to transmit data to the gateway. To maximize overall network throughput of the network, we formulate an optimization problem with the aim of finding not only an optimal tradeoff between data backscattering time and energy harvesting time, but also time sharing among multiple secondary transmitters. Through the numerical results, we demonstrate that the solution of the optimization problem always achieves the best performance compared with two other baseline schemes. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> This paper introduces a new solution to improve the performance for secondary systems in radio frequency (RF) powered cognitive radio networks (CRNs). In a conventional RF-powered CRN, the secondary system works based on the harvest-then-transmit protocol. That is, the secondary transmitter (ST) harvests energy from primary signals and then uses the harvested energy to transmit data to its secondary receiver (SR). However, with this protocol, the performance of the secondary system is much dependent on the amount of harvested energy as well as the primary channel activity, e.g., idle and busy periods. Recently, ambient backscatter communication has been introduced, which enables the ST to transmit data to the SR by backscattering ambient signals. Therefore, it is potential to be adopted in the RF-powered CRN. We investigate the performance of RF-powered CRNs with ambient backscatter communication over two scenarios, i.e., overlay and underlay CRNs. For each scenario, we formulate and solve the optimization problem to maximize the overall transmission rate of the secondary system. Numerical results show that by incorporating such two techniques, the performance of the secondary system can be improved significantly compared with the case when the ST performs either harvest-then-transmit or ambient backscatter technique. <s> BIB007 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> Ambient backscatter communication technology has been introduced recently, and is quickly becoming a promising choice for self-sustainable communication systems, as an external power supply or a dedicated carrier emitter is not required. 
By leveraging existing RF signal resources, ambient backscatter technology can support sustainable and independent communications and consequently open up a whole new set of applications that facilitate Internet of things (IoT). In this article, we study an integration of ambient backscatter with wireless powered communication networks (WPCNs). We first present an overview of backscatter communication systems with an emphasis on the emerging ambient backscatter technology. Then we propose a novel hybrid transmitter design by combining the advantages of both ambient backscatter and wireless powered communications. Furthermore, in the cognitive radio environment, we introduce a multiple access scheme to coordinate hybrid data transmissions. The performance evaluation shows that the hybrid transmitter outperforms traditional designs. In addition, we discuss open issues related to ambient backscatter networking. <s> BIB008 </s> Ambient Backscatter Communications: A Contemporary Survey <s> B. RF-Powered Cognitive Radio Networks and Backscatter Communications <s> In this paper, we study the transmission strategy adaptation problem in an RF-powered cognitive radio network, in which hybrid secondary users are able to switch between the harvest-then-transmit mode and the ambient backscatter mode for their communication with the secondary gateway. In the network, a monetary incentive is introduced for managing the interference caused by the secondary transmission with imperfect channel sensing. The sensing-pricing-transmitting process of the secondary gateway and the transmitters is modeled as a single-leader-multi-follower Stackelberg game. Furthermore, the follower sub-game among the secondary transmitters is modeled as a generalized Nash equilibrium problem with shared constraints. Based on our theoretical discoveries regarding the properties of equilibria in the follower sub-game and the Stackelberg game, we propose a distributed, iterative strategy searching scheme that guarantees the convergence to the Stackelberg equilibrium. The numerical simulations show that the proposed hybrid transmission scheme always outperforms the schemes with fixed transmission modes. Furthermore, the simulations reveal that the adopted hybrid scheme is able to achieve a higher throughput than the sum of the throughput obtained from the schemes with fixed transmission modes. <s> BIB009
Cognitive Radio Networks (CRNs) were first introduced in to utilize the spectrum more efficiently. Specifically, in CRNs, a radio can dynamically reconfigure its transmitter parameters, e.g., waveform, protocol, frequency, and transmit power, according to the conditions of its operating environment. In this way, CRNs ensure optimized communications in a given spectrum band. In an RF-powered CRN BIB001 , a secondary transmitter (ST) can harvest energy from the signals of a primary transmitter (PT) and use the harvested energy to directly transmit data to the secondary receiver (SR) when the primary transmitter is not transmitting or is sufficiently far away. This is known as the harvest-then-transmit (HTT) method. There are three modes of CRNs, i.e., overlay, underlay, and interweave CRNs. In the overlay mode, the ST can harvest energy when the primary channel is busy and use the energy to transmit data when the primary channel is idle. In the underlay mode, the ST can transmit simultaneously with PTs, and thus the ST has to control its transmit power to avoid interference to the primary receiver. In the interweave mode, the ST can transmit only when the primary channel is idle, with the maximum power in accordance with the spectral mask. Whenever the primary channel is busy, the ST must immediately cease its transmission and search for another white space, i.e., spectrum that is currently not utilized by the PT. To improve the performance of RF-powered CRNs, many works in the literature focus on the integration of backscatter communications and RF-powered CRNs. While overlay and underlay CRNs have been extensively studied and analyzed, the interweave mode is still an open issue and needs to be investigated.

1) Overlay Cognitive Radio Networks: Hoang et al. BIB004 indicate that the performance of RF-powered CRNs depends greatly on the amount of harvested energy and the condition of the primary channels. For example, when the amount of harvested energy is too small and/or the channel idle probability is low, i.e., the PT frequently accesses the channel, the total number of transmitted bits is reduced. Thus, the authors propose a combination of the RF-powered CRN and the ambient backscatter communications system, namely the RF-powered backscatter CRN, which allows the ST not only to harvest energy from primary signals, but also to transmit data to the SR by backscattering primary signals. Note that backscatter communications and energy harvesting cannot be performed efficiently at the same time. The reason is that the amount of harvested energy will be significantly reduced if the ST backscatters data, and thus may not be enough for active RF transmission. The authors define three different sub-periods corresponding to three activities, i.e., backscattering data, harvesting energy, and transmitting data, as shown in Fig. 14. In particular, when the PT transmits data, i.e., the channel is busy, the ST can transmit data by using backscatter communications or harvest energy from the RF signals. Otherwise, when the channel is idle, if there is sufficient energy, the ST transmits data to the SR. Therefore, there is a trade-off between the backscatter time and the HTT time to achieve the optimal network throughput. It is then proved that the network throughput is a convex function of the time allocation, and thus there always exists a globally optimal network throughput.
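To illustrate the backscatter-versus-HTT trade-off just described, the toy model below sweeps the fraction of the busy period spent backscattering and keeps the rest for harvesting; harvested energy then limits the active transmission during the idle period. All rates, efficiencies, and the busy fraction are assumptions for illustration, not the formulation or parameters of BIB004 .

```python
import numpy as np

# Illustrative trade-off over one normalized period (all numbers are assumptions).
busy, idle = 0.6, 0.4          # fraction of the period the primary channel is busy / idle
rate_backscatter = 1.0         # kbps achieved while backscattering primary signals
harvest_power = 0.5            # energy units harvested per unit of harvesting time
tx_energy_per_kb = 0.2         # energy needed to actively transmit 1 kb in the idle period
rate_active = 1.5              # kbps of the active (HTT) transmission

def total_bits(alpha):
    """alpha: fraction of the busy period spent backscattering (the rest harvests)."""
    bits_bs = alpha * busy * rate_backscatter
    energy = (1 - alpha) * busy * harvest_power
    # active transmission is limited by both the idle time and the harvested energy
    bits_htt = min(idle * rate_active, energy / tx_energy_per_kb)
    return bits_bs + bits_htt

alphas = np.linspace(0.0, 1.0, 1001)
best = alphas[np.argmax([total_bits(a) for a in alphas])]
print(f"best backscatter fraction ~ {best:.2f}, throughput ~ {total_bits(best):.3f}")
# with these assumed numbers the search settles around 0.60: harvesting beyond that
# point wastes energy the idle period cannot spend, while harvesting less starves HTT
```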
The numerical results show that the solution of the proposed optimization problem achieves significantly better performance than using either backscatter communications or the HTT protocol alone. Hoang et al. BIB005 consider the case in which the SR charges a price/fee to the ST if the ST backscatters data to the SR. A Stackelberg game model for the RF-powered backscatter CRN is introduced. In the first stage of the game, the SR, i.e., the leader, offers a price for the backscatter time to the ST, i.e., the follower, such that the SR's expected profit is maximized. Then, in the second stage, given the offered price, the ST chooses its optimal backscatter time to maximize its utility. To find the Stackelberg equilibrium, the authors adopt backward induction. The simulation results demonstrate that the solution of the Stackelberg game can maximize the profit of the SR while the utility of the ST is optimized given the optimal price of the SR. This work is then extended to multiple STs in BIB009 . In BIB006 , RF-powered CRNs with multiple STs are studied. Similar to BIB004 , the authors formulate an optimization problem to find the trade-off between the data backscattering time and the energy harvesting time that maximizes the network throughput. The authors demonstrate that the objective function, i.e., the network throughput, is concave. Thus, there exists a globally optimal trade-off between the data backscattering and energy harvesting time, as well as the time sharing among the STs.

Lu et al. BIB008 introduce a hybrid transmitter that integrates ambient backscatter with wireless-powered communication capability to improve transmission performance. The structure of this hybrid transmitter is shown in Fig. 15.
Fig. 15. The structure of the hybrid transmitter BIB008 .
The transmitter consists of the following components:
• Antenna: shared by an RF energy harvester, a load modulator, and an active RF transceiver,
• RF energy harvester: to harvest energy from RF signals,
• Load modulator: to modulate data for ambient backscatter communications, and
• Active RF transceiver: to transmit or receive active RF signals for wireless-powered communications.
In comparison with an ambient backscatter transmitter or a wireless-powered transmitter alone, this hybrid transmitter has many advantages, such as supporting a longer duty cycle and transmission range. Additionally, the authors propose a multiple access scheme for ambient backscatter-assisted WPCNs to maximize the sum throughput of all STs in the CRN. The key idea is similar to that of BIB006 . Through the numerical results, the authors demonstrate the superiority of the proposed hybrid transmitter compared with the traditional designs , BIB002 .

2) Underlay Cognitive Radio Networks: In both BIB004 and BIB005 , the authors only consider overlay CRNs, in which the ST can harvest energy when the channel is busy and transmit data when the channel is idle. Instead, in BIB007 , the authors extend the work in BIB004 by considering both overlay and underlay CRNs.
Fig. 16. A backscatter radio based wireless-powered communication network BIB003 .
Different from the overlay CRN, in the underlay CRN, the primary channel is always busy. Thus, the transmit power of the ST needs to be controlled to avoid interference to the primary receiver (PR). The authors define a threshold value for the transmit power of the ST to ensure that the interference at the PR is acceptable.
Moreover, to maximize the transmission rate, the authors determine an optimal trade-off among the backscattering, energy harvesting, and transmission times, under the transmit power constraint of the ST. The simulation results show that the proposed approach enables RF-powered CRN nodes to choose the best mode to operate in, thereby improving the performance of the whole system.
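As a rough, assumption-laden illustration of this underlay trade-off (it is not the formulation in BIB007 ), the sketch below splits one period among backscattering, harvesting, and interference-constrained active transmission, and grid-searches the split that maximizes the rate.

```python
import numpy as np

# Toy underlay model: the primary channel is always busy, so the ST splits its time
# between backscattering (tau_b), harvesting (tau_h), and active transmission
# (1 - tau_b - tau_h) whose power is capped at p_max to protect the PR.
# All parameter values below are assumptions for illustration.
rate_backscatter = 0.8      # bits/s/Hz while backscattering
harvest_power = 0.6         # harvested energy per unit of harvesting time
p_max = 0.5                 # interference-constrained transmit power of the ST
noise = 0.1

def rate(tau_b, tau_h):
    tau_t = max(1.0 - tau_b - tau_h, 0.0)
    if 1.0 - tau_b - tau_h < -1e-12:
        return -np.inf                              # infeasible split
    # transmit power limited by both the harvested energy and the interference cap
    p_tx = 0.0 if tau_t == 0.0 else min(p_max, harvest_power * tau_h / tau_t)
    return tau_b * rate_backscatter + tau_t * np.log2(1.0 + p_tx / noise)

grid = np.linspace(0.0, 1.0, 101)
best = max(((rate(b, h), b, h) for b in grid for h in grid), key=lambda x: x[0])
print(f"rate ~ {best[0]:.3f} at tau_b ~ {best[1]:.2f}, tau_h ~ {best[2]:.2f}")
```

With these assumed numbers the harvest-then-transmit pair dominates and the search pushes the backscattering time toward zero; tightening the interference cap p_max in this toy model flips the balance toward backscattering, which mirrors the mode-selection behavior discussed above.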
Ambient Backscatter Communications: A Contemporary Survey <s> C. Wireless-Powered Communication Networks and Backscatter Communication <s> This paper studies the newly emerging wireless powered communication network in which one hybrid access point (H-AP) with constant power supply coordinates the wireless energy/information transmissions to/from a set of distributed users that do not have other energy sources. A "harvest-then-transmit" protocol is proposed where all users first harvest the wireless energy broadcast by the H-AP in the downlink (DL) and then send their independent information to the H-AP in the uplink (UL) by time-division-multiple-access (TDMA). First, we study the sum-throughput maximization of all users by jointly optimizing the time allocation for the DL wireless power transfer versus the users' UL information transmissions given a total time constraint based on the users' DL and UL channels as well as their average harvested energy values. By applying convex optimization techniques, we obtain the closed-form expressions for the optimal time allocations to maximize the sum-throughput. Our solution reveals an interesting "doubly near-far" phenomenon due to both the DL and UL distance-dependent signal attenuation, where a far user from the H-AP, which receives less wireless energy than a nearer user in the DL, has to transmit with more power in the UL for reliable information transmission. As a result, the maximum sum-throughput is shown to be achieved by allocating substantially more time to the near users than the far users, thus resulting in unfair rate allocation among different users. To overcome this problem, we furthermore propose a new performance metric so-called common-throughput with the additional constraint that all users should be allocated with an equal rate regardless of their distances to the H-AP. We present an efficient algorithm to solve the common-throughput maximization problem. Simulation results demonstrate the effectiveness of the common-throughput approach for solving the new doubly near-far problem in wireless powered communication networks. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Wireless-Powered Communication Networks and Backscatter Communication <s> Backscatter radio communication has become a newly emerging technique for low-rate, low-power and large-scale wireless sensor networks. As this promising technology enables a long-range communication for sensors with low power in a distributed area, it is desirable to support wireless powered communication networks (WPCNs) that experience doubly near-far problem. In a backscatter radio based WPCN, users harvest energy from both the signal broadcast by the hybrid access point and the carrier signal transmitted by the carrier emitter in the downlink and transmit their own information in a passive way via the reflection of the carrier signal using frequency-shift keying modulation in the uplink. We characterize the energy-free condition and the signal-to-noise ratio (SNR) outage zone in a backscatter radio based WPCN. Numerical results demonstrate that the backscatter radio based WPCN achieves an increased long-range coverage and a diminished SNR outage zone compared to the active radio based WPCNs. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. 
Wireless-Powered Communication Networks and Backscatter Communication <s> We propose hybrid backscatter communication for wireless powered communication networks (WPCNs) to increase transmission range and provide uniform rate distribution in heterogeneous network (HetNet) environment. In such HetNet where the TV tower or high-power base station (macrocell) coexists with densely deployed small-power access points (e.g., small-cells or WiFi), users can operate in either dedicated bistatic scatter or ambient backscatter, given that the harvested energy from the dedicated or ambient RF signals may not be sufficient to support the existing harvest-then-transmit (HTT) protocol for WPCNs. Considering this dual mode operation, we formulate throughput maximization problem depending on the user location, namely indoor or outdoor zone. After showing the optimal time allocation for the dual mode operation, we show that the performance of the proposed hybrid backscatter communication is superior to the other schemes. <s> BIB003 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Wireless-Powered Communication Networks and Backscatter Communication <s> In this paper, we introduce a new model for RF-powered cognitive radio networks with the aim to improve the performance for secondary systems. In our proposed model, when the primary channel is busy, the secondary transmitter is able either to backscatter the primary signals to transmit data to the secondary receiver or to harvest RF energy from the channel. The harvested energy then will be used to transmit data to the receiver when the channel becomes idle. We first analyze the tradeoff between backscatter communication and harvest-then-transmit protocol in the network. To maximize the overall transmission rate of the secondary network, we formulate an optimization problem to find time ratio between taking backscatter and harvest-then-transmit modes. Through numerical results, we show that under the proposed model can achieve the overall transmission rate higher than using either the backscatter communication or the harvest-then-transmit protocol. <s> BIB004 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Wireless-Powered Communication Networks and Backscatter Communication <s> In this paper, we propose hybrid backscatter communication for wireless-powered communication networks (WPCNs) to increase transmission range and provide uniform rate distribution in the heterogeneous network (HetNet) environment. In such HetNet, where the TV tower or high-power base station (macrocell) coexists with densely deployed small-power access points (e.g., small-cells or WiFi), users can operate in either bistatic scatter or ambient backscatter, or a hybrid of them, given that the harvested energy from the dedicated or ambient RF signals may not be sufficient enough to support the existing harvest-then-transmit protocol for WPCN, which is extended to the wireless-powered heterogeneous network (WPHetNet). Considering the hybrid and dual mode operation, we formulate a throughput maximization problem depending on the user location, namely Macro-zone or WiFi-zone. After performing the optimal time allocation for the above operation, we show that the proposed hybrid backscatter communication can increase the transmission range of WPHetNet, while achieving uniform rate distribution. <s> BIB005 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. 
Wireless-Powered Communication Networks and Backscatter Communication <s> The recent advanced wireless energy harvesting technology has enabled wireless-powered communications to accommodate wireless data services in a self-sustainable manner. However, wireless-powered communications rely on active RF signals to communicate, and result in high power consumption. On the other hand, ambient backscatter technology that passively reflects existing RF signal sources in the air to communicate has the potential to facilitate an implementation with ultra-low power consumption. In this paper, we introduce a hybrid D2D communication paradigm by integrating ambient backscattering with wireless-powered communications. The hybrid D2D communications are self-sustainable, as no dedicated external power supply is required. However, since the radio signals for energy harvesting and for backscattering come from the ambient, the performance of the hybrid D2D communications depends largely on environment factors, e.g., distribution, spatial density, and transmission load of ambient energy sources. Therefore, we design two mode selection protocols for the hybrid D2D transmitter, allowing a more flexible adaptation to the environment. We then introduce analytical models to characterize the impacts of the considered environment factors on the hybrid D2D communication performance. Together with extensive simulations, our analysis shows that the communication performance benefits from larger repulsion, transmission load and density of ambient energy sources. Further, we investigate how different mode selection mechanisms affect the communication performance. <s> BIB006 </s> Ambient Backscatter Communications: A Contemporary Survey <s> C. Wireless-Powered Communication Networks and Backscatter Communication <s> Ambient backscatter communication technology has been introduced recently, and is quickly becoming a promising choice for self-sustainable communication systems, as an external power supply or a dedicated carrier emitter is not required. By leveraging existing RF signal resources, ambient backscatter technology can support sustainable and independent communications and consequently open up a whole new set of applications that facilitate Internet of things (IoT). In this article, we study an integration of ambient backscatter with wireless powered communication networks (WPCNs). We first present an overview of backscatter communication systems with an emphasis on the emerging ambient backscatter technology. Then we propose a novel hybrid transmitter design by combining the advantages of both ambient backscatter and wireless powered communications. Furthermore, in the cognitive radio environment, we introduce a multiple access scheme to coordinate hybrid data transmissions. The performance evaluation shows that the hybrid transmitter outperforms traditional designs. In addition, we discuss open issues related to ambient backscatter networking. <s> BIB007
WPCNs allow devices to use energy from dedicated or ambient RF sources to actively transmit data to their receivers. However, in WPCNs, a wireless-powered transmitter may require a long time to acquire enough energy for active transmissions, and thus the performance of the system is significantly sub-optimal. Therefore, backscatter communication systems, i.e., BBCSs and ABCSs, are integrated with WPCNs. The motivation stems from the fact that backscatter communications can transmit data by backscattering RF signals without requiring any external power source. Choi and Kim BIB002 introduce an RF-powered bistatic backscatter design aiming to achieve long-range coverage. The authors propose a solution combining backscatter radios and WPCNs. It is observed that backscatter transmitters far away from the backscatter receiver harvest less energy than nearby ones. The authors propose a bistatic backscatter system composed of a carrier emitter and a hybrid access point (H-AP) BIB001 as shown in Fig. 16. The H-AP not only broadcasts RF signals to the backscatter transmitters, but also receives backscattered signals. Therefore, the far backscatter transmitters can harvest energy from the RF signals of both the carrier emitter and the backscatter receiver, i.e., the H-AP, to improve network performance. The authors also propose a two-phase transmission protocol for the H-AP. In the first phase, the H-AP uses the downlink to transfer wireless energy to the backscatter transmitters. The backscatter transmitters then reflect data by using FSK modulation on the uplink in the second phase. In contrast, the carrier emitter, which is deployed close to the backscatter transmitters, can always transmit RF signals. As a result, the far backscatter transmitters can derive sufficient energy for their operations. The results show that this network design can extend the system coverage range up to 120 meters with 25 dBm and 13 dBm of transmit power at the H-AP and carrier emitter operating at 868 MHz, respectively. Kim et al. BIB003, BIB005 propose hybrid backscatter communication for WPCNs to improve transmission range and bitrate. Different from BIB002, this system adopts a dual-mode operation of bistatic backscatter and ambient backscatter for indoor and outdoor zones, respectively. In particular, the proposed WPCN includes ambient RF sources, i.e., TV towers or high-power base stations, e.g., macrocells, and dedicated RF sources, leading to a wireless-powered heterogeneous network (WPHetNet) as shown in Fig. 17. If the ST is within the coverage of the carrier emitter, i.e., in the indoor zone, it can use both ambient backscatter and bistatic backscatter, i.e., the dual-mode operation. Otherwise, in the outdoor zone, the ST can only adopt ambient backscatter. The authors note that the ST can flexibly select between the HTT with bistatic backscatter protocol and the HTT with ambient backscatter protocol based on its location, i.e., indoor zone or outdoor zone, and its energy status. Similar to BIB004, the authors also define the harvesting time, backscatter time, and data transmission time to formulate an optimal time allocation problem. The objective is to maximize the throughput of the hybrid backscatter communications in the indoor zone.
However, in this work, the energy harvesting and backscatter communication processes can be performed at any time, while the data transmission is only performed during the channel idle period to protect the PU's signals. The authors show that the optimal time allocation is a concave problem and can be solved by using KKT conditions. The numerical results demonstrate that the proposed hybrid communication can significantly increase the system throughput. In particular, with 25 W of transmit power at the H-AP and 23 dBm of transmit power for the dedicated carrier signals, the HTT with bistatic backscatter protocol and the HTT with ambient backscatter protocol can achieve throughput of up to 2.5 kbps and 115 kbps, respectively. Lu et al. BIB006 propose hybrid D2D communications by integrating ambient backscatter and wireless-powered communications to improve the performance of the system. The authors design a hybrid transmitter and a hybrid receiver as shown in Fig. 18. Similar to BIB007, the authors introduce a hybrid receiver which can receive and decode data from both modulated backscatter and active RF transmission. The hybrid receiver consists of two sub-blocks. The first block adopts a conventional quadrature demodulator, a phase shift module, and a phase detector to decode the data from active RF transmission. The second block is a simple circuit composed of three main components, i.e., an envelope average circuit, a threshold calculator, and a comparator, to decode the modulated backscatter signals. As such, the hybrid receiver can decode both the ambient backscatter and wireless-powered transmissions from the hybrid transmitter. As both ambient backscattering and wireless-powered transmission rely on ambient RF energy harvesting, which requires no internal power source, the performance of the hybrid transmitter greatly depends on environmental factors, e.g., the density of ambient transmitters and their spatial distribution. Therefore, the authors design two mode-selection protocols for hybrid D2D communications, i.e., a power threshold-based protocol and an SNR threshold-based protocol. Under the power threshold-based protocol, the hybrid transmitter first detects the available energy harvesting rate. If this rate is lower than the power threshold required to power active RF transmission, the ambient backscatter mode is used; otherwise, the HTT mode is adopted. Under the SNR threshold-based protocol, the hybrid transmitter first tries to transmit data by backscattering. If the SNR of the backscattered signals at the receiver is lower than the threshold required to decode the information correctly, the transmitter switches to the HTT mode. The authors then analyze the hybrid D2D communications in terms of energy outage probability, coverage probability, and average throughput. Through the stochastic geometry analysis, it is shown that the D2D communications benefit from larger geographical repulsion among energy sources, higher transmission load, and higher density of ambient transmitters. Additionally, the power threshold-based protocol is more suitable for scenarios with a high density of ambient transmitters and a low interference level. In contrast, the SNR threshold-based protocol is more suitable for scenarios where the interference level and the density of ambient transmitters are both low or both high.
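As a minimal illustration of the two mode-selection rules described above, the following sketch expresses them as simple threshold tests. The threshold values and the measured quantities are hypothetical placeholders; the cited work derives the actual decision criteria from its stochastic-geometry analysis.

```python
# Minimal sketch of the two mode-selection rules; thresholds are hypothetical.

def select_mode_power(harvest_rate_w, power_threshold_w=1e-4):
    """Power threshold-based protocol: use ambient backscatter whenever the
    harvested power cannot sustain an active (HTT) transmission."""
    return "HTT" if harvest_rate_w >= power_threshold_w else "ambient backscatter"

def select_mode_snr(backscatter_snr_db, snr_threshold_db=3.0):
    """SNR threshold-based protocol: try backscattering first and fall back
    to HTT if the receiver cannot decode the backscattered signal."""
    return "ambient backscatter" if backscatter_snr_db >= snr_threshold_db else "HTT"

print(select_mode_power(harvest_rate_w=5e-5))    # -> ambient backscatter
print(select_mode_snr(backscatter_snr_db=6.0))   # -> ambient backscatter
```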
Ambient Backscatter Communications: A Contemporary Survey <s> D. Backscatter Relay Networks <s> This paper introduces the first design that enables full-duplex communication on battery-free backscatter devices. Specifically, it gives receivers a way to provide low-rate feedback to the transmitter on the same frequency as that of the backscatter transmissions, using neither multiple antennas nor power-consuming cancellation hardware. Our design achieves this goal using only fully-passive analog components that consume near-zero power. We integrate our design with the backscatter network stack and demonstrate that it can minimize energy wastes that occur due to collisions and also correct for errors and changes in channel conditions at a granularity smaller than that of a packet. To show the feasibility of our design, we build a hardware prototype using off-the-shelf analog components. Our evaluation shows that our design cancels the self-interference down to the noise floor, while consuming only 0.25 μW and 0.54 μW of transmit and receive power, respectively. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Backscatter Relay Networks <s> Battery-free sensors, such as RFIDs, are annually attached to billions of items including pharmaceutical drugs, clothes, and manufacturing parts. The fundamental challenge with battery-free sensors is that they are only reliable at short distances of tens of centimeters to few meters. As a result, today's systems for communicating with and localizing battery-free sensors are crippled by the limited range. To overcome this challenge, this paper presents RFly, a system that leverages drones as relays for battery-free networks. RFly delivers two key innovations. It introduces the first full-duplex relay for battery-free networks. The relay can seamlessly integrate with a deployed RFID infrastructure, and it preserves phase and timing characteristics of the forwarded packets. RFly also develops the first RF-localization algorithm that can operate through a mobile relay. We built a hardware prototype of RFly's relay into a custom PCB circuit and mounted it on a Parrot Bebop drone. Our experimental evaluation demonstrates that RFly enables communication with commercial RFIDs at over 50 m. Moreover, its through-relay localization algorithm has a median accuracy of 19 centimeters. These results demonstrate that RFly provides powerful primitives for communication and localization in battery-free networks. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Backscatter Relay Networks <s> This paper proposes a full duplex relay scheme for ambient backscatter devices which require little or no power for operation. The ambient backscatter devices reflect radio signals from ambient sources such as TV tower, FM radio station, etc. Because of the backscattering of radio signals, such devices have small coverage range. To extend the network coverage of such devices, relaying is the most conventional way, which is previously used for extending the coverage range of radio networks such as LTE, WiFi and so on. In this paper, the backscatter devices assist other backscatter devices to relay the information to their destination nodes. Because of the full duplex characteristics at the relaying node, they can send their own information while receiving signals from the source node. 
We propose the relaying procedure for full duplex backscatter nodes and investigate its performance in terms of throughput by performing simulations. <s> BIB003
Although many designs and solutions have been introduced to improve the performance of backscatter networks, the single-hop communication range is still limited. One of the practical solutions recently proposed is to use relay nodes. Ma et al. BIB002 introduce an RFID system, named "RFly", that leverages drones as relays to extend the communication range as shown in Fig. 19. The key idea of RFly is that the drone is configured to collect queries from a reader, forward them to the tag, i.e., the backscatter transmitter, and send the tag's reply back to the reader. However, the signals received at the drone's antennas may be affected by interference, i.e., inter-link self-interference and intra-link self-interference. The inter-link self-interference stems from the uplink, i.e., from the tag to the reader, and the downlink, i.e., from the reader to the tag, operating at the same frequency. The intra-link self-interference is the leakage between the drone's receive and transmit antennas. To address the self-interference, the authors adopt a downconvert-upconvert approach and baseband filtering. For the inter-link self-interference, RFly first downconverts the received signals to baseband, applies low-pass filtering for the downlink and bandpass filtering for the uplink, and then upconverts before retransmitting. Through this filtering, RFly prevents the relay's self-interference from leaking into the uplink and downlink channels. For the intra-link self-interference, RFly leverages the downconvert-upconvert approach by using different frequencies in the upconvert stage. As such, the frequencies of the reader-relay half-link and the relay-transmitter half-link are different, and thus the intra-link self-interference is avoided. Through experiments, the authors demonstrate that RFly can enable communication between the reader and the tags at over 50 meters in LOS scenarios. Munir et al. BIB003 introduce a relaying technique for full-duplex backscatter devices BIB001 to extend the communication range. Specifically, the authors consider a model in which a source backscatter transmitter, i.e., ST, wants to transmit data to a destination backscatter transmitter, i.e., DTs, but the channel conditions between ST and DTs do not support direct transmission. Hence, another backscatter transmitter, i.e., RT, which is located close to ST, is used as a relay between ST and DTs. However, the relay RT may have its own data to transmit to its own destination, i.e., DTr. Therefore, the authors propose a protocol covering two cases, i.e., RT with and without its own data to transmit. For each case, the transmission time is divided into two phases: ST transmits data to RT in the first phase, and RT forwards the received data to DTs in the second phase. If the relay RT has its own data to transmit, it receives the data sent from ST and transmits its own data to DTr simultaneously in the first phase. The authors then set up a simulation to evaluate the performance of the relaying technique. A TV tower operating at 539 MHz with 10 kW of transmit power is used as the RF source, and the ST-to-RT and RT-to-DTs distances are 1 meter. The simulation results show that ST can successfully send data to DTs with the support of RT. Additionally, the ST-to-RT and RT-to-DTs bitrates can be up to 2 kbps and 1 kbps, respectively.
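The two-phase relaying schedule described above can be illustrated with a simple time-splitting calculation: the first phase must deliver to RT exactly the number of bits that RT can forward to DTs in the second phase. The sketch below uses the 2 kbps and 1 kbps link rates quoted above, but the time-splitting rule itself is a generic two-hop argument and is not a result reported by the authors.

```python
# Illustrative end-to-end throughput of the two-phase relaying protocol.

def relay_throughput(rate_st_rt_kbps, rate_rt_dts_kbps, slot_s=1.0):
    """Split a slot into two phases so that the bits received by RT in
    phase 1 equal the bits it can forward to DTs in phase 2."""
    r1, r2 = rate_st_rt_kbps, rate_rt_dts_kbps
    t1 = slot_s * r2 / (r1 + r2)      # phase-1 duration (ST -> RT)
    t2 = slot_s - t1                  # phase-2 duration (RT -> DTs)
    end_to_end_kbit = r1 * t1         # equals r2 * t2 by construction
    return t1, t2, end_to_end_kbit / slot_s

t1, t2, rate = relay_throughput(2.0, 1.0)
print(f"phase 1: {t1:.2f} s, phase 2: {t2:.2f} s, end-to-end rate ~ {rate:.2f} kbps")
```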
Ambient Backscatter Communications: A Contemporary Survey <s> E. Visible Light Backscatter Communications <s> The ubiquity of the lighting infrastructure makes the visible light communication (VLC) well suited for mobile and Internet of Things (IoT) applications in the indoor environment. However, existing VLC systems have primarily been focused on one-way communications from the illumination infrastructure to the mobile device. They are power demanding and not applicable for communication in the opposite direction. In this paper, we present RetroVLC, a duplex VLC system that enables a battery-free device to perform bi-directional communications over a shared light carrier across the uplink and downlink. The design features a retro-reflector fabric that backscatters light, an LCD modulator, and several low-power optimization techniques. We have prototyped a working system consisting of a credit card-sized battery-free tag and an illuminating LED reader. Experimental results show that the tag can achieve 10kbps downlink speed and 0.5kbps uplink speed over a distance of 2.4m. We outline several potential applications and limitations of the system. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> E. Visible Light Backscatter Communications <s> Visible light communication (VLC) backscatter has been proposed as a wireless access option for Internet of Things. However, the throughput of the state-of-the-art VLC backscatter is limited by simple single-carrier pulsed modulation scheme, such as ON-OFF keying (OOK). In this letter, a novel pixelated VLC backscatter is proposed and implemented to overcome the channel capacity limitation. In particular, multiple smaller VLC backscatters are integrated to generate multi-level signals, which enables the usage of advanced modulation schemes. Based on experimental results, rate adaptation at different communication distances can be employed to enhance the achievable data rate. Compared with OOK, the data rate can be tripled, when 8-pulse amplitude modulation is used at 2 m. In general, $n$ -fold throughput enhancement is realized by utilizing $n$ smaller VLC backscatters, while incurring negligible additional energy using the same device space as that of a single large backscatter. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> E. Visible Light Backscatter Communications <s> This paper investigates the feasibility of practical backscatter communication using visible light for battery-free IoT applications. Based on the idea of modulating the light retroreflection with a commercial LCD shutter, we effectively synthesize these off-the-shelf optical components into a sub- mW low power visible light passive transmitter along with a retroreflecting uplink design dedicated for power constrained mobile/IoT devices. On top of that, we design, implement and evaluate PassiveVLC, a novel visible light backscatter communication system. PassiveVLC system enables a battery-free tag device to perform passive communication with the illuminating LEDs over the same light carrier and thus offers several favorable features including battery-free, sniff-proof, and biologically friendly for human-centric use cases. Experimental results from our prototyped system show that PassiveVLC is flexible with tag orientation, robust to ambient lighting conditions, and can achieve up to 1 kbps uplink speed. 
Link budget analysis and two proof-of-concept applications are developed to demonstrate PassiveVLC's efficacy and practicality. <s> BIB003
Visible light backscatter communication systems (VLBCSs) have been proposed to enable efficient data transmission in RF-limited environments, e.g., in hospitals or on planes. In general, the principles of VLBCSs are similar to those of RF backscatter systems. Specifically, Li et al. BIB001 design a backscatter transmitter, namely ViTag, that transmits data by using ambient visible light. ViTag first harvests energy from ambient light through solar cells to support its internal operations. Then, ViTag adopts a liquid crystal display (LCD) shutter to modulate, i.e., block or pass, the light carrier reflected by a retro-reflector. At the backscatter receiver, the modulated signals are amplified, demodulated, digitized, and finally decoded. In other words, ViTag can send its data to the backscatter receiver by backscattering visible light. The experimental results demonstrate that ViTag can achieve a downlink rate of 10 kbps and an uplink rate of 0.5 kbps over a distance of up to 2.4 meters. However, as VLBCSs usually use a single-carrier pulsed modulation scheme, i.e., OOK, their throughput is limited. Thus, in BIB002, the authors extend the idea in BIB001 by using an 8-pulse amplitude modulation (8-PAM) scheme to increase the throughput. The experimental results show that by using 8-PAM, a bitrate of 600 bps can be achieved at a distance of 2 meters, compared with 200 bps when using the OOK scheme. To further improve the bitrate of VLBCSs, Xu et al. BIB003 propose a trend-based modulation scheme. In OOK modulation, a symbol is modulated only once the LCD has completely changed its on/off state, and thus the modulation interval is not minimized, e.g., 4 ms with ViTag. The authors observe that as soon as the LCD changes its state, even incompletely, its transparency level changes over a short time, i.e., 1 ms. This time is long enough to produce a distinguishable decreasing trend at the backscatter receiver side, which means that 1 ms can be used as the minimum modulation interval in trend-based modulation. As a result, the proposed modulation scheme can achieve a bitrate of up to 1 kbps, i.e., four times higher than that of ViTag.
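The raw symbol-rate arithmetic behind these numbers can be sketched as follows, assuming the LCD symbol intervals quoted above (4 ms for a complete on/off transition and 1 ms for the trend-based scheme). The measured rates reported by the papers are lower than these raw figures because of coding overhead and link conditions.

```python
# Raw uplink bit-rate arithmetic for the LCD-shutter modulation schemes above.
import math

def raw_bitrate_bps(symbol_interval_s, levels):
    """bits/s = (bits per symbol) / (symbol interval)."""
    return math.log2(levels) / symbol_interval_s

print("OOK,   4 ms interval:", raw_bitrate_bps(4e-3, 2), "bps")            # 250 bps
print("8-PAM, 4 ms interval:", raw_bitrate_bps(4e-3, 8), "bps")            # 750 bps (3x OOK)
print("Trend-based OOK, 1 ms interval:", raw_bitrate_bps(1e-3, 2), "bps")  # 1000 bps
```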
Ambient Backscatter Communications: A Contemporary Survey <s> D. Security and Jamming Issues <s> Backscatter wireless communication lies at the heart of many practical low-cost, low-power, distributed passive sensing systems. The inherent cost restrictions coupled with the modest computational and storage capabilities of passive sensors, such as RFID tags, render the adoption of classical security techniques challenging; which motivates the introduction of physical layer security approaches. Despite their promising potential, little has been done to study the prospective benefits of such physical layer techniques in backscatter systems. In this paper, the physical layer security of wireless backscatter systems is studied and analyzed. First, the secrecy rate of a basic single-reader, single-tag model is studied. Then, the unique features of the backscatter channel are exploited to maximize this secrecy rate. In particular, the proposed approach allows a backscatter system's reader to inject a noise-like signal, added to the conventional continuous wave signal, in order to interfere with an eavesdropper's reception of the tag's information signal. The benefits of this approach are studied for a variety of scenarios while assessing the impact of key factors, such as antenna gains and location of the eavesdropper, on the overall secrecy of the backscatter transmission. Numerical results corroborate our analytical insights and show that, if properly deployed, the injection of artificial noise yields significant performance gains in terms of improving the secrecy of backscatter wireless transmission. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. Security and Jamming Issues <s> Backscatter wireless communication is an emerging technique widely used in low-cost and low-power wireless systems, especially in passive radio frequency identification (RFID) systems. Recently, the requirement of high data rates, data reliability, and security drives the development of RFID systems, which motivates our investigation on the physical layer security of a multiple-input multiple-output (MIMO) RFID system. In this paper, we propose a noise-injection precoding strategy to safeguard the system security with the resource-constrained nature of the backscatter system taken into consideration. We first consider a multi-antenna RFID tag case and investigate the secrecy rate maximization (SRM) problem by jointly optimizing the energy supply power and the precoding matrix of the injected artificial noise at the RFID reader. We exploit the alternating optimization method and the sequential parametric convex approximation method, respectively, to tackle the non-convex SRM problem and show an interesting fact that the two methods are actually equivalent for our SRM problem with the convergence of a Karush–Kuhn–Tucker point. To facilitate the practical implementation for resource-constrained RFID devices, we propose a fast algorithm based on projected gradient. We also consider a single-antenna RFID tag case and develop a low-complexity algorithm, which yields the global optimal solution. Simulation results show the superiority of our proposed algorithms in terms of the secrecy rate and computational complexity. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> D. 
Security and Jamming Issues <s> In this letter, we study an interference avoidance scenario in the presence of a smart interferer, which can rapidly observe the transmit power of a backscatter wireless sensor network (WSN) and effectively interrupt backscatter signals. We consider a power control with a sub-channel allocation to avoid interference attacks and a time-switching ratio for backscattering and RF energy harvesting in backscatter WSNs. We formulate the problem based on a Stackelberg game theory and compute the optimal parameters to maximize a utility function against the smart interference. We propose two algorithms for the utility maximization using Lagrangian dual decomposition for the backscatter WSN and the smart interference to prove the existence of the Stackelberg equilibrium. Numerical results show that the proposed algorithms effectively maximize the utility, compared to that of the algorithm based on the Nash game, so as to overcome smart interference in backscatter communications. <s> BIB003
Due to the simple coding and modulation schemes adopted, backscatter communications are vulnerable to security attacks such as eavesdropping and jamming. The passive nature of backscatter communications makes it challenging to guarantee secrecy. On the one hand, an attacker equipped with active RF transmitters can easily impair the modulated backscatter signals BIB003. On the other hand, attacks on the signal sources, e.g., denial-of-service attacks, can also jeopardize backscatter communications. Moreover, the resource constraints of backscatter transceivers make it impractical or even impossible to implement typical security solutions such as encryption and digital signatures. Existing research efforts mainly focus on physical-layer security approaches to protect secrecy. For example, BIB002 and BIB001 utilize artificial noise injection with the help of the reader to safeguard backscatter communications in RFID systems. However, this approach cannot be directly adopted in ABCSs as there are no dedicated readers. It is therefore imperative to design simple, yet effective solutions to enable secure ambient backscatter communications.
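As a minimal numeric sketch of the artificial-noise idea used in the cited RFID works, the snippet below compares the secrecy rate, i.e., the legitimate rate minus the eavesdropper's rate, with and without reader-injected noise. It assumes the reader can cancel its own known noise while the eavesdropper cannot; all powers and channel gains are hypothetical placeholders.

```python
# Secrecy-rate sketch for artificial-noise (AN) injection; values are hypothetical.
import math

def rate_bps_hz(signal, interference_plus_noise):
    return math.log2(1.0 + signal / interference_plus_noise)

P_tag, N0 = 1e-6, 1e-9          # backscattered signal power and noise power (W)
g_reader, g_eve = 1.0, 0.8      # channel gains towards reader and eavesdropper
P_an = 5e-6                     # artificial noise power injected by the reader

# Without artificial noise, both receivers see only thermal noise.
secrecy_plain = max(0.0, rate_bps_hz(P_tag * g_reader, N0)
                         - rate_bps_hz(P_tag * g_eve, N0))

# With artificial noise: the reader cancels its own known noise, while the
# eavesdropper suffers it as extra interference.
secrecy_an = max(0.0, rate_bps_hz(P_tag * g_reader, N0)
                      - rate_bps_hz(P_tag * g_eve, N0 + P_an * g_eve))

print(f"secrecy rate without AN: {secrecy_plain:.2f} bit/s/Hz")
print(f"secrecy rate with AN:    {secrecy_an:.2f} bit/s/Hz")
```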
Ambient Backscatter Communications: A Contemporary Survey <s> E. Millimeter-Wave-Based Ambient Backscatter <s> Millimeter-wave (mmW) frequencies between 30 and 300 GHz are a new frontier for cellular communication that offers the promise of orders of magnitude greater bandwidths combined with further gains via beamforming and spatial multiplexing from multielement antenna arrays. This paper surveys measurements and capacity studies to assess this technology with a focus on small cell deployments in urban environments. The conclusions are extremely encouraging; measurements in New York City at 28 and 73 GHz demonstrate that, even in an urban canyon environment, significant non-line-of-sight (NLOS) outdoor, street-level coverage is possible up to approximately 200 m from a potential low-power microcell or picocell base station. In addition, based on statistical channel models from these measurements, it is shown that mmW systems can offer more than an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks at current cell densities. Cellular systems, however, will need to be significantly redesigned to fully achieve these gains. Specifically, the requirement of highly directional and adaptive transmissions, directional isolation between links, and significant possibilities of outage have strong implications on multiple access, channel structure, synchronization, and receiver design. To address these challenges, the paper discusses how various technologies including adaptive beamforming, multihop relaying, heterogeneous network architectures, and carrier aggregation can be leveraged in the mmW context. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> E. Millimeter-Wave-Based Ambient Backscatter <s> The first-ever reported Gbps backscatter transmission is presented at millimeter-wave frequencies, extremely expanding the potential of backscatter radio as a low-energy, low-complexity communication platform. Minimal front-ends are implemented that can be used for multi-gigabit communication and RF sensing, achieving scattering frequencies of at least 4 GHz away from a carrier center frequency of 24 GHz. The significantly wideband operation of these minimal communicators will enable broadband wireless transmission with less than 0.15 pJ/bit front-end energy consumption at 4 Gbps and sensing with an extensive number of low-power sensors. The front-ends are additively manufactured using inkjet printing on flexible substrates that can be directly integrated with wearables for challenging mobile applications in 5G and the Internet of Things (IoT). <s> BIB002
Utilizing high-frequency millimeter waves (mmWave) for high-speed communication has been deemed one of the enabling technologies for fifth-generation cellular networks. Due to physical characteristics different from those of UHF waves, mmWave requires LOS communication channels as well as miniaturized high-gain antennas and antenna arrays BIB001. The recent work in BIB002 demonstrates that MBCSs working in mmWave bands can achieve a 4 Gbps backscatter transmission rate with binary modulation. This suggests that ABCSs using mmWave are feasible to develop.
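As a quick sanity check on the figures quoted from BIB002, the implied front-end power budget follows directly from the energy-per-bit and bit-rate numbers:

```python
# Front-end power = energy per bit x bit rate (figures quoted from BIB002).
energy_per_bit_j = 0.15e-12   # 0.15 pJ/bit (upper bound quoted in BIB002)
bitrate_bps = 4e9             # 4 Gbps

frontend_power_w = energy_per_bit_j * bitrate_bps
print(f"implied front-end power: {frontend_power_w * 1e3:.2f} mW")  # ~0.60 mW
```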
Ambient Backscatter Communications: A Contemporary Survey <s> F. Full-Duplex Based Ambient Backscatter <s> This paper introduces the first design that enables full-duplex communication on battery-free backscatter devices. Specifically, it gives receivers a way to provide low-rate feedback to the transmitter on the same frequency as that of the backscatter transmissions, using neither multiple antennas nor power-consuming cancellation hardware. Our design achieves this goal using only fully-passive analog components that consume near-zero power. We integrate our design with the backscatter network stack and demonstrate that it can minimize energy wastes that occur due to collisions and also correct for errors and changes in channel conditions at a granularity smaller than that of a packet. To show the feasibility of our design, we build a hardware prototype using off-the-shelf analog components. Our evaluation shows that our design cancels the self-interference down to the noise floor, while consuming only 0.25 μW and 0.54 μW of transmit and receive power, respectively. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> F. Full-Duplex Based Ambient Backscatter <s> Future Internet-of-Things (IoT) is expected to wirelessly connect billions of low-complexity devices. For wireless information transfer (WIT) in IoT, high density of IoT devices and their ad hoc communication result in strong interference which acts as a bottleneck on WIT. Furthermore, battery replacement for the massive number of IoT devices is difficult if not infeasible, making wireless energy transfer (WET) desirable. This motivates: (i) the design of full-duplex WIT to reduce latency and enable efficient spectrum utilization, and (ii) the implementation of passive IoT devices using backscatter antennas that enable WET from one device (reader) to another (tag). However, the resultant increase in the density of simultaneous links exacerbates the interference issue. This issue is addressed in this paper by proposing the design of full-duplex backscatter communication (BackCom) networks, where a novel multiple-access scheme based on time-hopping spread-spectrum (TH-SS) is designed to enable both one-way WET and two-way WIT in coexisting backscatter reader-tag links. Comprehensive performance analysis of BackCom networks is presented in this paper, including forward/backward bit-error rates and WET efficiency and outage probabilities, which accounts for energy harvesting at tags, non-coherent and coherent detection at tags and readers, respectively, and the effects of asynchronous transmissions. <s> BIB002
As ambient backscatter devices can communicate with each other, the full-duplex technique can be adopted to improve the performance of ABCSs. For example, Liu et al. BIB001 introduce an ambient backscattering system in which the backscatter receiver can send feedback to the backscatter transmitter to report any errors after reception. In BIB002, time-hopping full-duplex backscatter communication is introduced to simultaneously mitigate interference and enable asymmetric full-duplex communications. However, these approaches still face critical issues, such as interference generated by reflections from nodes with backscatter antennas and limited spectrum efficiency. Therefore, further efforts are needed to address these issues.
Ambient Backscatter Communications: A Contemporary Survey <s> G. Wireless Body Area Networks and Ambient Backscatter Communications <s> A new tag for pervasive sensing applications consists of a custom integrated circuit, an antenna for radio frequency energy harvesting, and sensors for monitoring physiological parameters. This paper presents a wearable tag design that can monitor multiple signals. The tag generates an alarm when it suspects a patient emergency. To quickly cover a large portion of the population at risk, we kept the tag affordable (less than US$2 each when manufactured in volume), disposable, small, and easy to use. Such tags would be useful for hospitals, facilities for infants and the elderly, and ordinary homes to detect and alert caregivers to possible problems including SCA and SIDS. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Wireless Body Area Networks and Ambient Backscatter Communications <s> Recent years have witnessed the prevalence of wearable devices. Wearable devices are intelligent and multifunctional, but they rely heavily on batteries. This greatly limits their application scope, where replacement of battery or recharging is challenging or inconvenient. We note that wearable devices have the opportunity to harvest energy from human motion, as they are worn by the people as long as being functioning. In this study, we propose a battery-free sensing platform for wearable devices in the form-factor of shoes. It harvests the kinetic energy from walking or running to supply devices with power for sensing, processing and wireless communication, covering all the functionalities of commercial wearable devices. We achieve this goal by enabling the whole system running on the harvested energy from two feet. Each foot performs separate tasks and two feet are coordinated by ambient backscatter communication. We instantiate this idea by building a prototype, containing energy harvesting insoles, power management circuits and ambient backscatter module. Evaluation results demonstrate that the system can wake up shortly after several seconds' walk and have sufficient Bluetooth throughput for supporting many applications. We believe that our framework can stir a lot of useful applications that were infeasible previously. <s> BIB002 </s> Ambient Backscatter Communications: A Contemporary Survey <s> G. Wireless Body Area Networks and Ambient Backscatter Communications <s> This paper enables connectivity on everyday objects by transforming them into FM radio stations. To do this, we show for the first time that ambient FM radio signals can be used as a signal source for backscatter communication. Our design creates backscatter transmissions that can be decoded on any FM receiver including those in cars and smartphones. This enables us to achieve a previously infeasible capability: backscattering information to cars and smartphones in outdoor environments. ::: Our key innovation is a modulation technique that transforms backscatter, which is a multiplication operation on RF signals, into an addition operation on the audio signals output by FM receivers. This enables us to embed both digital data as well as arbitrary audio into ambient analog FM radio signals. We build prototype hardware of our design and successfully embed audio transmissions over ambient FM signals. Further, we achieve data rates of up to 3.2 kbps and ranges of 5-60 feet, while consuming as little as 11.07{\mu}W of power. 
To demonstrate the potential of our design, we also fabricate our prototype on a cotton t-shirt by machine sewing patterns of a conductive thread to create a smart fabric that can transmit data to a smartphone. We also embed FM antennas into posters and billboards and show that they can communicate with FM receivers in cars and smartphones. <s> BIB003
With its prominent features, ambient backscatter can be adopted to facilitate many practical applications in wireless body area networks (WBANs). The recent work in BIB002 designs smart shoes supported by ambient backscatter to count steps or monitor heart rate. Wang et al. BIB003 successfully implement smart fabric applications based on backscatter communications to monitor vital signs such as heart rate and breathing rate. Furthermore, Mandal et al. BIB001 design a low-power, battery-free sensor that monitors multiple biomedical signals, e.g., heart sounds, electrical heart signals, blood oxygen saturation, respiratory sounds, blood pressure, and body temperature, and implements a backscatter module to transmit data using ambient RF signals. However, only a few works focus on this research direction. Thus, further studies on communication protocols, network design, and industry standards are needed to bring such practical applications to fruition.
Ambient Backscatter Communications: A Contemporary Survey <s> H. Backscatter Communications for Internet of Things <s> Powering the massive number of small computing devices in the Internet of Things (IoT) is a major challenge because replacing their batteries or powering them with wires is very expensive and impractical. A viable option to enable a perpetual network operation is through the use of far-field Wireless Power Transfer (WPT). We study a large network architecture that uses a combination of WPT and backscatter communication. The network consists of power beacons (PBs) and passive backscatter nodes (BNs), and their locations are modeled as points of independent Poisson point processes (PPPs). The PBs transmit a sinusoidal continuous wave (CW) and the BNs reflect back a portion of this signal while harvesting the remaining part. A BN harvests energy from multiple nearby PBs and modulates its information bits on the composite CW through backscatter modulation. The analysis poses real challenges due to the double fading channel, and its dependence on the PPPs of both BNs and PBs. With the help of stochastic geometry, we derive the coverage probability and the capacity of the network in tractable and easily computable expressions. These expressions depend on the density of both PB and BN, both forward and backward path loss exponents, transmit power of the PB, backscattering efficiency, and number of PBs in a harvesting region. We observe that the coverage probability decreases with an increase in the density of the BNs, while the capacity of the network improves. We compare the performance of this network with a regular powered network in which the BNs have a reliable power source and show that for certain parameters the coverage of the former network approaches that of the regular powered network. <s> BIB001 </s> Ambient Backscatter Communications: A Contemporary Survey <s> H. Backscatter Communications for Internet of Things <s> This letter shows that wirelessly powered backscatter communications is subject to a fundamental tradeoff between the harvested energy at the tag and the reliability of the backscatter communication, measured in terms of SNR at the reader. Assuming the RF transmit signal is a multisine waveform adaptive to the channel state information, we derive a systematic approach to optimize the transmit waveform weights (amplitudes and phases) in order to enlarge as much as possible the SNR-energy region. Performance evaluations confirm the significant benefits of using multiple frequency components in the adaptive transmit multisine waveform to exploit the nonlinearity of the rectifier and a frequency diversity gain. <s> BIB002
With many prominent features, backscatter communications pave the way for empowering the IoT. However, the system performance can be significantly limited by interference and double-fading effects in large network architectures in which the backscatter devices and RF sources are located randomly in the same area, e.g., as points of independent Poisson point processes. Several approaches are available to address this issue. For example, Bacha et al. BIB001 adopt tools from stochastic geometry to analyze the coverage and capacity of the network. Additionally, adaptive transmit multisine waveforms BIB002 can be used to enlarge the SNR-energy region in general scenarios, including those with multiple antennas and multiple backscatter devices.
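A minimal Monte Carlo sketch of this stochastic-geometry setup is given below: power beacons are drawn from a Poisson point process around a backscatter node, and the dyadic (doubly attenuated) backscatter link to a reader determines coverage. The densities, path-loss exponents, fading model, and SNR threshold are illustrative placeholders rather than values from BIB001.

```python
# Monte Carlo coverage sketch: PPP-distributed power beacons (PBs), a
# backscatter node (BN) at the origin, and a reader at a fixed distance.
import numpy as np

rng = np.random.default_rng(0)

lam_pb = 1e-3            # PB density (per m^2)
radius = 200.0           # simulation disc radius around the BN (m)
P_pb = 1.0               # PB transmit power (W)
K = 1e-3                 # path loss at the 1 m reference distance
alpha_f, alpha_b = 2.5, 2.5   # forward / backward path-loss exponents
beta = 0.3               # backscattering efficiency (reflected fraction)
d_reader = 5.0           # BN-to-reader distance (m)
noise = 1e-13            # reader noise power (W)
snr_th = 100.0           # SNR threshold for coverage (20 dB)
trials = 5000

covered = 0
for _ in range(trials):
    n_pb = rng.poisson(lam_pb * np.pi * radius**2)             # number of PBs
    r = np.maximum(radius * np.sqrt(rng.random(n_pb)), 1.0)    # PB distances (uniform in area)
    incident = P_pb * K * r**(-alpha_f)                        # forward hop of the dyadic link
    h = rng.exponential(1.0)                                   # Rayleigh fading power, backward hop
    received = beta * incident.sum() * K * d_reader**(-alpha_b) * h
    covered += received / noise > snr_th

print(f"estimated coverage probability: {covered / trials:.3f}")
```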
A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Computer vision applications for mobile phones are gaining increasing attention due to several practical needs resulting from the popularity of digital cameras in today's mobile phones. In this work, we consider the task of face detection and authentication in mobile phones and experimentally analyze a face authentication scheme using Haar-like features with Ad-aBoost for face and eye detection, and local binary pattern (LBP) approach for face authentication. For comparison, another approach to face detection using skin color for fast processing is also considered and implemented. Despite the limited CPU and memory capabilities of today's mobile phones, our experimental results show good face detection performance and average authentication rates of 82% for small-sized faces (40times40 pixels) and 96% for faces of 80times80 pixels. The system is running at 2 frames per second for images of 320times240 pixels. The obtained results are very promising and assess the feasibility of face authentication in mobile phones. Directions for further enhancing the performance of the system are also discussed. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> While the retrieval of datasets from human subjects based on demographic characteristics such as gender or race is an ability with wide-ranging application, it remains poorly-studied. In contrast, a large body of work exists in the field of biometrics which has a different goal: the recognition of human subjects. Due to this disparity of interest, existing methods for retrieval based on demographic attributes tend to lag behind the more well-studied algorithms designed purely for face matching. The question this raises is whether a face recognition system could be leveraged to solve these other problems and, if so, how effective it could be. In the current work, we explore the limits of such a system for gender and ethnicity identification given (1) a ground truth of demographically-labeled, textureless 3-D models of human faces and (2) a state-of-the-art face-recognition algorithm. Once trained, our system is capable of classifying the gender and ethnicity of any such model of interest. Experiments are conducted on 4007 facial meshes from the benchmark Face Recognition Grand Challenge v2 dataset. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> In this paper, we present a compositional and dynamic model for face aging. The compositional model represents faces in each age group by a hierarchical And-or graph, in which And nodes decompose a face into parts to describe details (e.g., hair, wrinkles, etc.) crucial for age perception and Or nodes represent large diversity of faces by alternative selections. Then a face instance is a transverse of the And-or graph-parse graph. Face aging is modeled as a Markov process on the parse graph representation. We learn the parameters of the dynamic model from a large annotated face data set and the stochasticity of face aging is modeled in the dynamics explicitly. Based on this model, we propose a face aging simulation and prediction algorithm. Inversely, an automatic age estimation algorithm is also developed under this representation. 
We study two criteria to evaluate the aging results using human perception experiments: (1) the accuracy of simulation: whether the aged faces are perceived of the intended age group, and (2) preservation of identity: whether the aged faces are perceived as the same person. Quantitative statistical analysis validates the performance of our aging model and age estimation algorithm. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> From a set of images in a particular domain, labeled with part locations and class, we present a method to automatically learn a large and diverse set of highly discriminative intermediate features that we call Part-based One-vs.-One Features (POOFs). Each of these features specializes in discrimination between two particular classes based on the appearance at a particular part. We demonstrate the particular usefulness of these features for fine-grained visual categorization with new state-of-the-art results on bird species identification using the Caltech UCSD Birds (CUB) dataset and parity with the best existing results in face verification on the Labeled Faces in the Wild (LFW) dataset. Finally, we demonstrate the particular advantage of POOFs when training data is scarce. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Automatic face recognition in unconstrained environments is a challenging task. To test current trends in face recognition algorithms, we organized an evaluation on face recognition in mobile environment. This paper presents the results of 8 different participants using two verification metrics. Most submitted algorithms rely on one or more of three types of features: local binary patterns, Gabor wavelet responses including Gabor phases, and color information. The best results are obtained from UNILJ-ALP, which fused several image representations and feature types, and UC-HU, which learns optimal features with a convolutional neural network. Additionally, we assess the usability of the algorithms in mobile devices with limited resources. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. 
Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Learning semantic attributes for person re-identification and description-based person search has gained increasing interest due to attributes' great potential as a pose and view-invariant representation. However, existing attribute-centric approaches have thus far underperformed state-of-the-art conventional approaches. This is due to their nonscalable need for extensive domain (camera) specific annotation. In this paper we present a new semantic attribute learning approach for person re-identification and search. Our model is trained on existing fashion photography datasets - either weakly or strongly labelled. It can then be transferred and adapted to provide a powerful semantic description of surveillance person detections, without requiring any surveillance domain supervision. The resulting representation is useful for both unsupervised and supervised person re-identification, achieving state-of-the-art and near state-of-the-art performance respectively. Furthermore, as a semantic representation it allows description-based person search to be integrated within the same framework. <s> BIB008 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples. <s> BIB009 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> We address the challenging large-scale content-based face image retrieval problem, intended as searching images based on the presence of specific subject, given one face image of him/her. To this end, one natural demand is a supervised binary code learning method. While the learned codes might be discriminating, people often have a further expectation that whether some semantic message (e.g., visual attributes) can be read from the human-incomprehensible codes. 
For this purpose, we propose a novel binary code learning framework by jointly encoding identity discriminability and a number of facial attributes into unified binary code. In this way, the learned binary codes can be applied to not only fine-grained face image retrieval, but also facial attributes prediction, which is the very innovation of this work, just like killing two birds with one stone. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on a new purified large-scale web celebrity database, named CFW 60K, with abundant manual identity and attributes annotation, and experimental results exhibit the superiority of our method over state-of-the-art. <s> BIB010 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> This paper investigates a problem of generating images from visual attributes. Given the prevalent research for image recognition, the conditional image generation problem is relatively under-explored due to the challenges of learning a good generative model and handling rendering uncertainties in images. To address this, we propose a variety of attribute-conditioned deep variational auto-encoders that enjoy both effective representation learning and Bayesian modeling, from which images can be generated from specified attributes and sampled latent factors. We experiment with natural face images and demonstrate that the proposed models are capable of generating realistic faces with diverse appearance. We further evaluate the proposed models by performing attribute-conditioned image progression, transfer and retrieval. In particular, our generation method achieves superior performance in the retrieval experiment against traditional nearest-neighbor-based methods both qualitatively and quantitatively. <s> BIB011 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> As mobile devices are becoming more ubiquitous, it becomes important to continuously verify the identity of the user during all interactions rather than just at login time. This paper investigates the effectiveness of methods for fully-automatic face recognition in solving the Active Authentication (AA) problem for smartphones. We report the results of face authentication using videos recorded by the front camera. The videos were acquired while the users were performing a number of tasks under three different ambient conditions to capture the type of variations caused by the 'mobility' of the devices. An inspection of these videos reveal a combination of favorable and challenging properties unique to smartphone face videos. In addition to variations caused by the mobility of the device, other challenges in the dataset include occlusion, occasional pose changes, blur and face/fiducial points localization errors. We evaluate still image and image set-based authentication algorithms using intensity features extracted around fiducial points. The recognition rates drop dramatically when enrollment and test videos come from different sessions. We will make the dataset and the computed features publicly available1 to help the design of algorithms that are more robust to variations due to factors mentioned above. <s> BIB012 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. 
InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods. <s> BIB013 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications. <s> BIB014 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. 
<s> BIB015 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Over the last 5 years, methods based on Deep Convolutional Neural Networks (DCNNs) have shown impressive performance improvements for object detection and recognition problems. This has been made possible due to the availability of large annotated datasets, a better understanding of the non-linear mapping between input images and class labels as well as the affordability of GPUs. In this paper, we present the design details of a deep learning system for unconstrained face recognition, including modules for face detection, association, alignment and face verification. The quantitative performance evaluation is conducted using the IARPA Janus Benchmark A (IJB-A), the JANUS Challenge Set 2 (JANUS CS2), and the Labeled Faces in the Wild (LFW) dataset. The IJB-A dataset includes real-world unconstrained faces of 500 subjects with significant pose and illumination variations which are much harder than the LFW and Youtube Face datasets. JANUS CS2 is the extended version of IJB-A which contains not only all the images/frames of IJB-A but also includes the original videos. Some open issues regarding DCNNs for face verification problems are then discussed. <s> BIB016 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> The gap between sensing patterns of different face modalities remains a challenging problem in heterogeneous face recognition (HFR). This paper proposes an adversarial discriminative feature learning framework to close the sensing gap via adversarial learning on both raw-pixel space and compact feature space. This framework integrates cross-spectral face hallucination and discriminative feature learning into an end-to-end adversarial network. In the pixel space, we make use of generative adversarial networks to perform cross-spectral face hallucination. An elaborate two-path model is introduced to alleviate the lack of paired images, which gives consideration to both global structures and local textures. In the feature space, an adversarial loss and a high-order variance discrepancy loss are employed to measure the global and local discrepancy between two heterogeneous distributions respectively. These two losses enhance domain-invariant feature learning and modality independent noise removing. Experimental results on three NIR-VIS databases show that our proposed approach outperforms state-of-the-art HFR methods, without requiring of complex network or large-scale training dataset. <s> BIB017 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Attributes are semantically meaningful characteristics whose applicability widely crosses category boundaries. They are particularly important in describing and recognizing concepts where no explicit training example is given, \textit{e.g., zero-shot learning}. Additionally, since attributes are human describable, they can be used for efficient human-computer interaction. In this paper, we propose to employ semantic segmentation to improve facial attribute prediction. The core idea lies in the fact that many facial attributes describe local properties. In other words, the probability of an attribute to appear in a face image is far from being uniform in the spatial domain. We build our facial attribute prediction model jointly with a deep semantic segmentation network. 
This harnesses the localization cues learned by the semantic segmentation to guide the attention of the attribute prediction to the regions where different attributes naturally show up. As a result of this approach, in addition to recognition, we are able to localize the attributes, despite merely having access to image level labels (weak supervision) during training. We evaluate our proposed method on CelebA and LFWA datasets and achieve superior results to the prior arts. Furthermore, we show that in the reverse problem, semantic face parsing improves when facial attributes are available. That reaffirms the need to jointly model these two interconnected tasks. <s> BIB018 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Semantic object parts can be useful for several visual recognition tasks. Lately, these tasks have been addressed using Convolutional Neural Networks (CNN), achieving outstanding results. In this work we study whether CNNs learn semantic parts in their internal representation. We investigate the responses of convolutional filters and try to associate their stimuli with semantic parts. We perform two extensive quantitative analyses. First, we use ground-truth part bounding-boxes from the PASCAL-Part dataset to determine how many of those semantic parts emerge in the CNN. We explore this emergence for different layers, network depths, and supervision levels. Second, we collect human judgements in order to study what fraction of all filters systematically fire on any semantic part, even if not annotated in PASCAL-Part. Moreover, we explore several connections between discriminative power and semantics. We find out which are the most discriminative filters for object recognition, and analyze whether they respond to semantic parts or to other image patches. We also investigate the other direction: we determine which semantic parts are the most discriminative and whether they correspond to those parts emerging in the network. This enables to gain an even deeper understanding of the role of semantic parts in the network. <s> BIB019 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Object Transfiguration replaces an object in an image with another object from a second image. For example it can perform tasks like "putting exactly those eyeglasses from image A on the nose of the person in image B". Usage of exemplar images allows more precise specification of desired modifications and improves the diversity of conditional image generation. However, previous methods that rely on feature space operations, require paired data and/or appearance models for training or disentangling objects from background. In this work, we propose a model that can learn object transfiguration from two unpaired sets of images: one set containing images that "have" that kind of object, and the other set being the opposite, with the mild constraint that the objects be located approximately at the same place. For example, the training data can be one set of reference face images that have eyeglasses, and another set of images that have not, both of which spatially aligned by face landmarks. Despite the weak 0/1 labels, our model can learn an "eyeglasses" subspace that contain multiple representatives of different types of glasses. Consequently, we can perform fine-grained control of generated images, like swapping the glasses in two images by swapping the projected components in the "eyeglasses" subspace, to create novel images of people wearing eyeglasses. 
Overall, our deterministic generative model learns disentangled attribute subspaces from weakly labeled data by adversarial training. Experiments on CelebA and Multi-PIE datasets validate the effectiveness of the proposed model on real world data, in generating images with specified eyeglasses, smiling, hair styles, and lighting conditions etc. The code is available online. <s> BIB020 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs a set of latent variables, each of which corresponds to an image frame in a video. The image generator transforms a set of such latent variables into a video. To deal with instability in training of GAN with such advanced networks, we adopt a recently proposed model, Wasserstein GAN, and propose a novel method to train it stably in an end-to-end manner. The experimental results demonstrate the effectiveness of our methods. <s> BIB021 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> We present a method using facial attributes for continuous authentication of smartphone users. We train a bunch of binary attribute classifiers which provide compact visual descriptions of faces. The learned classifiers are applied to the image of the current user of a mobile device to extract the attributes and then authentication is done by simply comparing the calculated attributes with the enrolled attributes of the original user. Extensive experiments on two publicly available unconstrained mobile face video datasets show that our method is able to capture meaningful attributes of faces and performs better than the previously proposed LBP-based authentication method. We also provide a practical variant of our method for efficient continuous authentication on an actual mobile device by doing extensive platform evaluations of memory usage, power consumption, and authentication speed. Facial attributes are effective for continuous authentication on mobile devices. Attribute-based features are more robust than the low-level ones for authentication. Fusion of attribute-based and low-level features gives the best result. The proposed approach allows fast and energy efficient enrollment and authentication. <s> BIB022 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> MeshFace photos have been widely used in many Chinese business organizations to protect ID face photos from being misused. The occlusions incurred by random meshes severely degenerate the performance of face verification systems, which raises the MeshFace verification problem between MeshFace and daily photos. Previous methods cast this problem as a typical low-level vision problem, i.e., blind inpainting. They recover perceptually pleasing clear ID photos from MeshFaces by enforcing pixel level similarity between the recovered ID images and the ground-truth clear ID images and then perform face verification on them. Essentially, face verification is conducted on a compact feature space rather than the image pixel space.
Therefore, this paper argues that pixel level similarity and feature level similarity jointly offer the key to improve the verification performance. Based on this insight, we offer a novel feature oriented blind face inpainting framework. Specifically, we implement this by establishing a novel DeMeshNet, which consists of three parts. The first part addresses blind inpainting of the MeshFaces by implicitly exploiting extra supervision from the occlusion position to enforce pixel level similarity. The second part explicitly enforces a feature level similarity in the compact feature space, which can explore informative supervision from the feature space to produce better inpainting results for verification. The last part copes with face alignment within the net via a customized spatial transformer module when extracting deep facial features. All three parts are implemented within an end-to-end network that facilitates efficient optimization. Extensive experiments on two MeshFace data sets demonstrate the effectiveness of the proposed DeMeshNet as well as the insight of this paper. <s> BIB023 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> In this paper, we propose a discriminative aggregation network method for video-based face recognition and person re-identification, which aims to integrate information from video frames for feature representation effectively and efficiently. Unlike existing video aggregation methods, our method aggregates raw video frames directly instead of the features obtained by complex processing. By combining the idea of metric learning and adversarial learning, we learn an aggregation network to generate more discriminative images compared to the raw input frames. Our framework reduces the number of image frames per video to be processed and significantly speeds up the recognition procedure. Furthermore, low-quality frames containing misleading information can be well filtered and denoised during the aggregation procedure, which makes our method more robust and discriminative. Experimental results on several widely used datasets show that our method can generate discriminative images from video clips and improve the overall recognition performance in both the speed and the accuracy for video-based face recognition and person re-identification. <s> BIB024 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> From the emerging of Deep Convolution Neural Network (DCNN), the Visual Information Retrieval would have good prospects for visual features automatically extracted at high semantic levels. However, the deep features could not be robust to some challenges as one-to-many and many-to-one relationships between face identifiers and facial attributes in querying on the face identifier level. To solve these issues at the large-scale level, we proposed a face retrieval system by using the “divide and conquer” method: query by attributes instead of querying by the identifier. We used Facial Attributes in the Fast-Filter stage, after that, our proposed system would retrieve the Face Identifier from the retrieved candidates. DCNN is very useful in the facial attribute learning because of the same network architecture for multiple-attribute groups. We built the attribute learning model following the bottom-up and top-down process. The bottom-up process uses DCNNs with the corresponding face parts and the top-down process is based on our proposed Facial Attribute Ontology (FAO). 
FAO supports multi-task learning in DCNN, re-usability for other retrieval tasks, flexibility in intelligent queries. We experimented our proposed method on the LFWA and CelebA dataset; our system achieved the average precision at 85.68%, this result is higher than some state-of-the-art methods. In more details, we also outperformed at 25 on 40 attribute detectors. Moreover, we speeded up the retrieval process based on the multi-attribute space and the indexing method named Hierarchical K-means++. At last, on retrieval experiments, we gathered 0.79 and 0.82 MAP-score average for one attribute query in LFWA and CelebA respectively. <s> BIB025 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Metric learning is a significant factor for media retrieval. In this paper, we propose an attribute label enhanced metric learning model to assist face image retrieval. Different from general cross-media retrieval, in the proposed model, the information of attribute labels are embedded in a hypergraph metric learning framework for face image retrieval tasks. The attribute labels serve to build a hypergraph, in which each image is abstracted as a vertex and is contained in several hyperedges. The learned hypergraph combines the attribute label to reform the topology of image similarity relationship. With the mined correlation among multiple facial attributes, the reformed metrics incorporates the semantic information in the general image similarity measure. We apply the metric learning strategy to both similarity face retrieval and interactive face retrieval. The proposed metric learning model effectively narrows down the semantic gap between human and machine face perception. The learned distance metric not only increases the precision of similarity retrieval but also speeds up the convergence distinctively in interactive face retrieval. <s> BIB026 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Face completion is a challenging generation task because it requires generating visually pleasing new pixels that are semantically consistent with the unmasked face region. This paper proposes a geometry-aware Face Completion and Editing NETwork (FCENet) by systematically studying facial geometry from the unmasked region. Firstly, a facial geometry estimator is learned to estimate facial landmark heatmaps and parsing maps from the unmasked face image. Then, an encoder-decoder structure generator serves to complete a face image and disentangle its mask areas conditioned on both the masked face image and the estimated facial geometry images. Besides, since low-rank property exists in manually labeled masks, a low-rank regularization term is imposed on the disentangled masks, enforcing our completion network to manage occlusion area with various shape and size. Furthermore, our network can generate diverse results from the same masked input by modifying estimated facial geometry, which provides a flexible mean to edit the completed face appearance. Extensive experimental results qualitatively and quantitatively demonstrate that our network is able to generate visually pleasing face completion results and edit face attributes as well. <s> BIB027 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. 
The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods. <s> BIB028 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Recent studies on face attribute transfer have achieved great success. A lot of models are able to transfer face attributes with an input image. However, they suffer from three limitations: (1) incapability of generating image by exemplars; (2) being unable to transfer multiple face attributes simultaneously; (3) low quality of generated images, such as low-resolution or artifacts. To address these limitations, we propose a novel model which receives two images of opposite attributes as inputs. Our model can transfer exactly the same type of attributes from one image to another by exchanging certain part of their encodings. All the attributes are encoded in a disentangled manner in the latent space, which enables us to manipulate several attributes simultaneously. Besides, our model learns the residual images so as to facilitate training on higher resolution images. With the help of multi-scale discriminators for adversarial training, it can even generate high-quality images with finer details and less artifacts. We demonstrate the effectiveness of our model on overcoming the above three limitations by comparing with other methods on the CelebA face database. A pytorch implementation is available at https://github.com/Prinsphield/ELEGANT. <s> BIB029 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> State-of-the-art methods of attribute detection from faces almost always assume the presence of a full, unoccluded face. Hence, their performance degrades for partially visible and occluded faces. In this paper, we introduce SPLITFACE, a deep convolutional neural network-based method that is explicitly designed to perform attribute detection in partially occluded faces. Taking several facial segments and the full face as input, the proposed method takes a data driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the implementation of committee machine techniques for combining local and global decisions to boost performance. 
With access to segment-based predictions, SPLITFACE can predict well those attributes which are localized in the visible parts of the face, without having to rely on the presence of the whole face. We use the CelebA and LFWA facial attribute datasets for standard evaluations. We also modify both datasets, to occlude the faces, so that we can evaluate the performance of attribute detection algorithms on partial faces. Our evaluation shows that SPLITFACE significantly outperforms other recent methods especially for partial faces. <s> BIB030 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> The locations of the fiducial facial landmark points around facial components and facial contour capture the rigid and non-rigid facial deformations due to head movements and facial expressions. They are hence important for various facial analysis tasks. Many facial landmark detection algorithms have been developed to automatically detect those key points over the years, and in this paper, we perform an extensive review of them. We classify the facial landmark detection algorithms into three major categories: holistic methods, Constrained Local Model (CLM) methods, and the regression-based methods. They differ in the ways to utilize the facial appearance and shape information. The holistic methods explicitly build models to represent the global facial appearance and shape information. The CLMs explicitly leverage the global shape model but build the local appearance models. The regression based methods implicitly capture facial shape and appearance information. For algorithms within each category, we discuss their underlying theories as well as their differences. We also compare their performances on both controlled and in the wild benchmark datasets, under varying facial expressions, head poses, and occlusion. Based on the evaluations, we point out their respective strengths and weaknesses. There is also a separate section to review the latest deep learning based algorithms. The survey also includes a listing of the benchmark databases and existing software. Finally, we identify future research directions, including combining methods in different categories to leverage their respective strengths to solve landmark detection "in-the-wild". <s> BIB031 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> In this paper, we study the face attribute learning problem by considering the identity information and attribute relationships simultaneously. In particular, we first introduce a Partially Shared Multi-task Convolutional Neural Network (PS-MCNN), in which four Task Specific Networks (TSNets) and one Shared Network (SNet) are connected by Partially Shared (PS) structures to learn better shared and task specific representations. To utilize identity information to further boost the performance, we introduce a local learning constraint which minimizes the difference between the representations of each sample and its local geometric neighbours with the same identity. Consequently, we present a local constraint regularized multitask network, called Partially Shared Multi-task Convolutional Neural Network with Local Constraint (PS-MCNN-LC), where PS structure and local constraint are integrated together to help the framework learn better attribute representations. The experimental results on CelebA and LFWA demonstrate the promise of the proposed methods. 
<s> BIB032 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. <s> BIB033 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Recent research progress in facial attribute recognition has been dominated by small improvements on the only large-scale publicly available benchmark dataset, CelebA [18]. We propose to extend attribute prediction research to unconstrained videos. Applying attribute models trained on CelebA – a still image dataset – to video data highlights several major problems with current models, including the lack of consideration for both time and motion. Many facial attributes (e.g. gender, hair color) should be consistent throughout a video, however, current models do not produce consistent results. We introduce two methods to increase the consistency and accuracy of attribute responses in videos: a temporal coherence constraint, and a motionattention mechanism. Both methods work on weakly labeled data, requiring attribute labels for only one frame in a sequence, which we call the anchor frame. The temporal coherence constraint moves the network responses of non-anchor frames toward the responses of anchor frames for each sequence, resulting in more stable and accurate attribute predictions. We use the motion between anchor and non-anchor video frames as an attention mechanism, discarding the information from parts of the non-anchor frame where no motion occurred. This motion-attention focuses the network on the moving parts of the non-anchor frames (i.e. the face). Since there is no large-scale video dataset labeled with attributes, it is essential for attribute models to be able to learn from weakly labeled data. We demonstrate the effectiveness of the proposed methods by evaluating them on the challenging YouTube Faces video dataset [31]. The proposed motion-attention and temporal coherence methods outperform attribute models trained on CelebA, as well as those fine-tuned on video data. To the best of our knowledge, this paper is the first to address the problem of facial attribute prediction in video. 
<s> BIB034 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Facial makeup transfer aims to translate the makeup style from a given reference makeup face image to another non-makeup one while preserving face identity. Such an instance-level transfer problem is more challenging than conventional domain-level transfer tasks, especially when paired data is unavailable. Makeup style is also different from global styles (e.g., paintings) in that it consists of several local styles/cosmetics, including eye shadow, lipstick, foundation, and so on. Extracting and transferring such local and delicate makeup information is infeasible for existing style transfer methods. We address the issue by incorporating both global domain-level loss and local instance-level loss in an dual input/output Generative Adversarial Network, called BeautyGAN. Specifically, the domain-level transfer is ensured by discriminators that distinguish generated images from domains' real samples. The instance-level loss is calculated by pixel-level histogram loss on separate local facial regions. We further introduce perceptual loss and cycle consistency loss to generate high quality faces and preserve identity. The overall objective function enables the network to learn translation on instance-level through unsupervised adversarial learning. We also build up a new makeup dataset that consists of 3834 high-resolution face images. Extensive experiments show that BeautyGAN could generate visually pleasant makeup faces and accurate transferring results. Data and code are available at http://liusi-group.com/projects/BeautyGAN. <s> BIB035 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles. <s> BIB036 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Face rotation provides an effective and cheap way for data augmentation and representation learning of face recognition. It is a challenging generative learning problem due to the large pose discrepancy between two face images. This work focuses on flexible face rotation of arbitrary head poses, including extreme profile views. We propose a novel Couple-Agent Pose-Guided Generative Adversarial Network (CAPG-GAN) to generate both neutral and profile head pose face images. The head pose information is encoded by facial landmark heatmaps. It not only forms a mask image to guide the generator in learning process but also provides a flexible controllable condition during inference. A couple-agent discriminator is introduced to reinforce on the realism of synthetic arbitrary view faces. 
Besides the generator and conditional adversarial loss, CAPG-GAN further employs identity preserving loss and total variation regularization to preserve identity information and refine local textures respectively. Quantitative and qualitative experimental results on the Multi-PIE and LFW databases consistently show the superiority of our face rotation method over the state-of-the-art. <s> BIB037 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Facial expression synthesis with various intensities is a challenging synthesis task due to large identity appearance variations and a paucity of efficient means for intensity measurement. This paper advances the expression synthesis domain by the introduction of a Couple-Agent Face Parsing based Generative Adversarial Network (CAFP-GAN) that unites the knowledge of facial semantic regions and controllable expression signals. Specially, we employ a face parsing map as a controllable condition to guide facial texture generation with a special expression, which can provide a semantic representation of every pixel of facial regions. Our method consists of two sub-networks: face parsing prediction network (FPPN) uses controllable labels (expression and intensity) to generate a face parsing map transformation that corresponds to the labels from the input neutral face, and facial expression synthesis network (FESN) makes the pretrained FPPN as a part of it to provide the face parsing map as a guidance for expression synthesis. To enhance the reality of results, couple-agent discriminators are served to distinguish fake-real pairs in both two sub-nets. Moreover, we only need the neutral face and the labels to synthesize the unknown expression with different intensities. Experimental results on three popular facial expression databases show that our method has the compelling ability on continuous expression synthesis. <s> BIB038 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Facial expression synthesis has drawn much attention in the field of computer graphics and pattern recognition. It has been widely used in face animation and recognition. However, it is still challenging due to the high-level semantic presence of large and non-linear face geometry variations. This paper proposes a Geometry-Guided Generative Adversarial Network (G2-GAN) for photo-realistic and identity-preserving facial expression synthesis. We employ facial geometry (fiducial points) as a controllable condition to guide facial texture synthesis with specific expression. A pair of generative adversarial subnetworks are jointly trained towards opposite tasks: expression removal and expression synthesis. The paired networks form a mapping cycle between neutral expression and arbitrary expressions, which also facilitate other applications such as face transfer and expression invariant face recognition. Experimental results show that our method can generate compelling perceptual results on various facial expression synthesis databases. An expression invariant face recognition experiment is also performed to further show the advantages of our proposed method. <s> BIB039 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Deep neural networks have recently been used to edit images with great success, in particular for faces. However, they are often limited to only being able to work at a restricted range of resolutions. Many methods are so flexible that face edits can often result in an unwanted loss of identity. 
This work proposes to learn how to perform semantic image edits through the application of smooth warp fields. Previous approaches that attempted to use warping for semantic edits required paired data, i.e. example images of the same subject with different semantic attributes. In contrast, we employ recent advances in Generative Adversarial Networks that allow our model to be trained with unpaired data. We demonstrate face editing at very high resolutions (4k images) with a single forward pass of a deep network at a lower resolution. We also show that our edits are substantially better at preserving the subject's identity. The robustness of our approach is demonstrated by showing plausible image editing results on the Cub200 birds dataset. To our knowledge this has not been previously accomplished, due the challenging nature of the dataset. <s> BIB040 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Multi-view face synthesis from a single image is an ill-posed computer vision problem. It often suffers from appearance distortions if it is not well-defined. Producing photo-realistic and identity preserving multi-view results is still a not well-defined synthesis problem. This paper proposes 3D aided duet generative adversarial networks (AD-GAN) to precisely rotate the yaw angle of an input face image to any specified angle. AD-GAN decomposes the challenging synthesis problem into two well-constrained subtasks that correspond to a face normalizer and a face editor. The normalizer first frontalizes an input image, and then the editor rotates the frontalized image to a desired pose guided by a remote code. In the meantime, the face normalizer is designed to estimate a novel dense UV correspondence field, making our model aware of 3D face geometry information. In order to generate photo-realistic local details and accelerate convergence process, the normalizer and the editor are trained in a two-stage manner and regulated by a conditional self-cycle loss and a perceptual loss. Exhaustive experiments on both controlled and uncontrolled environments demonstrate that the proposed method not only improves the visual realism of multi-view synthetic images but also preserves identity information well. <s> BIB041 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> This paper presents a comprehensive study of post-mortem human iris recognition carried out for 1200 near-infrared and 1787 visible-light samples collected from 37 deceased individuals kept in mortuary conditions. We used four independent iris recognition methods (three commercial and one academic) to analyze genuine and impostor comparison scores and check the dynamics of iris quality decay over a period of up to 814 h after death. This study shows that post-mortem iris recognition may be close-to-perfect approximately 5–7 h after death and occasionally is still viable even 21 days after death. These conclusions contradict the statements present in the past literature that the iris is unusable as a biometrics shortly after death, and show that the dynamics of post-mortem changes to the iris that are important for biometric identification are more moderate than previously hypothesized. This paper contains a thorough medical commentary that helps to understand which post-mortem metamorphoses of the eye may impact the performance of automatic iris recognition. 
An important finding is that false-match probability is higher when live iris images are compared with post-mortem samples than when only live samples are used in comparisons. This paper conforms to reproducible research and the database used in this study is made publicly available to facilitate research on post-mortem iris recognition. To the best of our knowledge, this paper offers the most comprehensive evaluation of post-mortem iris recognition and the largest database of post-mortem iris images. <s> BIB042 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> We present a deep learning-based method for removing makeup effects (de-makeup) in a face image. This problem poses a major challenge due to obscuring of the underlying facial features by cosmetics, which is very important in multimedia applications in the field of security, entertainment, and social networking. To address this task, we propose the bidirectional tunable de-makeup network (BTD-Net), which jointly learns the makeup process to aid in learning the de-makeup process. For tractable learning of the makeup process, which is a one-to-many mapping determined by the cosmetics that are applied, we introduce a latent variable that reflects the makeup style. This latent variable is extracted in the de-makeup process and used as a condition on the makeup process to constrain the one-to-many mapping to a specific solution. Through extensive experiments, our proposed BTD-Net is found to surpass the state-of-art techniques in estimating realistic non-makeup faces that correspond to the input makeup images. We additionally show that applications such as tuning the amount of makeup can be enhanced through the use of this method. <s> BIB043 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> This paper presents a novel approach for synthesizing automatically age-progressed facial images in video sequences using Deep Reinforcement Learning. The proposed method models facial structures and the longitudinal face-aging process of given subjects coherently across video frames. The approach is optimized using a long-term reward, Reinforcement Learning function with deep feature extraction from Deep Convolutional Neural Network. Unlike previous age-progression methods that are only able to synthesize an aged likeness of a face from a single input image, the proposed approach is capable of age-progressing facial likenesses in videos with consistently synthesized facial features across frames. In addition, the deep reinforcement learning method guarantees preservation of the visual identity of input faces after age-progression. Results on videos of our new collected aging face AGFW-v2 database demonstrate the advantages of the proposed solution in terms of both quality of age-progressed faces, temporal smoothness, and cross-age face verification. <s> BIB044 </s> A Survey of Deep Facial Attribute Analysis <s> Introduction <s> Since it is difficult to collect face images of the same subject over a long range of age span, most existing face aging methods resort to unpaired datasets to learn age mappings. However, the matching ambiguity between young and aged face images inherent to unpaired training data may lead to unnatural changes of facial attributes during the aging process, which could not be solved by only enforcing identity consistency like most existing studies do. 
In this paper, we propose an attribute-aware face aging model with wavelet based Generative Adversarial Networks (GANs) to address the above issues. To be specific, we embed facial attribute vectors into both the generator and discriminator of the model to encourage each synthesized elderly face image to be faithful to the attribute of its corresponding input. In addition, a wavelet packet transform (WPT) module is incorporated to improve the visual fidelity of generated images by capturing age-related texture details at multiple scales in the frequency space. Qualitative results demonstrate the ability of our model in synthesizing visually plausible face images, and extensive quantitative evaluation results show that the proposed method achieves state-of-the-art performance on existing datasets. <s> BIB045
Facial attributes are intuitive semantic features that describe human-understandable visual properties of face images, such as smiling, eyeglasses, and mustache. As vital information about faces, facial attributes have therefore contributed to numerous real-world applications, e.g., face verification (Kumar et al. 2009; BIB004 BIB023 BIB016 ), face recognition BIB008 BIB009 BIB017 BIB024 , face retrieval BIB010 BIB025 BIB026 BIB002 , and face image synthesis (Huang et al. 2018a, b; BIB041 BIB027 BIB028 ). Facial attribute analysis, which aims to build a bridge between human-understandable visual descriptions and the abstract feature representations required by real-world computer vision tasks, has attracted increasing attention and has become a hot research topic. Recently, deep learning techniques have made excellent progress in learning abstract feature representations, leading to significant performance improvements in deep facial attribute analysis.

Deep facial attribute analysis mainly consists of two sub-issues: facial attribute estimation (FAE) and facial attribute manipulation (FAM). Given a face image, FAE trains attribute classifiers to recognize whether a specific facial attribute is present, and FAM modifies face images to synthesize or remove desired attributes by constructing generative models. We provide concise illustrations of these two sub-issues in Fig. 1.

Fig. 1 Illustrations of the two sub-issues in deep facial attribute analysis, i.e., (a) FAE and (b) FAM (a comes from the CelebA dataset, and b comes from BIB029 ).

Deep FAE methods can generally be categorized into two groups: part-based methods and holistic methods. Part-based FAE methods first locate the positions of facial attributes and then extract features according to the obtained localization cues for the subsequent attribute prediction. According to the scheme used to locate facial attributes, part-based methods can be further classified into two subcategories: separate auxiliary localization based methods and end-to-end localization based methods. Separate auxiliary localization based FAE methods rely on existing part detectors or auxiliary localization algorithms, e.g., facial key point detection BIB030 BIB031 and semantic segmentation BIB018 BIB019 . The corresponding features from different positions are then extracted for the subsequent estimation; note that the localization and the estimation are performed in a separate and independent manner. In contrast, end-to-end localization based methods exploit the locations of attributes and predict their presence simultaneously in end-to-end frameworks. Compared with part-based methods, holistic methods focus more on learning attribute relationships and estimating facial attributes in a unified framework without any additional localization modules. By assigning shared and attribute-specific learning to different layers of networks, holistic methods model correlations and distinctions among facial attributes to exploit their complementary information. During this process, holistic FAE algorithms resort to additional prior or auxiliary information, such as attribute grouping or identity information BIB032 , to customize their network architectures. Deep FAM methods are mainly built on generative models, of which generative adversarial networks (GANs) BIB006 BIB013 and variational autoencoders (VAEs) (Huang et al. 2018a, b) serve as the backbones.
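As a concrete (if simplified) illustration of the FAE formulation above, holistic estimation is commonly cast as multi-label binary classification over a shared face representation. The sketch below assumes a recent PyTorch/torchvision setup, a generic ResNet-18 backbone, and the 40 binary CelebA attributes; the class name `AttributeHead` and all dimensions are illustrative assumptions rather than any specific method surveyed here.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class AttributeHead(nn.Module):
    """Holistic FAE sketch: one shared backbone, 40 independent sigmoid outputs."""
    def __init__(self, num_attributes=40):
        super().__init__()
        backbone = models.resnet18(weights=None)   # generic CNN feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # drop the ImageNet classification layer
        self.backbone = backbone
        self.classifier = nn.Linear(feat_dim, num_attributes)

    def forward(self, x):
        return self.classifier(self.backbone(x))   # raw logits, one per attribute

model = AttributeHead()
criterion = nn.BCEWithLogitsLoss()                 # each attribute is a binary decision
images = torch.randn(8, 3, 224, 224)               # dummy mini-batch of aligned faces
labels = torch.randint(0, 2, (8, 40)).float()      # 0/1 presence of each attribute
loss = criterion(model(images), labels)
loss.backward()
```

A part-based variant would instead pool features from detected facial regions before the final linear layer, while end-to-end localization methods learn those regions jointly with the classifier.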
Furthermore, deep FAM algorithms can be divided into two groups: model-based methods and extra condition-based methods, where the main difference between them is whether extra conditions are introduced. Model-based methods construct a model without any extra conditional inputs and learn a set of model parameters that correspond to only one attribute during a single training process. Thus, editing another attribute requires another training process executed in the same way; multiple attribute manipulations therefore correspond to multiple training processes, resulting in expensive computation costs. In contrast, extra condition-based methods take extra attribute vectors or reference images as input conditions, and they can alter multiple attributes simultaneously by changing the corresponding values of the attribute vectors or by taking multiple exemplars with distinct attributes as references. Specifically, given an original image, an extra conditional attribute vector, such as a one-hot vector indicating the presence of the attribute, is concatenated with the latent codes of the original image. By comparison, extra conditional reference exemplars exchange specific attributes with the original image in an image-to-image translation framework. Note that these reference images do not need to share the identity of the original image. Hence, rather than merely altering the values of attribute vectors, attribute transfer based on reference images can capture more specific details of the references and yield more faithful facial attribute images BIB020 BIB029 BIB033 . Due to the more abundant facial details and more photorealistic generated images, this type of method has attracted much attention from current researchers.

In summary, we create a taxonomy of contemporary deep facial attribute analysis algorithms in a tree diagram in Fig. 2. Furthermore, to summarize the progress in deep facial attribute analysis, milestones of both deep FAE and FAM methods are listed in Figs. 3 and 4, respectively. As shown in Fig. 3, part-based and holistic FAE methods follow two parallel routes. The study of deep FAE can be traced back to the earliest part-based work of BIB007 , which takes whole person images as inputs. Then, LNet+ANet pushed deep FAE into an independent research branch, where only face images are taken as inputs to estimate face-related attributes. In addition, two large-scale face datasets, i.e., CelebA and LFWA, with 40 labeled attributes, were released to provide data support for deep FAE methods. Since then, part-based and holistic methods have developed jointly but with distinct directions and trends: part-based methods strongly emphasize facial details for discovering localization cues BIB018 BIB030 , whereas holistic methods tend to employ attribute relationships to customize networks for learning more discriminative features BIB032 . We outline the development of deep FAM methods in Fig. 4. Note that model-based methods and the two types of extra condition-based methods have their own evolutionary processes, but all follow the advances in GANs or VAEs. The earliest deep FAM work, DIAT, a model-based method, first attempts to utilize simple GANs to generate facial attributes. Meanwhile, conditional GANs BIB014 and VAEs BIB011 dominate extra condition-based FAM methods by taking attribute vectors as conditions.
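To make the attribute-vector conditioning described above concrete, the fragment below sketches how an extra condition-based generator can consume a latent code concatenated with a binary attribute vector, in the spirit of conditional GAN/VAE decoders such as BIB014 and BIB011 ; the architecture, output resolution, and all names are illustrative assumptions, not the published models.

```python
import torch
import torch.nn as nn

class ConditionalDecoder(nn.Module):
    """Extra condition-based FAM sketch: latent code z concatenated with attribute vector y."""
    def __init__(self, z_dim=128, num_attributes=40):
        super().__init__()
        self.fc = nn.Linear(z_dim + num_attributes, 512 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, y):
        h = self.fc(torch.cat([z, y], dim=1))       # conditioning by concatenation
        return self.deconv(h.view(-1, 512, 4, 4))   # 4x4 feature map -> 32x32 RGB image

decoder = ConditionalDecoder()
z = torch.randn(4, 128)                             # latent codes, e.g. from an encoder
y = torch.zeros(4, 40)
y[:, 31] = 1.0                                      # request one target attribute
edited = decoder(z, y)                              # shape (4, 3, 32, 32)
```

Editing an attribute then amounts to flipping the corresponding entry of y while keeping z fixed, which is precisely why such methods can alter several attributes in one pass yet struggle to leave unrelated details untouched.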
Though extra attribute vector based methods have the remarkable advantage of changing multiple attributes simultaneously, they cannot guarantee that the remaining details irrelevant to the manipulated attributes stay unchanged. Model-based methods can overcome this problem, but they cannot manipulate multiple attributes in a single training process. In light of these issues, methods conditioned on reference exemplars have come to researchers' attention. They can balance changing multiple attributes of interest with preserving other irrelevant attributes, while generating more photorealistic facial attribute images. Hence, exemplar-guided FAM methods are becoming a popular research trend.

Although a large number of deep FAE and FAM methods achieve appealing performance, there are still several severe challenges for future deep facial attribute analysis. Therefore, we summarize these urgent challenges and analyze possible opportunities in terms of data, algorithms, and applications. The corresponding overview is described in Fig. 5.

First, from the perspective of data, contemporary deep FAE methods suffer from insufficient training data. The two most commonly used datasets are collected from celebrities or news images, whose attribute types, illumination, views, and poses all differ significantly from real-world data. Therefore, future deep FAE models will have high demands for diverse data sources and excellent data quality [e.g., video data BIB015 BIB034 ]. Future facial attribute images need to cover more real-world scenarios and a wider range of attribute types so that models can better capture representative features that conform to real-world data distributions. In addition, the imbalanced distribution of facial attribute images manifests in two aspects: the attribute category imbalance within a single dataset and the domain gaps between training and testing datasets. The former, the class-imbalance issue, biases FAE models towards the majority samples and causes them to ignore the minority ones, resulting in degraded performance on underrepresented attributes. In contrast, the latter, the domain adaptation issue, which has not yet been fully explored in deep FAE algorithms, concerns the generalization of models, especially when testing on unseen data. Regarding the data challenges and opportunities in deep FAM, the rapid development of multimedia in the era of big data has given rise to rich video data. However, tracking and annotating facial attributes in videos is difficult because of spatial and temporal dynamics BIB021 , so video attribute manipulation remains an open task due to the lack of available training data. In addition, a large proportion of current algorithms evaluate the quality of their generated facial attribute images based on visual fidelity BIB014 BIB029 . Because of the lack of established protocols and standards, such measurements might have adverse effects on the performance evaluation of deep FAM methods. Therefore, seeking unified, standard evaluation schemes that support both qualitative and quantitative analyses remains a thorny problem.

Second, from the perspective of algorithms, deep part-based FAE methods mainly focus on two aspects. The first is to integrate multiple face-related tasks, such as attribute estimation and face recognition, into a unified framework. In this way, the complementary information among different tasks can be fully exploited to improve all of them.
For the second aspect, future part-based FAE algorithms are expected to discover more relationships among different attribute locations to handle in-the-wild data with complex environmental variations. Current holistic FAE algorithms discover attribute relationships with the help of prior information, e.g., human-made facial attribute groups. Such artificial partitions limit the generalization ability of models. Hence, the critical challenge for holistic FAE methods is to design adaptive attribute partition schemes that automatically explore attribute relationships during training. With regard to the algorithm challenges and opportunities in deep FAM, model-based methods have a severe drawback: they cannot keep attribute-irrelevant details unchanged, since supervision comes only from the target images with the desired attributes. In terms of extra condition-based FAM methods, on the one hand, attribute vector based algorithms need to work harder to manipulate attributes continuously, where interpolation schemes might be a solution worth considering. On the other hand, future reference exemplar-based algorithms are expected to generate more diverse attribute styles in more faithful and photorealistic face images.

Finally, from the perspective of applications in deep FAE, face images of the same person captured from different viewpoints might exhibit different attributes. An attribute visible on the frontal face may not be apparent on the profile; this is called the attribute inconsistency issue. By filtering only the more confident images for prediction, existing methods might neglect the rich information in multi-view face images. Therefore, how to keep attributes from the same identity consistent while taking full advantage of multi-view information for feature learning is an important question for the future. Second, biometric verification BIB001 BIB005 BIB012 BIB022 BIB042 is a developing application for digital mobile devices to resist various attacks in the real world. Compared with full-face based biometric verification BIB012 BIB005 , facial attributes contain more detailed characteristics and can better facilitate active authentication. The main obstacles lie in two aspects: the first is to introduce facial attributes into the task of active authentication appropriately and efficiently BIB022 , and the second is to explore suitable deep features and classifiers that trade off verification accuracy against mobile performance. Regarding the application challenges and opportunities in deep FAM, facial makeup BIB035 BIB036 BIB043 and aging BIB003 BIB044 BIB045 have become hot topics in computer vision. These two tasks focus on subtle facial details related to makeup and age attributes. Due to their promising performance in mobile device entertainment and identity-relevant verification, they have turned into crucial study branches independent of general deep FAM methods and have shown significant potential to facilitate more practical applications BIB037 BIB038 BIB039 . In addition, contemporary deep FAM research only works well within a limited range of resolutions and under laboratory conditions. On the one hand, this limitation makes manipulating high-resolution or low-quality face images more difficult in real-world applications; on the other hand, it provides an opportunity to integrate face super-resolution into attribute manipulation BIB040 in future research.
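One reading of the interpolation suggestion above: rather than feeding a discrete 0/1 attribute vector to a conditional generator, the target entry can be swept continuously between the source and target values. The sketch below is a minimal, hypothetical illustration; `decoder` stands in for a trained conditional generator such as the one sketched earlier.

```python
import torch

def interpolate_attribute(decoder, z, y_src, y_tgt, steps=5):
    """Blend two attribute vectors linearly to produce a graded editing sequence."""
    frames = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        y = (1 - alpha) * y_src + alpha * y_tgt     # continuous attribute intensity
        frames.append(decoder(z, y))
    return torch.stack(frames, dim=1)               # (batch, steps, ...)

# Dummy decoder stand-in; in practice this would be a trained conditional
# generator such as the ConditionalDecoder sketched earlier.
decoder = lambda z, y: torch.tanh(z.sum(dim=1, keepdim=True) + y)
z = torch.randn(4, 128)
y_src = torch.zeros(4, 40)
y_tgt = y_src.clone()
y_tgt[:, 31] = 1.0                                  # sweep a single target attribute
sequence = interpolate_attribute(decoder, z, y_src, y_tgt)   # shape (4, 5, 40) here
```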
In addition, the relationship between deep FAE and FAM can contribute to improving both tasks. On the one hand, FAM is a valuable data augmentation scheme for FAE, since generated facial attribute images can significantly increase the amount of training data and thereby relieve overfitting. On the other hand, FAE can serve as a quantitative performance criterion for FAM, where the accuracy gap between real and generated images reflects the performance of deep FAM algorithms.

In this paper, we conduct an in-depth survey of facial attribute analysis based on deep learning, covering both FAE and FAM. The primary goal is to provide an overview of the two problems and to highlight the respective strengths and weaknesses of existing methods, giving newcomers a solid starting point. The remainder of this paper is organized as follows. In Sect. 2, we summarize the general two-stage pipeline that deep facial attribute analysis follows, including data preprocessing and model construction; the corresponding preliminary theories are also introduced for both FAE and FAM. In Sect. 3, we list commonly used publicly available facial attribute datasets and metrics. Sections 4 and 5 provide detailed overviews of state-of-the-art deep FAE and FAM methods, respectively, together with their advantages and disadvantages. Additional related issues, as well as challenges and opportunities, are discussed in Sects. 6 and 7, respectively. Finally, we conclude this paper in Sect. 8.
A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> We have created the first image search engine based entirely on faces. Using simple text queries such as "smiling men with blond hair and mustaches," users can search through over 3.1 million faces which have been automatically labeled on the basis of several facial attributes. Faces in our database have been extracted and aligned from images downloaded from the internet using a commercial face detector, and the number of images and attributes continues to grow daily. Our classification approach uses a novel combination of Support Vector Machines and Adaboost which exploits the strong structure of faces to select and train on the optimal set of features for each attribute. We show state-of-the-art classification results compared to previous works, and demonstrate the power of our architecture through a functional, large-scale face search engine. Our framework is fully automatic, easy to scale, and computes all labels off-line, leading to fast on-line search performance. In addition, we describe how our system can be used for a number of applications, including law enforcement, social networks, and personal photo management. Our search engine will soon be made publicly available. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> We investigate the importance of parts for the tasks of action and attribute classification. We develop a part-based approach by leveraging convolutional network features inspired by recent advances in computer vision. Our part detectors are a deep version of poselets and capture parts of the human body under a distinct set of poses. For the tasks of action and attribute classification, we train holistic convolutional neural networks and show that adding parts leads to top-performing results for both tasks. We observe that for deeper networks parts are less significant. 
In addition, we demonstrate the effectiveness of our approach when we replace an oracle person detector, as is the default in the current evaluation protocol for both tasks, with a state-of-the-art person detection system. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> Facial attributes are soft-biometrics that allow limiting the search space, e.g., by rejecting identities with non-matching facial characteristics such as nose sizes or eyebrow shapes. In this paper, we investigate how the latest versions of deep convolutional neural networks, ResNets, perform on the facial attribute classification task. We test two loss functions: the sigmoid cross-entropy loss and the Euclidean loss, and find that for classification performance there is little difference between these two. Using an ensemble of three ResNets, we obtain the new state-of-the-art facial attribute classification error of 8.00 % on the aligned images of the CelebA dataset. More significantly, we introduce the Alignment-Free Facial Attribute Classification Technique (AFFACT), a data augmentation technique that allows a network to classify facial attributes without requiring alignment beyond detected face bounding boxes. To our best knowledge, we are the first to report similar accuracy when using only the detected bounding boxes — rather than requiring alignment based on automatically detected facial landmarks — and who can improve classification accuracy with rotating and scaling test images. We show that this approach outperforms the CelebA baseline on unaligned images with a relative improvement of 36.8 %. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> Facial attribute analysis in the real world scenario is very challenging mainly because of complex face variations. Existing works of analyzing face attributes are mostly based on the cropped and aligned face images. However, this result in the capability of attribute prediction heavily relies on the preprocessing of face detector. To address this problem, we present a novel jointly learned deep architecture for both facial attribute analysis and face detection. Our framework can process the natural images in the wild and our experiments on CelebA and LFWA datasets clearly show that the state-of-the-art performance is obtained. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> Humans focus attention on different face regions when recognizing face attributes. Most existing face attribute classification methods use the whole image as input. Moreover, some of these methods rely on fiducial landmarks to provide defined face parts. In this paper, we propose a cascade network that simultaneously learns to localize face regions specific to attributes and performs attribute classification without alignment. First, a weakly-supervised face region localization network is designed to automatically detect regions (or parts) specific to attributes. Then multiple part-based networks and a whole-image-based network are separately constructed and combined together by the region switch layer and attribute relation layer for final attribute classification. A multi-net learning method and hint-based model compression is further proposed to get an effective localization model and a compact classification model, respectively. 
Our approach achieves significantly better performance than state-of-the-art methods on unaligned CelebA dataset, reducing the classification error by 30.9%. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> State-of-the-art methods of attribute detection from faces almost always assume the presence of a full, unoccluded face. Hence, their performance degrades for partially visible and occluded faces. In this paper, we introduce SPLITFACE, a deep convolutional neural network-based method that is explicitly designed to perform attribute detection in partially occluded faces. Taking several facial segments and the full face as input, the proposed method takes a data driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the implementation of committee machine techniques for combining local and global decisions to boost performance. With access to segment-based predictions, SPLITFACE can predict well those attributes which are localized in the visible parts of the face, without having to rely on the presence of the whole face. We use the CelebA and LFWA facial attribute datasets for standard evaluations. We also modify both datasets, to occlude the faces, so that we can evaluate the performance of attribute detection algorithms on partial faces. Our evaluation shows that SPLITFACE significantly outperforms other recent methods especially for partial faces. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Face Detection and Alignment <s> Face attribute prediction in the wild is important for many facial analysis applications, yet it is very challenging due to ubiquitous face variations. In this paper, we address face attribute prediction in the wild by proposing a novel method, lAndmark Free Face AttrIbute pRediction (AFFAIR). Unlike traditional face attribute prediction methods that require facial landmark detection and face alignment, AFFAIR uses an end-to-end learning pipeline to jointly learn a hierarchy of spatial transformations that optimize facial attribute prediction with no reliance on landmark annotations or pre-trained landmark detectors. AFFAIR achieves this through simultaneously: 1) learning a global transformation which effectively alleviates negative effect of global face variation for the following attribute prediction tailored for each face; 2) locating the most relevant facial part for attribute prediction; and 3) aggregating the global and local features for robust attribute prediction. Within AFFAIR, a new competitive learning strategy is developed that effectively enhances global transformation learning for better attribute prediction. We show that with zero information about landmarks, AFFAIR achieves the state-of-the-art performance on three face attribute prediction benchmarks, which simultaneously learns the face-level transformation and attribute-level localization within a unified framework. <s> BIB008
Before databases with richer facial attribute annotations were released, most attribute prediction methods BIB001 BIB003 took whole human images (faces and torsos) as inputs. Only a few well-marked facial attributes could be estimated, e.g., smile, gender, and wearing glasses. However, torso regions contain considerable face-irrelevant information, resulting in redundant computation. Hence, face detection and alignment become crucial steps for locating face areas and reducing the adverse effects of attribute-irrelevant regions.

For face detection, one line of work first recognizes the gender attribute with a HyperFace detector that locates faces and landmarks, and BIB004 further extend this approach to predict 40 facial attributes simultaneously with the same HyperFace detector. In contrast, BIB001 use a poselet part detector BIB002 to detect different parts corresponding to different poses, where the face is an important part of the whole person image. Compared with the poselet detector operating on conventional features, BIB003 propose a 'deep' version of the poselet, which trains a sliding window detector operating on deep feature pyramids. Specifically, the deep poselet detector divides the human body into three parts (head, torso, and legs) and clusters the fiducial key points of each part into many different poselets. However, because existing face detectors only locate rough facial regions, attributes in more subtle areas, such as eyebrows, cannot be predicted well.

For face alignment, well-aligned face databases with fiducial key points can alleviate the adverse effects of misalignment on both FAE and FAM, since more specific attribute regions can be located through these key points. The All-in-One Face algorithm can be utilized to obtain fiducial key points and full faces. Based on this algorithm, BIB007 divide a face into 14 segments related to different facial regions and address attribute prediction in partially visible faces. BIB001 artificially divide a face into 10 functional parts, including hair, forehead, eyebrows, eyes, nose, cheeks, upper lip, mouth, and chin. These facial areas are wide and robust enough to absorb discrepancies among individual faces, and the geometric characteristics shared by different faces can be well exploited.

Recently, researchers have tended to integrate face detection and alignment into the training process of facial attribute analysis. BIB005 treat face detection as a special case of general semi-rigid object detection and design a joint network architecture that improves both face detection and attribute estimation. More importantly, this approach can handle in-the-wild input images with complex illumination and occlusions, with no extra cropping or alignment operations. BIB006 propose a cascade network that locates face regions according to different attributes and performs FAE simultaneously, with no need to align faces. BIB008 design the AFFAIR network, which learns a hierarchy of spatial transformations and predicts facial attributes without landmarks. In summary, integrating face detection and alignment into the network training process is becoming a beneficial research trend.
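To illustrate the alignment step in isolation, the following sketch levels and rescales a face so that its two eye centers land at canonical positions in a fixed-size crop. It is a generic similarity-transform alignment using OpenCV, with landmark coordinates assumed to come from any off-the-shelf detector; the canonical positions and crop size are arbitrary choices rather than the settings of any specific dataset or paper.

```python
import cv2
import numpy as np

def align_face(image, left_eye, right_eye, out_size=(178, 218), eye_dist=52.0):
    """Warp `image` so the eyes are horizontal and a fixed distance apart.

    `left_eye`/`right_eye` are (x, y) eye centers as seen in the image
    (left = smaller x), e.g., from any landmark detector. `out_size` is the
    (width, height) of the output crop; all constants are illustrative.
    """
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))          # rotation that levels the eyes
    scale = eye_dist / max(np.hypot(rx - lx, ry - ly), 1e-6)  # normalize inter-ocular distance
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)

    matrix = cv2.getRotationMatrix2D(center, angle, scale)
    # Translate so the eye midpoint lands at a canonical location in the crop.
    matrix[0, 2] += out_size[0] * 0.5 - center[0]
    matrix[1, 2] += out_size[1] * 0.4 - center[1]
    return cv2.warpAffine(image, matrix, out_size)
```

The same warp can be applied to the landmark coordinates themselves, so that attribute-specific regions (eyes, mouth, eyebrows) can be cropped consistently across faces.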
A Survey of Deep Facial Attribute Analysis <s> Feature Extraction <s> The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Feature Extraction <s> Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Feature Extraction <s> Predicting attributes from face images in the wild is a challenging computer vision problem. To automatically describe face attributes from face containing images, traditionally one needs to cascade three technical blocks --- face localization, facial descriptor construction, and attribute classification --- in a pipeline. As a typical classification problem, face attribute prediction has been addressed using deep learning. Current state-of-the-art performance was achieved by using two cascaded Convolutional Neural Networks (CNNs), which were specifically trained to learn face localization and attribute description. In this paper, we experiment with an alternative way of employing the power of deep representations from CNNs. Combining with conventional face localization techniques, we use off-the-shelf architectures trained for face recognition to build facial descriptors. Recognizing that the describable face attributes are diverse, our face descriptors are constructed from different levels of the CNNs for different attributes to best facilitate face attribute prediction. Experiments on two large datasets, LFWA and CelebA, show that our approach is entirely comparable to the state-of-the-art. 
Our findings not only demonstrate an efficient face attribute prediction approach, but also raise an important question: how to leverage the power of off-the-shelf CNN representations for novel tasks. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Feature Extraction <s> Predicting facial attributes from faces in the wild is very challenging due to pose and lighting variations in the real world. The key to this problem is to build proper feature representations to cope with these unfavourable conditions. Given the success of Convolutional Neural Network (CNN) in image classification, the high-level CNN feature, as an intuitive and reasonable choice, has been widely utilized for this problem. In this paper, however, we consider the mid-level CNN features as an alternative to the high-level ones for attribute prediction. This is based on the observation that face attributes are different: some of them are locally oriented while others are globally defined. Our investigations reveal that the mid-level deep representations outperform the prediction accuracy achieved by the (fine-tuned) high-level abstractions. We empirically demonstrate that the midlevel representations achieve state-of-the-art prediction performance on CelebA and LFWA datasets. Our investigations also show that by utilizing the mid-level representations one can employ a single deep network to achieve both face recognition and attribute prediction. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Feature Extraction <s> Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them. In the context of deep neural networks, this idea is often realized by hand-designed network architectures with layers that are shared across tasks and branches that encode task-specific features. However, the space of possible multi-task deep architectures is combinatorially large and often the final architecture is arrived at by manual exploration of this space, which can be both error-prone and tedious. We propose an automatic approach for designing compact multi-task deep learning architectures. Our approach starts with a thin multi-layer network and dynamically widens it in a greedy manner during training. By doing so iteratively, it creates a tree-like deep architecture, on which similar tasks reside in the same branch until at the top layers. Evaluation on person attributes classification tasks involving facial and clothing attributes suggests that the models produced by the proposed method are fast, compact and can closely match or exceed the state-of-the-art accuracy from strong baselines by much more expensive models. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Feature Extraction <s> We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference model are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shanon divergence between the generative and inference network is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. 
There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model's inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task. <s> BIB006
Deep convolutional neural networks (CNNs) play a significant role in learning discriminative representations and have achieved attractive performance in deep FAE. In general, arbitrary classical CNN architectures, such as VGG BIB001 and ResNet BIB002 , can be used to extract deep facial attribute features. For example, BIB003 directly apply FaceNet and VGG-16 networks to capture attribute features of face images. Considering that features at different levels of a network might affect the performance of deep FAE methods differently, BIB004 take mid-level CNN features as an alternative to high-level features. Their experiments demonstrate that even early convolutional layers achieve performance comparable to state-of-the-art methods on most facial attributes, and that mid-level representations can outperform high-level abstract features. One reason for this superiority is that mid-level features are not bound by the fixed connections between convolutional and fully connected (FC) layers; consequently, the CNN can accept inputs of arbitrary receptive size and capture rich information from face images. In addition to using or combining classical deep networks, several methods design customized network architectures for learning discriminative features. BIB005 propose an automatically constructed, compact multi-task architecture, which starts with a thin multi-layer network and dynamically widens it in a greedy manner. BIB006 build a hierarchical generative model and a corresponding inference model trained with the adversarial learning paradigm.
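As a concrete illustration of reusing intermediate activations, the following sketch pulls a mid-level feature map out of a standard torchvision ResNet-50 with a forward hook and pools it into a fixed-length descriptor. The choice of `layer3` and of global average pooling is illustrative rather than the exact configuration used by any of the cited works.

```python
import torch
from torchvision import models

def extract_midlevel_features(images):
    """Return pooled activations of an intermediate ResNet-50 stage (layer3)
    as mid-level attribute features; `images` is an ImageNet-normalized
    batch of shape (N, 3, 224, 224)."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.eval()

    captured = {}
    def hook(_module, _inputs, output):
        captured["mid"] = output          # shape: (N, 1024, 14, 14) for 224x224 inputs

    handle = backbone.layer3.register_forward_hook(hook)
    with torch.no_grad():
        backbone(images)
    handle.remove()

    # Global average pooling yields a 1024-d vector per image that can be fed
    # to any downstream attribute classifier (SVM, MLP, ...).
    return captured["mid"].mean(dim=(2, 3))
```

Features from other stages (e.g., `layer2` or `layer4`) can be extracted the same way to compare mid-level against high-level representations.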
A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> Thesupport-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> We propose a method for recognizing attributes, such as the gender, hair style and types of clothes of people under large variation in viewpoint, pose, articulation and occlusion typical of personal photo album images. Robust attribute classifiers under such conditions must be invariant to pose, but inferring the pose in itself is a challenging problem. We use a part-based approach based on poselets. Our parts implicitly decompose the aspect (the pose and viewpoint). We train attribute classifiers for each such aspect and we combine them together in a discriminative model. We propose a new dataset of 8000 people with annotated attributes. Our method performs very well on this dataset, significantly outperforming a baseline built on the spatial pyramid match kernel method. On gender recognition we outperform a commercial face recognition system. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach. 
<s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> Data in vision domain often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances. To mitigate this issue, contemporary classification methods based on deep convolutional neural network (CNN) typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that more discriminative deep representation can be learned by enforcing a deep network to maintain both intercluster and inter-class margins. This tighter constraint effectively reduces the class imbalance inherent in the local data neighborhood. We show that the margins can be easily deployed in standard deep learning framework through quintuplet instance sampling and the associated triple-header hinge loss. The representation learned by our approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high-and low-level vision classification tasks that exhibit imbalanced class distribution. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> Attribute recognition, particularly facial, extracts many labels for each image. While some multi-task vision problems can be decomposed into separate tasks and stages, e.g., training independent models for each task, for a growing set of problems joint optimization across all tasks has been shown to improve performance. We show that for deep convolutional neural network (DCNN) facial attribute extraction, multi-task optimization is better. Unfortunately, it can be difficult to apply joint optimization to DCNNs when training data is imbalanced, and re-balancing multi-label data directly is structurally infeasible, since adding/removing data to balance one label will change the sampling of the other labels. This paper addresses the multi-label imbalance problem by introducing a novel mixed objective optimization network (MOON) with a loss function that mixes multiple task objectives with domain adaptive re-weighting of propagated loss. Experiments demonstrate that not only does MOON advance the state of the art in facial attribute recognition, but it also outperforms independently trained DCNNs using the same data. When using facial attributes for the LFW face recognition task, we show that our balanced (domain adapted) network outperforms the unbalanced trained network. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> Facial attributes are emerging soft biometrics that have the potential to reject non-matches, for example, based on mismatching gender. To be usable in stand-alone systems, facial attributes must be extracted from images automatically and reliably. In this paper, we propose a simple yet effective solution for automatic facial attribute extraction by training a deep convolutional neural network (DCNN) for each facial attribute separately, without using any pre-training or dataset augmentation, and we obtain new state-of-the-art facial attribute classification results on the CelebA benchmark. 
To test the stability of the networks, we generated adversarial images -- formed by adding imperceptible non-random perturbations to original inputs which result in classification errors -- via a novel fast flipping attribute (FFA) technique. We show that FFA generates more adversarial examples than other related algorithms, and that DCNNs for certain attributes are generally robust to adversarial inputs, while DCNNs for other attributes are not. This result is surprising because no DCNNs tested to date have exhibited robustness to adversarial images without explicit augmentation in the training procedure to account for adversarial examples. Finally, we introduce the concept of natural adversarial samples, i.e., images that are misclassified but can be easily turned into correctly classified images by applying small perturbations. We demonstrate that natural adversarial samples commonly occur, even within the training set, and show that many of these images remain misclassified even with additional training epochs. This phenomenon is surprising because correcting the misclassification, particularly when guided by training data, should require only a small adjustment to the DCNN parameters. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> Facial attributes are soft-biometrics that allow limiting the search space, e.g., by rejecting identities with non-matching facial characteristics such as nose sizes or eyebrow shapes. In this paper, we investigate how the latest versions of deep convolutional neural networks, ResNets, perform on the facial attribute classification task. We test two loss functions: the sigmoid cross-entropy loss and the Euclidean loss, and find that for classification performance there is little difference between these two. Using an ensemble of three ResNets, we obtain the new state-of-the-art facial attribute classification error of 8.00 % on the aligned images of the CelebA dataset. More significantly, we introduce the Alignment-Free Facial Attribute Classification Technique (AFFACT), a data augmentation technique that allows a network to classify facial attributes without requiring alignment beyond detected face bounding boxes. To our best knowledge, we are the first to report similar accuracy when using only the detected bounding boxes — rather than requiring alignment based on automatically detected facial landmarks — and who can improve classification accuracy with rotating and scaling test images. We show that this approach outperforms the CelebA baseline on unaligned images with a relative improvement of 36.8 %. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Attribute Classification <s> Data for face analysis often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances. To mitigate this issue, contemporary deep learning methods typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that more discriminative deep representation can be learned by enforcing a deep network to maintain inter-cluster margins both within and between classes. 
This tight constraint effectively reduces the class imbalance inherent in the local data neighborhood, thus carving much more balanced class boundaries locally. We show that it is easy to deploy angular margins between the cluster distributions on a hypersphere manifold. Such learned Cluster-based Large Margin Local Embedding (CLMLE), when combined with a simple k-nearest cluster algorithm, shows significant improvements in accuracy over existing methods on both face recognition and face attribute prediction tasks that exhibit imbalanced class distribution. <s> BIB008
Early methods learn feature representations with deep networks but make predictions with traditional classifiers, such as support vector machines (SVMs) BIB001 BIB002 , decision trees BIB003 , and k-nearest neighbor (kNN) classifiers BIB004 BIB008 . For example, Kumar et al. (2009) train multiple SVMs BIB001 with radial basis function (RBF) kernels, where each SVM predicts one facial attribute. BIB002 present a feedforward classification system with linear SVMs that classifies attributes at the image patch level, the whole image level, and the semantic relationship level. BIB003 construct a sum-product decision tree network that yields facial attribute region locations and classification results simultaneously. BIB004 BIB008 adopt the kNN algorithm to address the class-imbalanced attribute estimation problem.

In terms of classifiers based on deep learning, several convolutional layers followed by FC layers constitute a deep attribute classifier, which can be attached to the end of a deep feature extraction network to make predictions. A specific loss function then measures the discrepancy between the FC outputs and the ground truth so as to reduce classification errors. Below, we introduce two commonly used loss functions for deep FAE models.

The most prevalent loss function is the sigmoid cross-entropy loss, which performs a binary classification for each attribute. For example, some methods adopt the sigmoid cross-entropy loss to evaluate the network outputs and compute the scores of all facial attributes. In contrast, BIB005 treat multiple facial attribute classification as a regression problem and minimize the mean squared error (MSE), i.e., the Euclidean loss, by mixing the errors of all attributes; in this way, multiple attribute labels can be obtained simultaneously via a single deep convolutional neural network (DCNN). BIB006 also adopt the Euclidean loss but train a set of DCNNs, where each network predicts a single facial attribute. Despite the higher prediction accuracy that such per-attribute DCNNs achieve, they suffer from high computation and memory costs. To explore the effects of different loss functions on deep facial attribute classifiers, BIB007 compare the Euclidean loss and the sigmoid cross-entropy loss. Experiments with the same network but different loss functions demonstrate that the two losses achieve comparable performance for attribute estimation. Therefore, future researchers can choose either loss function according to their task with little change in performance.
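The two losses can be written in a few lines of PyTorch. The snippet below assumes a hypothetical network that outputs one logit per attribute for 40 binary attributes; the optional `pos_weight` term is one common, purely illustrative way to counteract the class imbalance mentioned above, not the scheme of any particular cited method.

```python
import torch
import torch.nn as nn

num_attrs = 40
logits = torch.randn(8, num_attrs)                     # raw network outputs for a batch of 8 faces
targets = torch.randint(0, 2, (8, num_attrs)).float()  # 0/1 ground-truth attribute labels

# (1) Sigmoid cross-entropy: one binary classification per attribute.
#     pos_weight (e.g., n_negative / n_positive per attribute) up-weights rare positives;
#     the values used here are placeholders.
pos_weight = torch.ones(num_attrs)
bce = nn.BCEWithLogitsLoss(pos_weight=pos_weight)(logits, targets)

# (2) Euclidean (MSE) loss: treat the 40-dimensional label vector as a regression target.
mse = nn.MSELoss()(torch.sigmoid(logits), targets)

print(bce.item(), mse.item())
```

Both losses operate on the same logit vector, which is consistent with the empirical observation above that they yield comparable attribute estimation accuracy.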
A Survey of Deep Facial Attribute Analysis <s> Facial Attribute Analysis Datasets <s> We introduce the use of describable visual attributes for face verification and image search. Describable visual attributes are labels that can be given to an image to describe its appearance. This paper focuses on images of faces and the attributes used to describe them, although the concepts also apply to other domains. Examples of face attributes include gender, age, jaw shape, nose size, etc. The advantages of an attribute-based representation for vision tasks are manifold: They can be composed to create descriptions at various levels of specificity; they are generalizable, as they can be learned once and then applied to recognize new objects or categories without any further training; and they are efficient, possibly requiring exponentially fewer attributes (and training data) than explicitly naming each category. We show how one can create and label large data sets of real-world images to train classifiers which measure the presence, absence, or degree to which an attribute is expressed in images. These classifiers can then automatically label new images. We demonstrate the current effectiveness-and explore the future potential-of using attributes for face verification and image search via human and computational experiments. Finally, we introduce two new face data sets, named FaceTracer and PubFig, with labeled attributes and identities, respectively. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Facial Attribute Analysis Datasets <s> Recognizing faces in unconstrained videos is a task of mounting importance. While obviously related to face recognition in still images, it has its own unique characteristics and algorithmic requirements. Over the years several methods have been suggested for this problem, and a few benchmark data sets have been assembled to facilitate its study. However, there is a sizable gap between the actual application needs and the current state of the art. In this paper we make the following contributions. (a) We present a comprehensive database of labeled videos of faces in challenging, uncontrolled conditions (i.e., ‘in the wild’), the ‘YouTube Faces’ database, along with benchmark, pair-matching tests1. (b) We employ our benchmark to survey and compare the performance of a large variety of existing video face recognition techniques. Finally, (c) we describe a novel set-to-set similarity measure, the Matched Background Similarity (MBGS). This similarity is shown to considerably improve performance on the benchmark tests. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Facial Attribute Analysis Datasets <s> The key challenge of face recognition is to develop effective feature representations for reducing intra-personal variations while enlarging inter-personal differences. In this paper, we show that it can be well solved with deep learning and using both face identification and verification signals as supervision. The Deep IDentification-verification features (DeepID2) are learned with carefully designed deep convolutional networks. The face identification task increases the inter-personal variations by drawing DeepID2 features extracted from different identities apart, while the face verification task reduces the intra-personal variations by pulling DeepID2 features extracted from the same identity together, both of which are essential to face recognition. 
The learned DeepID2 features can be well generalized to new identities unseen in the training data. On the challenging LFW dataset [11], 99.15% face verification accuracy is achieved. Compared with the best previous deep learning result [20] on LFW, the error rate has been significantly reduced by 67%. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Facial Attribute Analysis Datasets <s> Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Facial Attribute Analysis Datasets <s> The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. 
Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Facial Attribute Analysis Datasets <s> Recent research progress in facial attribute recognition has been dominated by small improvements on the only large-scale publicly available benchmark dataset, CelebA [18]. We propose to extend attribute prediction research to unconstrained videos. Applying attribute models trained on CelebA – a still image dataset – to video data highlights several major problems with current models, including the lack of consideration for both time and motion. Many facial attributes (e.g. gender, hair color) should be consistent throughout a video, however, current models do not produce consistent results. We introduce two methods to increase the consistency and accuracy of attribute responses in videos: a temporal coherence constraint, and a motionattention mechanism. Both methods work on weakly labeled data, requiring attribute labels for only one frame in a sequence, which we call the anchor frame. The temporal coherence constraint moves the network responses of non-anchor frames toward the responses of anchor frames for each sequence, resulting in more stable and accurate attribute predictions. We use the motion between anchor and non-anchor video frames as an attention mechanism, discarding the information from parts of the non-anchor frame where no motion occurred. This motion-attention focuses the network on the moving parts of the non-anchor frames (i.e. the face). Since there is no large-scale video dataset labeled with attributes, it is essential for attribute models to be able to learn from weakly labeled data. We demonstrate the effectiveness of the proposed methods by evaluating them on the challenging YouTube Faces video dataset [31]. The proposed motion-attention and temporal coherence methods outperform attribute models trained on CelebA, as well as those fine-tuned on video data. To the best of our knowledge, this paper is the first to address the problem of facial attribute prediction in video. <s> BIB006
We present an overview of publicly available facial attribute analysis datasets for both FAE and FAM, including data sources, sample sizes, and test protocols. More details of these datasets are listed in Table 1 .

FaceTracer dataset is an extensive collection of real-world face images collected from the internet. There are 15,000 faces with fiducial key points and 10 groups of attributes, where 7 groups of facial attributes comprise 19 attribute values, and the remaining 3 groups describe the image quality and the environment. Out of privacy and copyright considerations, this dataset provides the URL of each image rather than the image itself. For the protocol, FaceTracer takes 80% of the labeled data for training and the remaining 20% for testing with 5-fold cross-validation.

The Labeled Faces in the Wild (LFW) dataset consists of 13,233 images of cropped, centered frontal faces. The images cover 5749 people collected from online news sources, 1680 of whom have two or more images. Kumar et al. (2009) first collect 65 attribute labels through Amazon Mechanical Turk (AMT) and later expand them to 73 attributes BIB001 ; we denote these versions as LFW-65 and LFW-73 in Table 2 . BIB004 obtain 40 attribute labels automatically by binarizing the corresponding label values of the LFW dataset instead of labeling manually. Moreover, they annotate 5 fiducial key points, leading to the LFWA dataset, which is partitioned into one half for training (6263 images) and the remainder for testing.

PubFig dataset is a large, real-world face dataset containing 58,797 images of 200 people collected from the internet under uncontrolled conditions. Thus, this dataset covers considerable variations in poses, lighting, expressions, and scenes. PubFig labels 73 facial attributes, as many as LFW-73, but includes fewer individuals. Like FaceTracer, it also releases image URLs rather than the original images.

Celeb-Faces Attributes (CelebA) dataset is constructed by labeling images selected from Celeb-Faces BIB003 ; it is a large-scale face attribute dataset covering large pose variations and background clutter. There are 10,177 identities and 202,599 face images, each with 5 landmark locations and 40 binary attribute annotations. In experiments, CelebA is partitioned into three parts: images of the first 8000 identities (160,000 images) for training, images of another 1000 identities (20,000 images) for validation, and the remainder for testing (a minimal loading example for CelebA is given at the end of this overview).

Berkeley Human Attributes dataset is collected from the H3D (Bourdev and Malik 2009) dataset and the PASCAL VOC 2010 BIB005 training and validation sets, containing 8053 images centered on the full bodies of persons. There are wide variations in poses, viewpoints, and occlusions, so many existing methods that work on frontal faces do not perform well on this dataset. AMT is also used to collect labels for all 9 attributes from 5 independent annotators. The dataset is partitioned into 2003 images for training, 2010 for validation, and 4022 for testing.

Attribute 25K dataset is collected from Facebook and contains 24,963 people split into 8737 training, 8737 validation, and 7489 test examples. Since the images exhibit large variations in viewpoints, poses, and occlusions, not every attribute can be inferred from every image; for instance, the wearing hat attribute cannot be labeled when the head of the person is not visible.

Ego-Humans dataset draws images from videos recorded in New York City over two months, which track casual walkers with the OpenCV frontal face detector and facial landmark tracking.
What distinguishes it from other datasets is that it includes location and weather information obtained by clustering GPS coordinates. Moreover, nearly five million face pairs, along with their same/not-same labels, are extracted under the constraints of temporal information and geolocation. BIB005 manually annotate 17 facial attributes on 2714 images randomly selected from these five million images. For the testing protocol, 80% of the images are selected randomly for training and the remainder for testing.

University of Maryland Attribute Evaluation Dataset (UMD-AED) is built from image searches using 40 attributes as search terms and HyperFace as the face detector. UMD-AED serves as an evaluation dataset and contributes to class-imbalanced learning in deep facial attribute estimation. It is composed of 2800 face images labeled with a subset of the 40 attributes from CelebA and LFWA. Each attribute has 50 positive and 50 negative samples, which means that not every attribute is tagged in every image. In addition, compared with CelebA, which contains mostly frontal, high-quality, posed images, UMD-AED exhibits a large number of variations, e.g., varying image quality, lighting, and poses, wide age ranges, and different skin tones. UMD-AED therefore offers a much less biased metric for real-world data, and it can be used to evaluate whether attribute estimation models have learned discriminative feature representations.

YouTube Faces Dataset (with attribute labels): the original YouTube Faces dataset contains 3245 videos of 1595 celebrities with 620,000 frames BIB002 for face verification. BIB006 further extend it for video-based facial attribute prediction. They label the 40 CelebA attributes on the first of four frames sampled from every video; the remaining three unlabeled frames are taken one third, two thirds, and all of the way through each video, respectively. As a result, this dataset makes it possible to explore deep FAE methods with only weak labels. Ten-fold cross-validation is adopted as the protocol: testing is conducted on the labeled frames of the test splits, and results are averaged over the 10 splits.

To provide a comprehensive overview of existing attribute labels, we list all the labels of the LFW-73 dataset, which has the largest number of attributes, in Table 2 (an overview of facial attributes). Different facial attribute datasets contain different subsets of these annotations for deep FAE and FAM. Note that 40 of the attributes in Table 2 appear in the two most commonly used facial attribute analysis datasets, CelebA and LFWA; the remaining 33 attributes are labeled and used in other datasets, e.g., LFW with 65 attributes as mentioned in Table 1 .
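As referenced above, CelebA is exposed directly through torchvision, which makes it easy to reproduce the standard attribute estimation setup; the snippet below is a minimal loading example, where the transform, image size, and batch size are arbitrary choices.

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.CenterCrop(178),   # CelebA aligned images are 178x218
    transforms.Resize(128),
    transforms.ToTensor(),
])

# split: 'train', 'valid', 'test', or 'all'; target_type='attr' returns the
# 40 binary attribute labels per image.
celeba = datasets.CelebA(root="data", split="train", target_type="attr",
                         transform=transform, download=True)

loader = torch.utils.data.DataLoader(celeba, batch_size=64, shuffle=True)
images, attributes = next(iter(loader))
print(images.shape)      # torch.Size([64, 3, 128, 128])
print(attributes.shape)  # torch.Size([64, 40])
```

The returned attribute tensor contains one 0/1 entry per attribute, matching the binary annotation scheme described above.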
A Survey of Deep Facial Attribute Analysis <s> • mean Average Precision (mAP) <s> Machine learning is commonly used to improve ranked retrieval systems. Due to computational difficulties, few learning techniques have been developed to directly optimize for mean average precision (MAP), despite its widespread use in evaluating such systems. Existing approaches optimizing MAP either do not find a globally optimal solution, or are computationally expensive. In contrast, we present a general SVM learning algorithm that efficiently finds a globally optimal solution to a straightforward relaxation of MAP. We evaluate our approach using the TREC 9 and TREC 10 Web Track corpora (WT10g), comparing against SVMs optimized for accuracy and ROCArea. In most cases we show our method to produce statistically significant improvements in MAP scores. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> • mean Average Precision (mAP) <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB002
As multi-label image classification involves more than one label per image, the mean Average Precision (mAP) is a prevalent metric BIB001 BIB002 . The Average Precision (AP) of a single class summarizes its precision-recall curve from recall 0 to recall 1, combining recall and precision into a single prediction score for that class, and mAP is the mean of the APs over a set of categories.
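A minimal way to compute this metric for multi-label attribute prediction is to average per-attribute APs, e.g., with scikit-learn; the arrays below are random placeholders standing in for real ground-truth labels and predicted confidences.

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, 40))   # 0/1 ground-truth attribute labels
y_score = rng.random(size=(1000, 40))          # predicted confidences per attribute

# AP summarizes the precision-recall curve of one attribute; mAP averages over attributes.
ap_per_attr = [average_precision_score(y_true[:, i], y_score[:, i])
               for i in range(y_true.shape[1])]
mean_ap = float(np.mean(ap_per_attr))
print(f"mAP over {len(ap_per_attr)} attributes: {mean_ap:.4f}")
```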
A Survey of Deep Facial Attribute Analysis <s> • Qualitative Metrics <s> Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> • Qualitative Metrics <s> If I provide you a face image of mine (without telling you the actual age when I took the picture) and a large amount of face images that I crawled (containing labeled faces of different ages but not necessarily paired), can you show me what I would look like when I am 80 or what I was like when I was 5? The answer is probably a No. Most existing face aging works attempt to learn the transformation between age groups and thus would require the paired samples as well as the labeled query image. In this paper, we look at the problem from a generative modeling perspective such that no paired samples is required. In addition, given an unlabeled image, the generative model can directly produce the image with desired age attribute. We propose a conditional adversarial autoencoder (CAAE) that learns a face manifold, traversing on which smooth age progression and regression can be realized simultaneously. In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and then the vector is projected to the face manifold conditional on age through a deconvolutional generator. The latent vector preserves personalized face features (i.e., personality) and the age condition controls progression vs. regression. Two adversarial networks are imposed on the encoder and generator, respectively, forcing to generate more photo-realistic faces. Experimental results demonstrate the appealing performance and flexibility of the proposed framework by comparing with the state-of-the-art and ground truth. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> • Qualitative Metrics <s> The task of face attribute manipulation has found increasing applications, but still remains challenging with the requirement of editing the attributes of a face image while preserving its unique details. In this paper, we choose to combine the Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) for photorealistic image generation. We propose an effective method to modify a modest amount of pixels in the feature maps of an encoder, changing the attribute strength continuously without hindering global information. Our training objectives of VAE and GAN are reinforced by the supervision of face recognition loss and cycle consistency loss for faithful preservation of face details.
Moreover, we generate facial masks to enforce background consistency, which allows our training to focus on manipulating the foreground face rather than background. Experimental results demonstrate our method, called Mask-Adversarial AutoEncoder (M-AAE), can generate high-quality images with changing attributes and outperforms prior methods in detail preservation. <s> BIB003
A statistical survey is the most intuitive way to qualitatively evaluate the quality of generated images in most generative tasks. Specific rules are established in advance, subjects vote for the generated images with the most appealing visual fidelity, and researchers then draw conclusions from the statistical analysis of the votes. For example, BIB001 evaluate the performance of generated images in a survey conducted via AMT (see footnote 1). Given an input image, workers are required to select the best generated image according to instructions based on perceptual realism, quality of attribute manipulation, and preservation of the original identity; each worker also answers a set number of validation questions to check human effort. BIB002 conduct a statistical survey that asks volunteers to choose the better result between their proposed CAAE and existing works. BIB003 instruct volunteers to rank several deep FAM approaches based on perceptual realism, quality of transferred attributes, and preservation of personal features, and then calculate the average rank (between 1 and 7) of each approach. Lample et al. (2017) quantify their survey results along two aspects: naturalness, which measures the quality of the generated images, and accuracy, which measures how well the target attribute swap is reflected in the generation.
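As a toy illustration of how such ranking-based studies are typically summarized, the snippet below averages hypothetical volunteer rankings over several made-up methods; the data and method names are assumptions, not results from any cited work.

```python
import numpy as np

# Hypothetical user-study data: each row is one volunteer's ranking (1 = best)
# of four attribute-manipulation methods on the same set of generated images.
rankings = np.array([
    [1, 3, 2, 4],
    [2, 3, 1, 4],
    [1, 4, 2, 3],
])
methods = ["method_A", "method_B", "method_C", "method_D"]

# Average rank per method, as reported in ranking-based perceptual studies.
avg_rank = rankings.mean(axis=0)
for name, rank in zip(methods, avg_rank):
    print(f"{name}: average rank {rank:.2f}")
```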
A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Developing powerful deformable face models requires massive, annotated face databases on which techniques can be trained, validated and tested. Manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous. Fatigue is one of the reasons that in some cases annotations are inaccurate. This is why, the majority of existing facial databases provide annotations for a relatively small subset of the training images. Furthermore, there is hardly any correspondence between the annotated landmarks across different databases. These problems make cross-database experiments almost infeasible. To overcome these difficulties, we propose a semi-automatic annotation methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases. We employed our tool for creating annotations for MultiPIE, XM2VTS, AR, and FRGC Ver. 2 databases. The annotations will be made publicly available from http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/. Finally, we present experiments which verify the accuracy of produced annotations. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum of square error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and its importance to combat overfitting are also investigated. In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> We present an autoencoder that leverages learned representations to better measure similarities in data space.
By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> While neural machine translation (NMT) is making good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop, and generate informative feedback signals to train the translation models, even if without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. 
Based on the feedback signals generated during this process (e.g., the language-model likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using the policy gradient methods). We call the corresponding approach to neural machine translation dual-NMT. Experiments show that dual-NMT works very well on English ↔ French translation; especially, by learning from monolingual data (with 10% bilingual data for warm start), it achieves a comparable accuracy to NMT trained from the full bilingual data for the French-to-English translation task. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the `Frechet Inception Distance'' (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark. <s> BIB008 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Recent studies on face attribute transfer have achieved great success. A lot of models are able to transfer face attributes with an input image. However, they suffer from three limitations: (1) incapability of generating image by exemplars; (2) being unable to transfer multiple face attributes simultaneously; (3) low quality of generated images, such as low-resolution or artifacts. To address these limitations, we propose a novel model which receives two images of opposite attributes as inputs. Our model can transfer exactly the same type of attributes from one image to another by exchanging certain part of their encodings. 
All the attributes are encoded in a disentangled manner in the latent space, which enables us to manipulate several attributes simultaneously. Besides, our model learns the residual images so as to facilitate training on higher resolution images. With the help of multi-scale discriminators for adversarial training, it can even generate high-quality images with finer details and less artifacts. We demonstrate the effectiveness of our model on overcoming the above three limitations by comparing with other methods on the CelebA face database. A pytorch implementation is available at https://github.com/Prinsphield/ELEGANT. <s> BIB009 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Automatically manipulating facial attributes is challenging because it needs to modify the facial appearances, while keeping not only the person's identity but also the realism of the resultant images. Unlike the prior works on the facial attribute parsing, we aim at an inverse and more challenging problem called attribute manipulation by modifying a facial image in line with a reference facial attribute. Given a source input image and reference images with a target attribute, our goal is to generate a new image (i.e., target image) that not only possesses the new attribute but also keeps the same or similar content with the source image. In order to generate new facial attributes, we train a deep neural network with a combination of a perceptual content loss and two adversarial losses, which ensure the global consistency of the visual content while implementing the desired attributes often impacting on local pixels. The model automatically adjusts the visual attributes on facial appearances and keeps the edited images as realistic as possible. The evaluation shows that the proposed model can provide a unified solution to both local and global facial attribute manipulation such as expression change and hair style transfer. Moreover, we further demonstrate that the learned attribute discriminator can be used for attribute localization. <s> BIB010 </s> A Survey of Deep Facial Attribute Analysis <s> • Quantitative Metrics <s> Facial attribute editing aims to manipulate single or multiple attributes on a given face image, i.e., to generate a new face image with desired attributes while preserving other details. Recently, the generative adversarial net (GAN) and encoder–decoder architecture are usually incorporated to handle this task with promising results. Based on the encoder–decoder architecture, facial attribute editing is achieved by decoding the latent representation of a given face conditioned on the desired attributes. Some existing methods attempt to establish an attribute-independent latent representation for further attribute editing. However, such attribute-independent constraint on the latent representation is excessive because it restricts the capacity of the latent representation and may result in information loss, leading to over-smooth or distorted generation. Instead of imposing constraints on the latent representation, in this work, we propose to apply an attribute classification constraint to the generated image to just guarantee the correct change of desired attributes, i.e., to change what you want. Meanwhile, the reconstruction learning is introduced to preserve attribute-excluding details, in other words, to only change what you want. Besides, the adversarial learning is employed for visually realistic editing. 
These three components cooperate with each other forming an effective framework for high quality facial attribute editing, referred as AttGAN . Furthermore, the proposed method is extended for attribute style manipulation in an unsupervised manner. Experiments on two wild datasets, CelebA and LFW, show that the proposed method outperforms the state-of-the-art on realistic attribute editing with other facial details well preserved. <s> BIB011
Distribution difference measures calculate the differences between real images and generated face images. BIB009 achieve this with the Fréchet inception distance (FID) BIB008, which compares the means and covariance matrices of the two feature distributions before and after editing facial attributes. BIB010 compute the peak signal-to-noise ratio (PSNR) to measure pixel-level differences; they also calculate the structural similarity index (SSIM) and its multi-scale version MS-SSIM BIB001 to estimate structural distortion and the identity distance. All these measurements contribute to evaluating the high-level similarity of two face images. In addition, BIB011 train a face recognizer based on an Inception-ResNet BIB005 to measure identity preservation via rank-1 recognition accuracy. Face identity preservation is therefore becoming a promising metric, because it indicates whether a model preserves the facial details outside of the manipulated attributes.

Facial landmark detection gain uses the accuracy gain of landmark detection before and after attribute editing to evaluate the quality of synthesized images. For example, BIB006 adopt the ERT method BIB003, a landmark detection algorithm trained on the 300-W dataset BIB002. During testing, they divide the test set into three parts: the first contains images with positive attribute labels, the second contains images with negative labels, and the last contains the manipulated images derived from the first part. Then, the average normalized distance error is computed to evaluate the discrepancy between the landmarks of the generated images and the ground truths.

Facial attribute estimation constructs additional attribute prediction networks and measures the performance of FAM by their classification accuracy. BIB007 first design an Anet to predict facial attributes on the manipulated face images; the closer the outputs of the Anet are to the desired attribute labels, the better the generator is considered to perform. BIB004 train a regression-based attribute prediction network to calculate the attribute similarity between the conditional attributes and the generated attributes. Note that the FAE models used for evaluation are independent of FAM's training process, which means they must be well trained in advance and achieve reliable baseline accuracy over all facial attributes.
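For concreteness, the sketch below computes two of the pixel-level measures mentioned above, PSNR and SSIM (via scikit-image), between a face crop before and after a hypothetical attribute edit; the random images are placeholders, and FID would additionally require comparing Inception-feature statistics over whole image sets.

```python
import numpy as np
from skimage.metrics import structural_similarity


def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two images of identical shape."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)


# Hypothetical grayscale face crops before and after attribute editing.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
edited = np.clip(original + rng.normal(0, 5, size=(128, 128)), 0, 255).astype(np.uint8)

print("PSNR:", psnr(original, edited))
print("SSIM:", structural_similarity(original, edited, data_range=255))
```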
A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> We propose a method for recognizing attributes, such as the gender, hair style and types of clothes of people under large variation in viewpoint, pose, articulation and occlusion typical of personal photo album images. Robust attribute classifiers under such conditions must be invariant to pose, but inferring the pose in itself is a challenging problem. We use a part-based approach based on poselets. Our parts implicitly decompose the aspect (the pose and viewpoint). We train attribute classifiers for each such aspect and we combine them together in a discriminative model. We propose a new dataset of 8000 people with annotated attributes. Our method performs very well on this dataset, significantly outperforming a baseline built on the spatial pyramid match kernel method. On gender recognition we outperform a commercial face recognition system. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> We address the problem of interactive facial feature localization from a single image. Our goal is to obtain an accurate segmentation of facial features on high-resolution images under a variety of pose, expression, and lighting conditions. Although there has been significant work in facial feature localization, we are addressing a new application area, namely to facilitate intelligent high-quality editing of portraits, that brings requirements not met by existing methods. We propose an improvement to the Active Shape Model that allows for greater independence among the facial components and improves on the appearance fitting step by introducing a Viterbi optimization process that operates along the facial contours. Despite the improvements, we do not expect perfect results in all cases. 
We therefore introduce an interaction model whereby a user can efficiently guide the algorithm towards a precise solution. We introduce the Helen Facial Feature Dataset consisting of annotated portrait images gathered from Flickr that are more diverse and challenging than currently existing datasets. We present experiments that compare our automatic method to published results, and also a quantitative evaluation of the effectiveness of our interactive method. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> In this work, we propose an exemplar-based face image segmentation algorithm. We take inspiration from previous works on image parsing for general scenes. Our approach assumes a database of exemplar face images, each of which is associated with a hand-labeled segmentation map. Given a test image, our algorithm first selects a subset of exemplar images from the database, Our algorithm then computes a nonrigid warp for each exemplar image to align it with the test image. Finally, we propagate labels from the exemplar images to the test image in a pixel-wise manner, using trained weights to modulate and combine label maps from different exemplars. We evaluate our method on two challenging datasets and compare with two face parsing algorithms and a general scene parsing algorithm. We also compare our segmentation results with contour-based face alignment results, that is, we first run the alignment algorithms to extract contour points and then derive segments from the contours. Our algorithm compares favorably with all previous works on all datasets evaluated. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> We investigate the importance of parts for the tasks of action and attribute classification. We develop a part-based approach by leveraging convolutional network features inspired by recent advances in computer vision. Our part detectors are a deep version of poselets and capture parts of the human body under a distinct set of poses. For the tasks of action and attribute classification, we train holistic convolutional neural networks and show that adding parts leads to top-performing results for both tasks. We observe that for deeper networks parts are less significant. In addition, we demonstrate the effectiveness of our approach when we replace an oracle person detector, as is the default in the current evaluation protocol for both tasks, with a state-of-the-art person detection system. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> Attributes are semantically meaningful characteristics whose applicability widely crosses category boundaries. They are particularly important in describing and recognizing concepts where no explicit training example is given, \textit{e.g., zero-shot learning}. Additionally, since attributes are human describable, they can be used for efficient human-computer interaction. In this paper, we propose to employ semantic segmentation to improve facial attribute prediction. The core idea lies in the fact that many facial attributes describe local properties. In other words, the probability of an attribute to appear in a face image is far from being uniform in the spatial domain. We build our facial attribute prediction model jointly with a deep semantic segmentation network. 
This harnesses the localization cues learned by the semantic segmentation to guide the attention of the attribute prediction to the regions where different attributes naturally show up. As a result of this approach, in addition to recognition, we are able to localize the attributes, despite merely having access to image level labels (weak supervision) during training. We evaluate our proposed method on CelebA and LFWA datasets and achieve superior results to the prior arts. Furthermore, we show that in the reverse problem, semantic face parsing improves when facial attributes are available. That reaffirms the need to jointly model these two interconnected tasks. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> Facial expression synthesis with various intensities is a challenging synthesis task due to large identity appearance variations and a paucity of efficient means for intensity measurement. This paper advances the expression synthesis domain by the introduction of a Couple-Agent Face Parsing based Generative Adversarial Network (CAFP-GAN) that unites the knowledge of facial semantic regions and controllable expression signals. Specially, we employ a face parsing map as a controllable condition to guide facial texture generation with a special expression, which can provide a semantic representation of every pixel of facial regions. Our method consists of two sub-networks: face parsing prediction network (FPPN) uses controllable labels (expression and intensity) to generate a face parsing map transformation that corresponds to the labels from the input neutral face, and facial expression synthesis network (FESN) makes the pretrained FPPN as a part of it to provide the face parsing map as a guidance for expression synthesis. To enhance the reality of results, couple-agent discriminators are served to distinguish fake-real pairs in both two sub-nets. Moreover, we only need the neutral face and the labels to synthesize the unknown expression with different intensities. Experimental results on three popular facial expression databases show that our method has the compelling ability on continuous expression synthesis. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Separate Auxiliary Localization based Methods <s> State-of-the-art methods of attribute detection from faces almost always assume the presence of a full, unoccluded face. Hence, their performance degrades for partially visible and occluded faces. In this paper, we introduce SPLITFACE, a deep convolutional neural network-based method that is explicitly designed to perform attribute detection in partially occluded faces. Taking several facial segments and the full face as input, the proposed method takes a data driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the implementation of committee machine techniques for combining local and global decisions to boost performance. With access to segment-based predictions, SPLITFACE can predict well those attributes which are localized in the visible parts of the face, without having to rely on the presence of the whole face. We use the CelebA and LFWA facial attribute datasets for standard evaluations. We also modify both datasets, to occlude the faces, so that we can evaluate the performance of attribute detection algorithms on partial faces. 
Our evaluation shows that SPLITFACE significantly outperforms other recent methods especially for partial faces. <s> BIB008
Since facial attributes describe subtle details of face representations based on human vision, locating the positions of facial attributes encourages the subsequent feature extractors and attribute classifiers to focus on attribute-relevant regions. The most intuitive approach is to take existing face part detectors as auxiliaries. The poselet BIB001 BIB002 is a well-established part detector that describes a part of the human pose under a given viewpoint. Because these parts capture evidence from different areas of the body at different scales, complementary information can be learned to benefit attribute prediction. Typically, given a whole-person image, a poselet detector is first used to decompose the image into several image patches, named poselets, under various viewpoints and poses. The PANDA network then trains a set of CNNs, one for each poselet and one for the whole image, and the features from all these poselets are concatenated to yield the final feature representation. Finally, PANDA branches out into multiple binary classifiers, each of which recognizes one attribute. Based on PANDA, BIB005 introduce a deep version of the poselet detector and build a feature pyramid, where each level computes a prediction score for the corresponding attribute. However, the poselet detector only discovers coarse body parts and cannot explore subtle local details of face images. Considering that the probability of an attribute appearing in a face image is not uniform over the spatial domain, BIB006 propose employing semantic segmentation as a separate auxiliary localization scheme. They exploit the location cues obtained by semantic segmentation to guide the attention of attribute prediction to the regions where attributes naturally occur. Specifically, a semantic segmentation network is first designed in an encoder-decoder paradigm and trained on the Helen face dataset BIB003. During this process, semantic face parsing BIB004 BIB007 is performed as an additional task to learn detailed pixel-level location information. After the location cues are discovered, the semantic segmentation based pooling (SSP) and gating (SSG) mechanisms are presented to integrate the location information into attribute estimation. SSP decomposes the activations of the last convolutional layer into different semantic regions and aggregates only those activations that reside in the same region, while SSG gates the output activations between the convolutional layers and the batch normalization (BN) operation to control the activations of neurons from different semantic regions. In contrast, BIB008 utilize key points to segment faces into several image patches, which is more straightforward than semantic segmentation. These segments are fed into a set of facial segment networks to extract the corresponding feature representations and learn prediction scores, while the whole face image is fed into a full-face network. A global predictor network fuses the features from these segments, and two committee machines merge their scores for the final prediction. Compared with the above methods, which search for location clues of attributes directly, another line of work resorts to synthesized facial abstraction images that contain local facial parts and texture information, achieving the same goal indirectly. A purpose-designed GAN generates facial abstraction images, which are then fed into a dual-path facial attribute recognition network together with the real original images.
The dual-path network propagates the feature maps from the abstraction sub-network to the real-image sub-network and concatenates the two types of features for the final prediction. Although the generated facial abstraction images contain abundant location and texture information, their quality can have a significant impact on performance, especially when some attribute-related information is lost during image abstraction. Note that all separate auxiliary localization based deep FAE methods share a common drawback: they rely heavily on accurate facial landmark localization, face detection, facial semantic segmentation, face parsing, or facial partition schemes. If these localization strategies are imprecise or landmark annotations are unavailable, the performance of the subsequent attribute estimation task can be significantly degraded.
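To make the region-guided feature extraction idea more concrete, here is a minimal PyTorch sketch of pooling convolutional features separately within semantic face regions, in the spirit of (but not identical to) the SSP mechanism described above; the tensor shapes and region count are assumptions.

```python
import torch


def semantic_region_pooling(features, seg_mask, num_regions):
    """Pool convolutional features separately inside each semantic face region.

    features:  (B, C, H, W) activations from the last conv layer.
    seg_mask:  (B, H, W) integer map assigning every pixel to a face region
               (e.g. hair, eyes, nose, mouth), resized to the feature resolution.
    Returns a (B, num_regions, C) tensor of region-wise average-pooled features.
    """
    b, c, h, w = features.shape
    pooled = features.new_zeros(b, num_regions, c)
    for r in range(num_regions):
        mask = (seg_mask == r).unsqueeze(1).float()        # (B, 1, H, W)
        area = mask.sum(dim=(2, 3)).clamp(min=1.0)         # avoid divide-by-zero
        pooled[:, r] = (features * mask).sum(dim=(2, 3)) / area
    return pooled


# Toy usage: 4 semantic regions on an 8x8 feature map with 16 channels.
feats = torch.randn(2, 16, 8, 8)
mask = torch.randint(0, 4, (2, 8, 8))
print(semantic_region_pooling(feats, mask, num_regions=4).shape)  # (2, 4, 16)
```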
A Survey of Deep Facial Attribute Analysis <s> End-to-End Localization Based Methods <s> Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> End-to-End Localization Based Methods <s> Humans focus attention on different face regions when recognizing face attributes. Most existing face attribute classification methods use the whole image as input. Moreover, some of these methods rely on fiducial landmarks to provide defined face parts. In this paper, we propose a cascade network that simultaneously learns to localize face regions specific to attributes and performs attribute classification without alignment. First, a weakly-supervised face region localization network is designed to automatically detect regions (or parts) specific to attributes. Then multiple part-based networks and a whole-image-based network are separately constructed and combined together by the region switch layer and attribute relation layer for final attribute classification. A multi-net learning method and hint-based model compression is further proposed to get an effective localization model and a compact classification model, respectively. Our approach achieves significantly better performance than state-of-the-art methods on unaligned CelebA dataset, reducing the classification error by 30.9%. <s> BIB002
Compared with the separate auxiliary localization based methods, which locate attribute regions and predict attributes separately and independently, end-to-end localization based methods jointly exploit the location cues where facial attributes appear and predict their presence within a unified framework. BIB001 first propose a cascaded deep learning framework for joint face localization and attribute prediction. Specifically, the cascaded CNN is made up of an LNet and an ANet, where the LNet locates the entire face region and the ANet extracts the high-level face representation from the located area. LNet is first pretrained by classifying massive general object categories to ensure good generalization capability, and it is then fine-tuned with the image-level attribute tags of the training images to learn features for face localization in a weakly supervised manner. Note that the main difference between LNet and the separate auxiliary localization based methods is that LNet requires neither face bounding boxes nor landmark annotations. Meanwhile, ANet is first pretrained by classifying massive face identities to handle the complex variations in unconstrained face images, and it is then fine-tuned to extract discriminative facial attribute representations. Furthermore, rather than extracting features patch by patch, ANet introduces an interweaved operation with locally shared filters to extract multiple feature vectors in a one-pass feed-forward process. Finally, SVMs are trained on these features to estimate each attribute, and the final prediction is obtained by averaging these values to compensate for small misalignments in face localization. The cascaded LNet-ANet framework demonstrates the benefit of pretraining with massive object categories and massive identities for enhancing feature representation learning. With such customized pretraining schemes and a cascaded network architecture, this method exhibits outstanding robustness to backgrounds and face variations. However, the coarse whole-face regions discovered by LNet cannot be used to explore finer local attribute details. Hence, BIB002 propose a cascade network to jointly locate facial attribute-relevant regions and perform attribute classification. Specifically, they first design a face region localization network (FRL) that builds a branch for each attribute to automatically detect the corresponding relevant region. The subsequent parts and whole (PaW) attribute classification network then selectively leverages information from all the attribute-relevant regions for the final estimation. Moreover, for attribute classification, Ding et al. define two FC layers: the region switch layer (RSL), which selects the relevant prediction sub-network, and the attribute relation layer (ARL), which models attribute relationships. In summary, the cascaded FRL-PaW model not only discovers semantic attribute regions but also exploits rich relationships among facial attributes. Besides, since this model automatically detects face regions, it achieves outstanding performance on unaligned datasets without any pre-alignment step. Note that the FRL-PaW method learns a separate location for each attribute, which makes training redundant and time-consuming because several facial attributes often occur in the same area. However, to the best of our knowledge, there is currently no specific solution for tackling this issue.
We expect future research to reduce the computational cost while still making predictions from attribute locations as accurately as possible. In summary, part-based deep FAE methods first locate the positions where facial attributes appear. Two strategies can be adopted: separate auxiliary localization and end-to-end localization. The former leverages existing part detectors or auxiliary localization algorithms, whereas the latter jointly exploits the locations in which facial attributes exist and predicts their presence. Compared with separate auxiliary localization based methods, which perform localization and prediction independently, end-to-end localization based methods locate and predict within a unified framework. After the location clues are obtained, features corresponding to specific attribute areas can be extracted and fed into attribute classifiers to make the estimation. Recently, researchers have increasingly shifted their focus to holistic FAE algorithms, since part-based counterparts are easily distracted and affected by their attribute localization mechanisms.
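As an illustration of the end-to-end idea of jointly localizing and predicting attributes from image-level labels only, the following toy PyTorch model learns one weakly supervised attention map per attribute and pools trunk features under it. This is a simplified sketch under assumed layer sizes, not the LNet-ANet or FRL-PaW architecture.

```python
import torch
import torch.nn as nn


class JointLocalizeAndPredict(nn.Module):
    """Toy end-to-end part-based model: a shared trunk, one weakly supervised
    spatial attention map per attribute, and per-attribute classifiers that
    pool trunk features under the corresponding attention map."""

    def __init__(self, num_attributes=40, channels=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv producing one localization (attention) map per attribute,
        # trained only from image-level attribute labels.
        self.attention = nn.Conv2d(channels, num_attributes, kernel_size=1)
        # One weight vector per attribute applied to its attended feature.
        self.classifier = nn.Linear(channels, num_attributes)

    def forward(self, x):
        feats = self.trunk(x)                               # (B, C, H, W)
        b, c, h, w = feats.shape
        attn = self.attention(feats).view(b, -1, h * w)     # (B, A, H*W)
        attn = torch.softmax(attn, dim=-1)
        pooled = torch.bmm(attn, feats.view(b, c, h * w).transpose(1, 2))  # (B, A, C)
        logits = (pooled * self.classifier.weight).sum(-1) + self.classifier.bias
        return logits                                       # (B, A)


model = JointLocalizeAndPredict()
print(model(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 40])
```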
A Survey of Deep Facial Attribute Analysis <s> Holistic Deep FAE Methods <s> Predicting facial attributes from faces in the wild is very challenging due to pose and lighting variations in the real world. The key to this problem is to build proper feature representations to cope with these unfavourable conditions. Given the success of Convolutional Neural Network (CNN) in image classification, the high-level CNN feature, as an intuitive and reasonable choice, has been widely utilized for this problem. In this paper, however, we consider the mid-level CNN features as an alternative to the high-level ones for attribute prediction. This is based on the observation that face attributes are different: some of them are locally oriented while others are globally defined. Our investigations reveal that the mid-level deep representations outperform the prediction accuracy achieved by the (fine-tuned) high-level abstractions. We empirically demonstrate that the midlevel representations achieve state-of-the-art prediction performance on CelebA and LFWA datasets. Our investigations also show that by utilizing the mid-level representations one can employ a single deep network to achieve both face recognition and attribute prediction. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Holistic Deep FAE Methods <s> Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them. In the context of deep neural networks, this idea is often realized by hand-designed network architectures with layers that are shared across tasks and branches that encode task-specific features. However, the space of possible multi-task deep architectures is combinatorially large and often the final architecture is arrived at by manual exploration of this space, which can be both error-prone and tedious. We propose an automatic approach for designing compact multi-task deep learning architectures. Our approach starts with a thin multi-layer network and dynamically widens it in a greedy manner during training. By doing so iteratively, it creates a tree-like deep architecture, on which similar tasks reside in the same branch until at the top layers. Evaluation on person attributes classification tasks involving facial and clothing attributes suggests that the models produced by the proposed method are fast, compact and can closely match or exceed the state-of-the-art accuracy from strong baselines by much more expensive models. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Holistic Deep FAE Methods <s> In this paper, we study the face attribute learning problem by considering the identity information and attribute relationships simultaneously. In particular, we first introduce a Partially Shared Multi-task Convolutional Neural Network (PS-MCNN), in which four Task Specific Networks (TSNets) and one Shared Network (SNet) are connected by Partially Shared (PS) structures to learn better shared and task specific representations. To utilize identity information to further boost the performance, we introduce a local learning constraint which minimizes the difference between the representations of each sample and its local geometric neighbours with the same identity. 
Consequently, we present a local constraint regularized multitask network, called Partially Shared Multi-task Convolutional Neural Network with Local Constraint (PS-MCNN-LC), where PS structure and local constraint are integrated together to help the framework learn better attribute representations. The experimental results on CelebA and LFWA demonstrate the promise of the proposed methods. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Holistic Deep FAE Methods <s> Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability. <s> BIB004
In contrast to part-based FAE approaches that detect and utilize facial components, holistic deep FAE methods focus on exploring attribute relationships and extract features from entire face images rather than from facial parts. A schematic diagram of holistic FAE models is provided in Fig. 8. As shown in Fig. 8, the key to modeling attribute relationships is learning common features at low-level shared layers and capturing attribute-specific features at high-level separated layers. Each separated layer corresponds to an attribute group. In general, these attribute groups are obtained manually according to semantics or attribute locations. By assigning different shared layers and attribute-specific layers, complementary information among multiple attributes can be discovered such that more discriminative features can be learned for the subsequent attribute classifiers. In general, there are two crucial issues that holistic deep FAE methods need to address when designing network architectures: (1) how to properly assign shared information and attribute-specific information to different layers of the network, and (2) how to explore relationships among facial attributes for learning more discriminative features. Taking these two problems as the main focus, we provide a brief review of holistic FAE methods in the following parts. To the best of our knowledge, MOON is one of the earliest holistic FAE methods with a multi-task framework. It has a mixed objective optimization network that learns multiple attribute labels simultaneously via a single DCNN. MOON treats deep FAE as a regression problem for the first time and adopts a 16-layer VGG network as the backbone, in which abstract high-level features are shared before the last FC layer. Multiple prediction scores are calculated with the MSE loss to reduce the regression error. Similarly, BIB001 replace the high-level CNN features in MOON with mid-level features to identify the best representation for each attribute. In contrast to splitting the network at the last FC layer, the multi-task deep CNN (MCNN) branches out into multiple groups at the mid-level convolutional layers to model attribute correlations. Specifically, based on the assumption that many attributes are strongly correlated, MCNN divides all 40 attributes into 9 groups according to semantics, i.e., gender, nose, mouth, eyes, face, around head, facial hair, cheeks, and fat. For example, big nose and pointy nose are grouped into the 'nose' category, and big lips, lipstick, mouth slightly open, and smiling are clustered into the 'mouth' category. Therefore, each group consists of similar attributes and learns high-level features independently. At the first two convolutional layers of MCNN, features are shared by all attributes. Then, MCNN branches out into several forks corresponding to the different attribute groups, i.e., each attribute group occupies its own fork. At the end of the network, an FC layer is added to create a two-layer auxiliary network (AUX) that exploits attribute relationships at the score level. AUX receives the scores from the trained MCNN and yields the final prediction results. Hence, MCNN-AUX models facial attribute relationships in three ways: (1) sharing the lowest layers among all attributes, (2) assigning the higher layers to spatially related attributes, and (3) discovering score-level relationships with the AUX network. However, MCNN has a significant limitation: shared information at the low-level layers may vanish after network splitting.
One solution to this limitation is to learn shared and attribute-specific features jointly at the same level rather than in sequence. Therefore, BIB003 design a partially shared structure based on MCNN, i.e., PS-MCNN. It divides all 40 attributes into 4 groups according to attribute positions, i.e., an upper group, a middle group, a lower group, and a whole-image group. Note that the entire partition process is performed by hand, and this manual grouping strategy can be regarded as prior information based on human knowledge. The partially shared structure connects four attribute-specific networks (TSNets), corresponding to the four groups of attributes, and one shared network (SNet) that shares features among all the attributes. Specifically, each TSNet learns features for a specific group of attributes, while SNet shares informative features with each task. In terms of the connection between the SNet and the TSNets, each layer of SNet receives additional inputs from the previous layers of the TSNets, and the features from SNet are then fed into the next layers of the shared and attribute-specific networks. At any given level of PS-MCNN, both task-specific and shared features are captured in different branches. In addition, the shared features at a specific layer are closely related to the features of all of its previous layers. This connection mechanism contributes to more informative shared feature representations. Apart from attribute correlations, BIB004 introduce the concept of attribute heterogeneity. They note that individual attributes can be heterogeneous in terms of data type and scale, as well as semantic meaning. In terms of data type and scale, attributes can be grouped into ordinal versus nominal attributes; for instance, age and hair length are ordinal, whereas gender and race are nominal. The main difference is that ordinal attributes have an explicit ordering of their values, whereas nominal attributes generally have two or more classes with no intrinsic ordering among the categories. In terms of semantic meaning, attributes such as age, gender, and race describe the characteristics of the whole face, whereas pointy nose and big lips mainly describe the local characteristics of facial components. Therefore, these two categories of attributes are heterogeneous and can be grouped into holistic versus local attributes for predicting different parts of a face image. Taking both attribute correlation and heterogeneity into consideration, Han et al. design a deep multi-task learning (DMTL) CNN to learn features shared by all attributes and category-specific features for heterogeneous attribute categories. The shared feature learning naturally exploits the relationships among attributes to yield discriminative feature representations, whereas the category-specific feature learning fine-tunes the shared features towards the optimal estimation of each heterogeneous attribute category. Note that the existing multi-task learning methods make no distinction between low-level and mid-level features for different attributes, which is unreasonable because features at different levels of the network may exhibit different relationships. Besides, the above methods share features across tasks and split the layers that encode attribute-specific features using hand-designed network architectures.
Such manual exploration of the space of possible multi-task deep architectures is tedious and error-prone because this space can be combinatorially large. In light of this issue, BIB002 present an automatic approach for designing compact multi-task deep learning architectures, removing the need to hand-craft candidate multi-task architectures. The proposed network learns shared features in a fully adaptive way, where the core idea is to incrementally widen the current design in a layer-wise manner. During training, the adaptive network starts with a thin multi-layer network (VGG16) and dynamically widens via a top-down, layer-wise model widening strategy. At each layer, it decides which tasks share features, yielding the corresponding branches in that layer. Finally, the number of branches at the last layer of the model equals the number of attribute categories to be predicted. Consequently, this training scheme considers both task correlations and model complexity when making task grouping decisions at each layer of the network. Therefore, the fully adaptive network estimates multiple facial attributes in a dynamic branching procedure through its self-constructed architecture and feature sharing strategy. To summarize, holistic methods take entire face images as inputs and mainly work on exploring attribute relationships. Many methods design various network architectures to model the correlations among different attributes, and the key to this idea is learning shared features at low-level layers and attribute-specific features at high-level layers. Thus, holistic FAE methods need to address two main problems: assigning different layers to learn features with different characteristics, and learning more discriminative features through discovering attribute relationships under customized networks. What can be observed from contemporary research is that attribute grouping by hand has become a prevalent scheme in holistic FAE. We expect automatic attribute grouping strategies to attract more attention in future work; such strategies should adaptively learn proper group partition criteria and adjust them according to the model's performance during training.
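The following minimal PyTorch sketch illustrates the common holistic design of a shared low-level trunk with one high-level branch per hand-defined attribute group, in the spirit of MCNN-style grouping; the layer sizes and the group partition are assumptions for illustration, not the configuration of any specific cited model.

```python
import torch
import torch.nn as nn

# Hypothetical attribute grouping (by facial region): number of attributes per group.
GROUPS = {"eyes": 5, "nose": 3, "mouth": 6, "global": 8}


class GroupedMultiTaskFAE(nn.Module):
    """Minimal holistic multi-task sketch: low-level layers shared by all
    attributes, one high-level branch per hand-defined attribute group."""

    def __init__(self, groups=GROUPS):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.branches = nn.ModuleDict({
            name: nn.Sequential(
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, n_attrs),
            )
            for name, n_attrs in groups.items()
        })

    def forward(self, x):
        shared = self.shared(x)
        # Each group branch predicts logits only for its own attributes.
        return {name: branch(shared) for name, branch in self.branches.items()}


model = GroupedMultiTaskFAE()
out = model(torch.randn(2, 3, 128, 128))
print({name: logits.shape for name, logits in out.items()})
```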
A Survey of Deep Facial Attribute Analysis <s> Model-Based Deep FAM Methods <s> This paper presents a Deep convolutional network model for Identity-Aware Transfer (DIAT) of facial attributes. Given the source input image and the reference attribute, DIAT aims to generate a facial image that owns the reference attribute as well as keeps the same or similar identity to the input image. In general, our model consists of a mask network and an attribute transform network which work in synergy to generate a photo-realistic facial image with the reference attribute. Considering that the reference attribute may be only related to some parts of the image, the mask network is introduced to avoid the incorrect editing on attribute irrelevant region. Then the estimated mask is adopted to combine the input and transformed image for producing the transfer result. For joint training of transform network and mask network, we incorporate the adversarial attribute loss, identity-aware adaptive perceptual loss, and VGG-FACE based identity loss. Furthermore, a denoising network is presented to serve for perceptual regularization to suppress the artifacts in transfer result, while an attribute ratio regularization is introduced to constrain the size of attribute relevant region. Our DIAT can provide a unified solution for several representative facial attribute transfer tasks, e.g., expression transfer, accessory removal, age progression, and gender transfer, and can be extended for other face enhancement tasks such as face hallucination. The experimental results validate the effectiveness of the proposed method. Even for the identity-related attribute (e.g., gender), our DIAT can obtain visually impressive results by changing the attribute while retaining most identity-aware features. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Model-Based Deep FAM Methods <s> Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Model-Based Deep FAM Methods <s> Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. 
In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas. <s> BIB003
Model-based methods map an image from the source domain to the target domain and then distinguish the generated target distribution from the real target distribution under the constraint of an adversarial loss. Therefore, model-based methods are highly task-specific and excel at producing photorealistic facial attribute images. BIB001 first propose the DIAT model, which follows the standard paradigm of model-based methods. DIAT takes unedited images as inputs and generates target facial images with an adversarial loss and an identity loss. The former ensures that the desired attributes are obtained, and the latter encourages the generated images to have the same or a similar identity to the input images. BIB002 add an inverse mapping from the target domain to the source domain based on DIAT and propose CycleGAN, where the two mappings are coupled with a cycle consistency loss. This design is based on the intuition that if we translate an image from one domain to the other and back again, we should arrive back where we started. Building on CycleGAN, the UNIT model maps a pair of corresponding images in the source and target domains to the same latent representation in a shared latent space. Each branch from one of the domains to the latent space performs an operation analogous to CycleGAN. However, all of the above methods operate directly on the entire face image, which means that when a certain attribute is edited, other relevant attributes may also change uncontrollably. Therefore, to modify attribute-specific face areas while keeping the other parts unchanged, BIB003 propose learning residual images, defined as the difference between images before and after attribute manipulation. In this way, face attributes can be manipulated efficiently with modest pixel modification over the attribute-specific regions. They design ResGAN, which consists of two image transformation networks and a discriminative network, to learn residual representations of the desired attributes. Specifically, the two image transformation networks, denoted as G_0 and G_1, take two images with opposite attributes as inputs in turn and perform inverse attribute manipulation operations to output residual images. The obtained residual images are then added to the original input images, yielding the final outputs with manipulated attributes. In the end, all these images, i.e., the two original input images and the two images from the transformation networks, are fed into the discriminative network, which classifies them into three categories: images generated by the two transformation networks, original images with positive attribute labels, and original images with negative attribute labels. Note that G_0 and G_1 constitute a dual learning cycle. Given an image with a negative attribute label, G_0 synthesizes the desired attribute, and G_1 removes the attribute that G_0 has generated. G_1's output is then expected to have the same attribute label as the original given image. The experiments demonstrate that such a dual learning process is beneficial for generating high-quality images, and that residual images encourage the manipulation process to focus on the local areas where attributes show up. Therefore, ResGAN is able to generate appealing results, especially for local facial attributes. However, model-based methods can only edit a single attribute per training process, with a corresponding set of model parameters.
The whole manipulation is supervised only by discriminating real from generated images with the adversarial loss. That means that when multiple attributes need to be changed, multiple training processes are inevitable, resulting in significant time and computation costs. In contrast, manipulating facial attributes with extra conditions is a more prevalent approach, since multiple attributes can be edited within a single training process. Hence, extra condition-based methods, which take extra attribute vectors or reference exemplars as input conditions, attract more attention from researchers. Table 4 summarizes state-of-the-art facial attribute manipulation approaches.
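To make the residual-image idea above concrete, the following is a minimal sketch (in PyTorch), not the original ResGAN implementation: the layer sizes, the loss weights, and the generator names (g_add, g_remove) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """Predicts a residual that is added to the input, so edits stay local."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        residual = self.net(x)           # modification map over the image
        return x + residual, residual    # edited image = input + residual

# Dual generators: g_add synthesizes the attribute, g_remove removes it again.
g_add, g_remove = ResidualGenerator(), ResidualGenerator()
x_neg = torch.randn(4, 3, 64, 64)        # images without the attribute
x_fake_pos, r0 = g_add(x_neg)            # add the attribute
x_rec_neg, r1 = g_remove(x_fake_pos)     # remove it again (dual learning cycle)
# A dual/reconstruction term (combined with the three-way adversarial loss in the
# paper) would push x_rec_neg toward x_neg and keep the residuals sparse.
dual_loss = (x_rec_neg - x_neg).abs().mean() + 0.1 * (r0.abs().mean() + r1.abs().mean())
```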
A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> This paper investigates a problem of generating images from visual attributes. Given the prevalent research for image recognition, the conditional image generation problem is relatively under-explored due to the challenges of learning a good generative model and handling rendering uncertainties in images. To address this, we propose a variety of attribute-conditioned deep variational auto-encoders that enjoy both effective representation learning and Bayesian modeling, from which images can be generated from specified attributes and sampled latent factors. We experiment with natural face images and demonstrate that the proposed models are capable of generating realistic faces with diverse appearance. We further evaluate the proposed models by performing attribute-conditioned image progression, transfer and retrieval. In particular, our generation method achieves superior performance in the retrieval experiment against traditional nearest-neighbor-based methods both qualitatively and quantitatively. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. 
The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> If I provide you a face image of mine (without telling you the actual age when I took the picture) and a large amount of face images that I crawled (containing labeled faces of different ages but not necessarily paired), can you show me what I would look like when I am 80 or what I was like when I was 5? The answer is probably a No. Most existing face aging works attempt to learn the transformation between age groups and thus would require the paired samples as well as the labeled query image. In this paper, we look at the problem from a generative modeling perspective such that no paired samples is required. In addition, given an unlabeled image, the generative model can directly produce the image with desired age attribute. We propose a conditional adversarial autoencoder (CAAE) that learns a face manifold, traversing on which smooth age progression and regression can be realized simultaneously. In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and then the vector is projected to the face manifold conditional on age through a deconvolutional generator. The latent vector preserves personalized face features (i.e., personality) and the age condition controls progression vs. regression. Two adversarial networks are imposed on the encoder and generator, respectively, forcing to generate more photo-realistic faces. Experimental results demonstrate the appealing performance and flexibility of the proposed framework by comparing with the state-of-the-art and ground truth. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space. As a result, after training, our model can generate different realistic versions of an input image by varying the attribute values. By using continuous attribute values, we can choose how much a specific attribute is perceivable in the generated image. 
This property could allow for applications where users can modify an image using sliding knobs, like faders on a mixing console, to change the facial expression of a portrait, or to update the color of some objects. Compared to the state-of-the-art which mostly relies on training adversarial networks in pixel space by altering attribute values at train time, our approach results in much simpler training schemes and nicely scales to multiple attributes. We present evidence that our model can significantly change the perceived value of the attributes while preserving the naturalness of images. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> Object Transfiguration replaces an object in an image with another object from a second image. For example it can perform tasks like "putting exactly those eyeglasses from image A on the nose of the person in image B". Usage of exemplar images allows more precise specification of desired modifications and improves the diversity of conditional image generation. However, previous methods that rely on feature space operations, require paired data and/or appearance models for training or disentangling objects from background. In this work, we propose a model that can learn object transfiguration from two unpaired sets of images: one set containing images that "have" that kind of object, and the other set being the opposite, with the mild constraint that the objects be located approximately at the same place. For example, the training data can be one set of reference face images that have eyeglasses, and another set of images that have not, both of which spatially aligned by face landmarks. Despite the weak 0/1 labels, our model can learn an "eyeglasses" subspace that contain multiple representatives of different types of glasses. Consequently, we can perform fine-grained control of generated images, like swapping the glasses in two images by swapping the projected components in the "eyeglasses" subspace, to create novel images of people wearing eyeglasses. ::: Overall, our deterministic generative model learns disentangled attribute subspaces from weakly labeled data by adversarial training. Experiments on CelebA and Multi-PIE datasets validate the effectiveness of the proposed model on real world data, in generating images with specified eyeglasses, smiling, hair styles, and lighting conditions etc. The code is available online. <s> BIB008 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> Face attribute editing aims at editing the face image with the given attribute. Most existing works employ Generative Adversarial Network (GAN) to operate face attribute editing. However, these methods inevitably change the attribute-irrelevant regions, as shown in Fig. 1. Therefore, we introduce the spatial attention mechanism into GAN framework (referred to as SaGAN), to only alter the attribute-specific region and keep the rest unchanged. Our approach SaGAN consists of a generator and a discriminator. The generator contains an attribute manipulation network (AMN) to edit the face image, and a spatial attention network (SAN) to localize the attribute-specific region which restricts the alternation of AMN within this region. The discriminator endeavors to distinguish the generated images from the real ones, and classify the face attribute. Experiments demonstrate that our approach can achieve promising visual results, and keep those attribute-irrelevant regions unchanged. 
Besides, our approach can benefit the face recognition by data augmentation. <s> BIB009 </s> A Survey of Deep Facial Attribute Analysis <s> Extra Condition-Based Deep FAM Methods <s> Recent studies on face attribute transfer have achieved great success. A lot of models are able to transfer face attributes with an input image. However, they suffer from three limitations: (1) incapability of generating image by exemplars; (2) being unable to transfer multiple face attributes simultaneously; (3) low quality of generated images, such as low-resolution or artifacts. To address these limitations, we propose a novel model which receives two images of opposite attributes as inputs. Our model can transfer exactly the same type of attributes from one image to another by exchanging certain part of their encodings. All the attributes are encoded in a disentangled manner in the latent space, which enables us to manipulate several attributes simultaneously. Besides, our model learns the residual images so as to facilitate training on higher resolution images. With the help of multi-scale discriminators for adversarial training, it can even generate high-quality images with finer details and less artifacts. We demonstrate the effectiveness of our model on overcoming the above three limitations by comparing with other methods on the CelebA face database. A pytorch implementation is available at https://github.com/Prinsphield/ELEGANT. <s> BIB010
Deep FAM methods conditioned on extra attribute vectors alter desired attributes according to given conditional attribute vectors, such as one-hot vectors indicating the presence of the corresponding facial attributes. During training, the conditional vectors are concatenated with the to-be-manipulated images in latent encoding spaces. Moreover, conditional generative frameworks dominate the model construction of deep FAM, and various efforts have been made to edit facial attributes based on autoencoders (AEs), VAEs, and GANs. BIB005 propose a conditional adversarial autoencoder (CAAE) for age progression and regression. CAAE first maps a face image to a latent vector through an encoder. Then, the obtained latent vector, concatenated with an age label vector, is fed into a generator for learning a face manifold. The age label condition controls the change of age, while the latent vector ensures that the personalized face features are preserved. BIB002 introduce a conditional variational autoencoder (CVAE) to generate images from visual attributes. CVAE disentangles an image into foreground and background parts, where each part is combined with the defined attribute vector. Consequently, the quality of generated complex images can be significantly improved, since the foreground areas attract more attention. BIB004 propose an invertible conditional GAN (IcGAN) to edit multiple facial attributes with determined specific representations of generated images. Given an input image, IcGAN first learns a representation consisting of a latent variable and a conditional vector via an encoder. Then, IcGAN modifies the latent variable and the conditional vector and regenerates the input image through the conditional GAN BIB001. In this way, by changing the encoded conditional vector, IcGAN can achieve arbitrary attribute manipulation. Apart from autoencoders, VAEs, GANs, and their variants, BIB003 combine the VAE and the GAN into a unified generative model, VAE/GAN. In this model, the GAN discriminator learns feature representations that are taken as the basis of the VAE reconstruction objective, which means that the VAE decoder and the GAN generator are collapsed into one by sharing parameters and joint training. Hence, the model consists of three parts: the encoder, the decoder, and the discriminator. By concatenating attribute vectors with features from these three components, VAE/GAN performs better than either plain VAEs or GANs. Recently, treating multi-attribute manipulation as a domain transfer task, BIB006 propose StarGAN to learn mappings among multiple domains with only a single generator and a single discriminator trained on all domains. Each domain corresponds to an attribute, and the domain information can be denoted by one-hot vectors. Specifically, the discriminator both distinguishes real from fake images and classifies the real images into their corresponding domains. The generator is then trained to translate an input image into an output image conditioned on a randomly generated target domain label vector. As a result, the generator is capable of translating the input image flexibly. In summary, StarGAN takes the domain labels as extra supervision conditions, which makes it possible to incorporate multiple datasets containing different types of labels simultaneously. However, all the above methods edit multiple facial attributes simultaneously by discretely changing multiple values of the attribute vectors; none of them can alter facial attributes continuously.
In light of this, BIB007 present the Fader network, which uses continuous attribute values to modify attributes through sliding knobs, like faders on a mixing console. For example, one can gradually change the value of the gender attribute to control the transition from male to female. The Fader network is composed of three components: an encoder, a decoder, and a discriminator. With an image-attribute pair as input, the Fader network first maps the image to a latent representation with its encoder and predicts the attribute vector with its discriminator. Then, the decoder reconstructs the image from the learned latent representation and the attribute vector. During testing, the discriminator is discarded, and images with various attributes can be generated by varying the attribute values. Note that all the above methods edit attributes over the whole face image; hence, attribute-irrelevant details might also be changed. To address this issue, BIB009 introduce the spatial attention mechanism into GANs to locate attribute-relevant areas and propose SaGAN for manipulating facial attributes more precisely. SaGAN follows the standard adversarial learning paradigm, where a generator and a discriminator play a min-max game. To keep attribute-irrelevant regions unchanged, SaGAN's generator consists of an attribute manipulation network (AMN) and a spatial attention network (SAN). Given a face image, SAN learns a spatial attention mask in which attribute-relevant regions have non-zero attention values. In this way, the region where the desired attribute appears can be located. Then, AMN takes the face image and the attribute vector as inputs, yielding an image with the desired attribute in the specific region located by SAN. Rather than taking attribute vectors as extra conditions, deep FAM methods conditioned on reference exemplars exchange specific attributes between reference exemplars and the to-be-manipulated images within an image-to-image translation framework. Note that these reference images do not need to have the same identity as the original to-be-manipulated images, and all the generated attributes are present in the real world. In this way, more specific details appearing in the reference images can be exploited to generate more realistic images. BIB008 first design GeneGAN to achieve basic reference exemplar-based facial attribute manipulation. Given an image, it is encoded into two complementary codes: attribute-specific codes and attribute-irrelevant codes. By exchanging the attribute-specific codes and preserving the attribute-irrelevant codes, desired attributes can be transferred from the reference exemplar image to the to-be-manipulated image. Considering that GeneGAN only transfers one attribute in a single manipulation process, BIB010 construct the ELEGANT model, which exchanges latent encodings to transfer multiple facial attributes by exemplars. Specifically, since all the attributes are encoded in the latent space in a disentangled manner, one can exchange a specific part of the encodings and manipulate several attributes simultaneously. Besides, residual image learning and multi-scale discriminators for adversarial training enable the proposed model to generate high-quality images with finer details and fewer artifacts. At the beginning of training, ELEGANT receives two sets of training images as inputs, i.e., a positive set and a negative set, which do not need to be paired. Then, an encoder is utilized to obtain the latent encodings of both positive and negative images.
Next, if the i-th attribute is to be transferred, the only step is to exchange the i-th part of the latent encodings of the positive and negative images. Once the encoding step is finished, ELEGANT constructs an image generator that consists of a decoder and the encoder from the previous step to decode the recombined latent encodings into images. Finally, two discriminators with identical network structures operate at different scales to obtain the manipulated attribute images.
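A minimal sketch of this latent-encoding exchange is shown below (in PyTorch). It is not the original ELEGANT code: the per-attribute slice size is a hypothetical assumption, and the encoder, decoder, and discriminators are omitted.

```python
import torch

def swap_attribute_slice(z_pos, z_neg, attr_idx, slice_size):
    """Swap the attr_idx-th slice of two latent encodings of shape (B, num_attrs * slice_size)."""
    start, end = attr_idx * slice_size, (attr_idx + 1) * slice_size
    z_pos_new, z_neg_new = z_pos.clone(), z_neg.clone()
    z_pos_new[:, start:end] = z_neg[:, start:end]
    z_neg_new[:, start:end] = z_pos[:, start:end]
    return z_pos_new, z_neg_new

# Example with 4 attributes, each encoded by a 32-dimensional slice (hypothetical sizes).
z_a = torch.randn(2, 4 * 32)   # encoding of an image that has attribute i
z_b = torch.randn(2, 4 * 32)   # encoding of an image that lacks attribute i
z_a_new, z_b_new = swap_attribute_slice(z_a, z_b, attr_idx=1, slice_size=32)
# A decoder (omitted here) would map the recombined encodings back to images.
```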
A Survey of Deep Facial Attribute Analysis <s> Imbalance Learning in Facial Attribute Analysis <s> Data in vision domain often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances. To mitigate this issue, contemporary classification methods based on deep convolutional neural network (CNN) typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that more discriminative deep representation can be learned by enforcing a deep network to maintain both intercluster and inter-class margins. This tighter constraint effectively reduces the class imbalance inherent in the local data neighborhood. We show that the margins can be easily deployed in standard deep learning framework through quintuplet instance sampling and the associated triple-header hinge loss. The representation learned by our approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high-and low-level vision classification tasks that exhibit imbalanced class distribution. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Imbalance Learning in Facial Attribute Analysis <s> Recognising detailed facial or clothing attributes in images of people is a challenging task for computer vision, especially when the training data are both in very large scale and extremely imbalanced among different attribute classes. To address this problem, we formulate a novel scheme for batch incremental hard sample mining of minority attribute classes from imbalanced large scale training data. We develop an end-to-end deep learning framework capable of avoiding the dominant effect of majority classes by discovering sparsely sampled boundaries of minority classes. This is made possible by introducing a Class Rectification Loss (CRL) regularising algorithm. We demonstrate the advantages and scalability of CRL over existing state-of-the-art attribute recognition and imbalanced data learning models on two large scale imbalanced benchmark datasets, the CelebA facial attribute dataset and the X-Domain clothing attribute dataset. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Imbalance Learning in Facial Attribute Analysis <s> Data for face analysis often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances. To mitigate this issue, contemporary deep learning methods typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that more discriminative deep representation can be learned by enforcing a deep network to maintain inter-cluster margins both within and between classes. This tight constraint effectively reduces the class imbalance inherent in the local data neighborhood, thus carving much more balanced class boundaries locally. We show that it is easy to deploy angular margins between the cluster distributions on a hypersphere manifold. 
Such learned Cluster-based Large Margin Local Embedding (CLMLE), when combined with a simple k-nearest cluster algorithm, shows significant improvements in accuracy over existing methods on both face recognition and face attribute prediction tasks that exhibit imbalanced class distribution. <s> BIB003
Face attribute data exhibit an imbalanced distribution across different categories. This is normally called the class-imbalance issue, which means that in a dataset, some facial attribute classes have a much larger number of samples than others, corresponding to the majority and minority classes, respectively. For example, the largest imbalance ratio between the minority and majority attributes in the CelebA dataset is 1:43. Learning from such imbalanced facial attribute labels can lead to biased classifiers, which tend to favor the majority and fail to discriminate the features learned from the minority. In extreme cases, the learned classifiers can hardly identify the minority samples at all. One typical scheme to solve this problem is to use an assumed balanced target distribution to guide the imbalanced source distribution by weighting the objective function. MOON weights the back-propagation error in a cost-sensitive way. A probability is assigned to each class by counting the relative numbers of positive and negative samples in both the source and target domains. These probabilities can then be used as weights to incorporate the distribution discrepancy into the loss function. However, MOON overlooks the label imbalance within each batch, which means that the batch-wise training scheme of deep networks is not fully utilized. In light of this, AttCNN proposes a selective learning algorithm to address the distribution discrepancy at the batch level. If the original batch in the source domain has more positive samples and fewer negative samples than the target distribution, the selective learning algorithm resamples a random subset from the positive instances; meanwhile, it proportionally weights the negative counterparts to match the target distribution. By aligning the distributions between the source and target domains in each batch, AttCNN yields state-of-the-art attribute prediction performance under class imbalance. Another frequently used scheme for class-imbalance learning in deep FAE methods is data resampling. BIB001 adopt a resampling strategy, namely large margin local embedding (LMLE), and formulate a quintuplet sampling scheme associated with a triple-header loss. LMLE enforces the preservation of locality across clusters and the discrimination between classes. Then, a fast cluster-wise kNN algorithm is executed, followed by a local large-margin decision. In this way, LMLE learns embedded features that remain discriminative despite local class imbalance. On this basis, BIB003 further propose a rectified version of LMLE, i.e., cluster-based large margin local embedding (CLMLE). CLMLE designs a loss that preserves the inter-cluster margins both within and between classes. In contrast to LMLE, which enforces the Euclidean distance on a hypersphere manifold, CLMLE adopts angular margins enforced between the involved cluster distributions and uses spherical k-means to obtain K clusters of the same size, which contributes to better performance. On the other hand, BIB002 adopt an online regularization strategy to address the class-imbalance issue for facial attributes. In detail, they exploit batch-wise incremental hard mining on minority attribute classes and formulate a class rectification loss (CRL) based on the mined minority examples. For the hard mining strategy, they first define the profiles of hard positives and hard negatives for the minority classes.
Then, according to the predefined profiles and the current model, they select the K hard positives (or hard negatives) as the samples with the bottom-K (or top-K) scores on the minority class for a specific attribute. This process is executed at the batch level and incrementally over subsequent batches. Such batch-wise incremental hard mining gives CRL a strong class-imbalance learning ability and satisfactory attribute estimation performance.
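As a concrete illustration of the cost-sensitive weighting idea discussed above, the following is a minimal sketch (in PyTorch), not the exact MOON or AttCNN formulation: the weighting rule, function name, and target distribution are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def imbalance_weighted_bce(logits, targets, pos_frac, target_frac=0.5):
    """logits, targets: (B, num_attrs); pos_frac: (num_attrs,) positive rate in the source data."""
    pos_frac = pos_frac.clamp(1e-6, 1 - 1e-6)
    # Weight positives/negatives so the effective label distribution matches target_frac.
    w_pos = target_frac / pos_frac
    w_neg = (1 - target_frac) / (1 - pos_frac)
    weights = targets * w_pos + (1 - targets) * w_neg
    return F.binary_cross_entropy_with_logits(logits, targets, weight=weights)

# Example: 3 attributes whose positive rates in the training set are 2%, 50%, and 90%.
logits = torch.randn(8, 3)
targets = torch.randint(0, 2, (8, 3)).float()
loss = imbalance_weighted_bce(logits, targets, pos_frac=torch.tensor([0.02, 0.5, 0.9]))
```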
A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> Human-nameable visual “attributes” can benefit various recognition tasks. However, existing techniques restrict these properties to categorical labels (for example, a person is ‘smiling’ or not, a scene is ‘dry’ or not), and thus fail to capture more general semantic relationships. We propose to model relative attributes. Given training data stating how object/scene categories relate according to different attributes, we learn a ranking function per attribute. The learned ranking functions predict the relative strength of each property in novel images. We then build a generative model over the joint space of attribute ranking outputs, and propose a novel form of zero-shot learning in which the supervisor relates the unseen object category to previously seen objects via attributes (for example, ‘bears are furrier than giraffes’). We further show how the proposed relative attributes enable richer textual descriptions for new images, which in practice are more precise for human interpretation. We demonstrate the approach on datasets of faces and natural scenes, and show its clear advantages over traditional binary attribute prediction for these new tasks. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> We propose a method for recognizing attributes, such as the gender, hair style and types of clothes of people under large variation in viewpoint, pose, articulation and occlusion typical of personal photo album images. Robust attribute classifiers under such conditions must be invariant to pose, but inferring the pose in itself is a challenging problem. We use a part-based approach based on poselets. Our parts implicitly decompose the aspect (the pose and viewpoint). 
We train attribute classifiers for each such aspect and we combine them together in a discriminative model. We propose a new dataset of 8000 people with annotated attributes. Our method performs very well on this dataset, significantly outperforming a baseline built on the spatial pyramid match kernel method. On gender recognition we outperform a commercial face recognition system. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> Effective reduction of false alarms in large-scale video surveillance is rather challenging, especially for applications where abnormal events of interest rarely occur, such as abandoned object detection. We develop an approach to prioritize alerts by ranking them, and demonstrate its great effectiveness in reducing false positives while keeping good detection accuracy. Our approach benefits from a novel representation of abandoned object alerts by relative attributes, namely static ness, foreground ness and abandonment. The relative strengths of these attributes are quantified using a ranking function[19] learnt on suitably designed low-level spatial and temporal features. These attributes of varying strengths are not only powerful in distinguishing abandoned objects from false alarms such as people and light artifacts, but also computationally efficient for large-scale deployment. With these features, we apply a linear ranking algorithm to sort alerts according to their relevance to the end-user. We test the effectiveness of our approach on both public data sets and large ones collected from the real world. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> The notion of relative attributes as introduced by Parikh and Grauman (ICCV, 2011) provides an appealing way of comparing two images based on their visual properties (or attributes) such as "smiling" for face images, "naturalness" for outdoor images, etc. For learning such attributes, a Ranking SVM based formulation was proposed that uses globally represented pairs of annotated images. In this paper, we extend this idea towards learning relative attributes using local parts that are shared across categories. First, instead of using a global representation, we introduce a part-based representation combining a pair of images that specifically compares corresponding parts. Then, with each part we associate a locally adaptive "significance-coefficient" that represents its discriminative ability with respect to a particular attribute. For each attribute, the significance-coefficients are learned simultaneously with a max-margin ranking model in an iterative manner. Compared to the baseline method, the new method is shown to achieve significant improvement in relative attribute prediction accuracy. Additionally, it is also shown to improve relative feedback based interactive image search. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. 
In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> We present a weakly-supervised approach that discovers the spatial extent of relative attributes, given only pairs of ordered images. In contrast to traditional approaches that use global appearance features or rely on keypoint detectors, our goal is to automatically discover the image regions that are relevant to the attribute, even when the attribute's appearance changes drastically across its attribute spectrum. To accomplish this, we first develop a novel formulation that combines a detector with local smoothness to discover a set of coherent visual chains across the image collection. We then introduce an efficient way to generate additional chains anchored on the initial discovered ones. Finally, we automatically identify the most relevant visual chains, and create an ensemble image representation to model the attribute. Through extensive experiments, we demonstrate our method's promise relative to several baselines in modeling relative attributes. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> We propose an end-to-end deep convolutional network to simultaneously localize and rank relative visual attributes, given only weakly-supervised pairwise image comparisons. Unlike previous methods, our network jointly learns the attribute’s features, localization, and ranker. The localization module of our network discovers the most informative image region for the attribute, which is then used by the ranking module to learn a ranking model of the attribute. Our end-to-end framework also significantly speeds up processing and is much faster than previous methods. We show state-of-the-art ranking results on various relative attribute datasets, and our qualitative localization results clearly demonstrate our network’s ability to learn meaningful image patches. <s> BIB008 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> Face attribute prediction in the wild is important for many facial analysis applications, yet it is very challenging due to ubiquitous face variations. In this paper, we address face attribute prediction in the wild by proposing a novel method, lAndmark Free Face AttrIbute pRediction (AFFAIR). 
Unlike traditional face attribute prediction methods that require facial landmark detection and face alignment, AFFAIR uses an end-to-end learning pipeline to jointly learn a hierarchy of spatial transformations that optimize facial attribute prediction with no reliance on landmark annotations or pre-trained landmark detectors. AFFAIR achieves this through simultaneously: 1) learning a global transformation which effectively alleviates negative effect of global face variation for the following attribute prediction tailored for each face; 2) locating the most relevant facial part for attribute prediction; and 3) aggregating the global and local features for robust attribute prediction. Within AFFAIR, a new competitive learning strategy is developed that effectively enhances global transformation learning for better attribute prediction. We show that with zero information about landmarks, AFFAIR achieves the state-of-the-art performance on three face attribute prediction benchmarks, which simultaneously learns the face-level transformation and attribute-level localization within a unified framework. <s> BIB009 </s> A Survey of Deep Facial Attribute Analysis <s> Relative Attribute Ranking in Facial Attribute Analysis <s> A sizable body of work on relative attributes provides evidence that relating pairs of images along a continuum of strength pertaining to a visual attribute yields improvements in a variety of vision tasks. In this paper, we show how emerging ideas in graph neural networks can yield a solution to various problems that broadly fall under relative attribute learning. Our main idea is the observation that relative attribute learning naturally benefits from exploiting the graph of dependencies among the different relative attributes of images, especially when only partial ordering is provided at training time. We use message passing to perform end to end learning of the image representations, their relationships as well as the interplay between different attributes. Our experiments show that this simple framework is effective in achieving competitive accuracy with specialized methods for both relative attribute learning and binary attribute prediction, while relaxing the requirements on the training data and/or the number of parameters, or both. <s> BIB010
Relative attribute learning aims to formulate functions that rank the relative strength of attributes, and it can be widely applied to object detection BIB004, fine-grained visual comparison, and facial attribute estimation BIB009. The general insight in this line of work is to learn global image representations in a unified framework BIB001 BIB002 or to capture part-based representations via pretrained part detectors BIB003 BIB005 BIB006. However, the former ignores the localization of attributes, and the latter ignores the correlations among attributes; both may degrade the performance of relative attribute ranking. BIB007 first propose automatically discovering the spatial extent of relevant attributes by establishing a set of visual chains indicating local and transitive connections. In this way, the locations of attributes can be learned automatically. Although no pretrained detectors are used, the optimization pipeline still contains several independent modules, resulting in a suboptimal solution. To tackle this issue, BIB008 construct an end-to-end deep CNN for simultaneously learning the features, localizations, and ranks of facial attributes from weakly supervised pair-wise images. Specifically, given pairs of training images ordered according to the relative strength of an attribute, a Siamese architecture receives these images, where each branch takes one image of the pair as input. Each branch contains two components: the spatial transformer network (STN), which generates image transformation parameters for localizing the most relevant regions, and the ranker network (RN), which outputs the predicted attribute scores. Qualitative results on the LFW-10 dataset show excellent performance in attribute region localization and ranking accuracy. To model the pair-wise relationships between images for multiple attributes, BIB010 construct a graph model, where each node represents an image and edges indicate the relationships between images and attributes, as well as among images. The overall framework consists of two components: a CNN for extracting primary features of the node images, and a graph neural network (GNN) for learning edge features and performing the subsequent updates. Thus, the relationships among all the images are modeled by a fully-connected graph over the learned CNN features. Then, a gated recurrent unit (GRU) takes each node and its corresponding message information as inputs and outputs the updated node. As a result, the correlations among attributes can be learned by using information from the neighbors of each node, as well as by updating its state based on the previous state.
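The pair-wise supervision used in these methods can be illustrated with a minimal ranking sketch (in PyTorch). This is a simplified Siamese setup with a shared scoring network, not the full STN-plus-ranker or graph-based architectures described above; the network shape and margin are assumptions.

```python
import torch
import torch.nn as nn

# Shared scoring network applied to both images of an ordered pair.
scorer = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
                       nn.Linear(256, 1))          # predicts attribute strength
rank_loss = nn.MarginRankingLoss(margin=1.0)

img_strong = torch.randn(8, 3, 64, 64)   # images with the stronger attribute
img_weak = torch.randn(8, 3, 64, 64)     # images with the weaker attribute
s1, s2 = scorer(img_strong), scorer(img_weak)
# target = +1 means s1 should be ranked higher than s2.
loss = rank_loss(s1, s2, torch.ones_like(s1))
```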
A Survey of Deep Facial Attribute Analysis <s> Adversarial Robustness in Facial Attribute Analysis <s> Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. ::: First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. ::: Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Adversarial Robustness in Facial Attribute Analysis <s> Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Adversarial Robustness in Facial Attribute Analysis <s> Facial attributes are emerging soft biometrics that have the potential to reject non-matches, for example, based on mismatching gender. To be usable in stand-alone systems, facial attributes must be extracted from images automatically and reliably. In this paper, we propose a simple yet effective solution for automatic facial attribute extraction by training a deep convolutional neural network (DCNN) for each facial attribute separately, without using any pre-training or dataset augmentation, and we obtain new state-of-the-art facial attribute classification results on the CelebA benchmark. To test the stability of the networks, we generated adversarial images -- formed by adding imperceptible non-random perturbations to original inputs which result in classification errors -- via a novel fast flipping attribute (FFA) technique. We show that FFA generates more adversarial examples than other related algorithms, and that DCNNs for certain attributes are generally robust to adversarial inputs, while DCNNs for other attributes are not. 
This result is surprising because no DCNNs tested to date have exhibited robustness to adversarial images without explicit augmentation in the training procedure to account for adversarial examples. Finally, we introduce the concept of natural adversarial samples, i.e., images that are misclassified but can be easily turned into correctly classified images by applying small perturbations. We demonstrate that natural adversarial samples commonly occur, even within the training set, and show that many of these images remain misclassified even with additional training epochs. This phenomenon is surprising because correcting the misclassification, particularly when guided by training data, should require only a small adjustment to the DCNN parameters. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Adversarial Robustness in Facial Attribute Analysis <s> A face image not only provides details about the identity of a subject but also reveals several attributes such as gender, race, sexual orientation, and age. Advancements in machine learning algorithms and popularity of sharing images on the World Wide Web, including social media websites, have increased the scope of data analytics and information profiling from photo collections. This poses a serious privacy threat for individuals who do not want to be profiled. This research presents a novel algorithm for anonymizing selective attributes which an individual does not want to share without affecting the visual quality of images. Using the proposed algorithm, a user can select single or multiple attributes to be surpassed while preserving identity information and visual content. The proposed adversarial perturbation based algorithm embeds imperceptible noise in an image such that attribute prediction algorithm for the selected attribute yields incorrect classification result, thereby preserving the information according to user's choice. Experiments on three popular databases i.e. MUCT, LFWcrop, and CelebA show that the proposed algorithm not only anonymizes k-attributes, but also preserves image quality and identity information. <s> BIB004
Adversarial images, generated by adding slight artificial perturbations that exploit the network topology, training process, and hyperparameters, can be used as inputs to deep facial attribute analysis models. Studying why models that classify the original inputs correctly misclassify such adversarial inputs helps analyze and improve model robustness. BIB001 first show that neural networks can be induced to misclassify an image by carefully chosen perturbations that are imperceptible to humans. Following this work, adversarial images have attracted increasing attention from researchers. Subsequent work induces small artificial perturbations on existing misclassified inputs to correct the results of attribute classification. Specifically, the adversarial images are generated over a random subset of the CelebA dataset via the fast flipping attribute (FFA) technique. The FFA algorithm leverages back-propagation of the Euclidean loss to generate adversarial images and, during this process, flips the binary decision of the deep network without requiring ground-truth labels. The robustness analysis shows that FFA generates more adversarial examples than the existing fast gradient sign (FGS) method BIB002 on separately trained attribute networks BIB003. Moreover, the FFA algorithm is extended to an iterative version, namely iterative FFA, to make it applicable to multi-objective networks, e.g., MOON. The experiments demonstrate that the adversarial examples produced by iterative FFA are of higher quality than those of its base version, and that iterative FFA can flip attribute prediction results more frequently. Despite the promising performance of these two types of FFA, several attributes still cannot be flipped on separately trained deep models. In addition, attribute anonymity, which conceals specific facial attributes that an individual does not want to share, is another task related to adversarial robustness. When hiding the corresponding attributes, the remaining attributes should be maintained and the visual quality of the images should not be damaged. BIB004 achieve this basic target by adding adversarial perturbations to an attribute preservation set and an attribute suppression set. Consequently, the prediction of a specific attribute can be shifted from its true category to a different target category. In summary, the study of adversarial robustness contributes to improving the representational stability of current deep FAE algorithms. Additionally, driven by the threat of adversarial examples, research on the robustness of deep facial attribute analysis models is moving in a promising direction.
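To make the perturbation idea concrete, the following is a minimal sketch (in PyTorch) of the fast gradient sign (FGS) baseline mentioned above, applied to a toy binary attribute classifier. It is not the FFA algorithm; the model architecture, epsilon value, and function name are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgs_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (pixel values kept in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, aiming to flip the attribute decision.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()

# Example with a toy single-attribute classifier (hypothetical architecture).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
x = torch.rand(4, 3, 64, 64)
y = torch.randint(0, 2, (4, 1)).float()
x_adv = fgs_perturb(model, x, y)
```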
A Survey of Deep Facial Attribute Analysis <s> Data <s> The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Data <s> This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Data <s> In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs a set of latent variables, each of which corresponds to an image frame in a video. The image generator transforms a set of such latent variables into a video. 
To deal with instability in training of GAN with such advanced networks, we adopt a recently proposed model, Wasserstein GAN, and propose a novel method to train it stably in an end-to-end manner. The experimental results demonstrate the effectiveness of our methods. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Data <s> We propose a self-supervised framework for learning facial attributes by simply watching videos of a human face speaking, laughing, and moving over time. To perform this task, we introduce a network, Facial Attributes-Net (FAb-Net), that is trained to embed multiple frames from the same video face-track into a common low-dimensional space. With this approach, we make three contributions: first, we show that the network can leverage information from multiple source frames by predicting confidence/attention masks for each frame; second, we demonstrate that using a curriculum learning regime improves the learned embedding; finally, we demonstrate that the network learns a meaningful face embedding that encodes information about head pose, facial landmarks and facial expression, i.e. facial attributes, without having been supervised with any labelled data. We are comparable or superior to state-of-the-art self-supervised methods on these tasks and approach the performance of supervised methods. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Data <s> Recent research progress in facial attribute recognition has been dominated by small improvements on the only large-scale publicly available benchmark dataset, CelebA [18]. We propose to extend attribute prediction research to unconstrained videos. Applying attribute models trained on CelebA – a still image dataset – to video data highlights several major problems with current models, including the lack of consideration for both time and motion. Many facial attributes (e.g. gender, hair color) should be consistent throughout a video, however, current models do not produce consistent results. We introduce two methods to increase the consistency and accuracy of attribute responses in videos: a temporal coherence constraint, and a motionattention mechanism. Both methods work on weakly labeled data, requiring attribute labels for only one frame in a sequence, which we call the anchor frame. The temporal coherence constraint moves the network responses of non-anchor frames toward the responses of anchor frames for each sequence, resulting in more stable and accurate attribute predictions. We use the motion between anchor and non-anchor video frames as an attention mechanism, discarding the information from parts of the non-anchor frame where no motion occurred. This motion-attention focuses the network on the moving parts of the non-anchor frames (i.e. the face). Since there is no large-scale video dataset labeled with attributes, it is essential for attribute models to be able to learn from weakly labeled data. We demonstrate the effectiveness of the proposed methods by evaluating them on the challenging YouTube Faces video dataset [31]. The proposed motion-attention and temporal coherence methods outperform attribute models trained on CelebA, as well as those fine-tuned on video data. To the best of our knowledge, this paper is the first to address the problem of facial attribute prediction in video. 
<s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Data <s> This study introduces a novel conditional recycle generative adversarial network for facial attribute transformation, which can transform high-level semantic face attributes without changing the identity. In our approach, we input a source facial image to the conditional generator with target attribute condition to generate a face with the target attribute. Then we recycle the generated face back to the same conditional generator with source attribute condition. A face which should be similar to that of the source face in personal identity and facial attributes is generated. Hence, we introduce a recycle reconstruction loss to enforce the final generated facial image and the source facial image to be identical. Evaluations on the CelebA dataset demonstrate the effectiveness of our approach. Qualitative results show that our approach can learn and generate high-quality identity-preserving facial images with specified attributes. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Data <s> Recent studies on face attribute transfer have achieved great success. A lot of models are able to transfer face attributes with an input image. However, they suffer from three limitations: (1) incapability of generating image by exemplars; (2) being unable to transfer multiple face attributes simultaneously; (3) low quality of generated images, such as low-resolution or artifacts. To address these limitations, we propose a novel model which receives two images of opposite attributes as inputs. Our model can transfer exactly the same type of attributes from one image to another by exchanging certain part of their encodings. All the attributes are encoded in a disentangled manner in the latent space, which enables us to manipulate several attributes simultaneously. Besides, our model learns the residual images so as to facilitate training on higher resolution images. With the help of multi-scale discriminators for adversarial training, it can even generate high-quality images with finer details and less artifacts. We demonstrate the effectiveness of our model on overcoming the above three limitations by comparing with other methods on the CelebA face database. A pytorch implementation is available at https://github.com/Prinsphield/ELEGANT. <s> BIB007
The development of deep neural networks makes FAE a data-driven task. That means large numbers of samples are required for training deep models to capture attribute-relevant facial details. However, contemporary studies suffer from insufficient training data. In this case, deep neural networks easily fit the characteristics of only a small number of images and suffer degraded performance. In the following, taking two commonly used datasets as examples (i.e., CelebA and LFWA), we analyze the data challenges that exist in current facial attribute databases from the perspectives of data sources, data quality, and imbalanced data, respectively. First, from the perspective of data sources, CelebA collects face data and attribute labels from celebrities, and the samples of LFWA come from online news. There is no doubt that these databases are inherently biased and do not match the general data distributions in the real world. For example, the bald attribute corresponds to a small number of samples in CelebA, but in the real world, it is a common attribute among ordinary people. Hence, more complementary facial attribute datasets that cover more real-world scenarios and a wider range of facial attributes need to be constructed in the future. An earlier work BIB001 has made an attempt to extract images from real-world outdoor videos, i.e., the Ego-Humans dataset. However, it mainly contains pedestrian attributes, and only a few facial attributes are predicted. Nevertheless, we believe that this dataset provides an inspiring idea for collecting more facial attribute-relevant images from videos in real-world scenes BIB004. Furthermore, BIB005 have made the first attempt to estimate facial attributes in videos. They use weakly labeled data in the YouTube Faces dataset (with attribute labels) to keep attribute prediction consistent and accurate in videos by imposing a temporal coherence constraint and a motion-attention mechanism. The temporal coherence constraint ensures response invariance across video frames by transferring responses from labeled frames to unlabeled ones. Meanwhile, the motion-attention mechanism forces their model to focus on face parts by exploring the motion relationship between labeled and unlabeled frames. On the one hand, this research highlights the importance of temporal and motion factors when designing video-based deep FAE models. On the other hand, it also underscores the need for new video datasets labeled with facial attributes in future studies. Second, from the perspective of data quality, most faces in CelebA and LFWA are frontal, aligned images with high quality. However, real-world data often include low-quality, partially visible images with varied illumination and poses. Thus, attribute prediction models trained on these images can hardly learn representative features of real-world data. Therefore, we expect that more adequate real-world training data will become available to strengthen the estimation abilities of future attribute classifiers. Finally, for CelebA, LFWA, or real-world face images, imbalanced data induce attribute estimation models to pay more attention to learning the features of majority samples. Consequently, the resulting biased attribute classifiers may fail to identify the minority classes in some extreme cases. Although many efforts have been made to solve this class-imbalance learning issue from the perspective of algorithms, as mentioned in Sect. 6.1, data support is still an urgent need.
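The class-imbalance issue mentioned above is often tackled on the loss side. Below is a minimal sketch of one common remedy, re-weighting the positive term of a per-attribute binary cross-entropy by inverse class frequency, using PyTorch's `BCEWithLogitsLoss`. This is an illustration rather than a reproduction of any cited method, and the attribute frequencies are made up.

```python
import torch
import torch.nn as nn

# Hypothetical per-attribute positive-sample fractions estimated from training data,
# e.g., a rare attribute such as "bald" versus common ones.
pos_fraction = torch.tensor([0.02, 0.45, 0.60, 0.10])        # 4 example attributes

# Up-weight positives of rare attributes: weight = (#negatives / #positives).
pos_weight = (1.0 - pos_fraction) / pos_fraction

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

# logits: (batch, num_attributes) raw network outputs; labels: 0/1 targets.
logits = torch.randn(8, 4)
labels = torch.randint(0, 2, (8, 4)).float()
loss = criterion(logits, labels)
```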
Besides, the test datasets (i.e., target domains) may have different distributions from the training datasets (i.e., source domains). This is generally called the domain adaptation issue, which can be viewed as a distribution imbalance: even if the source data exhibit a particular property, the target domain does not always follow the same pattern. Such a discrepancy between data distributions negatively impacts the generalization ability over unseen test data and leads to significant performance deterioration. Therefore, on the one hand, we anticipate that more facial attribute images will be released so that discriminative features of majority and minority samples can be captured equally well in the presence of class-imbalanced data. On the other hand, more algorithms are expected to be developed to solve the domain adaptation issue in attribute estimation. In this section, we start with the problems of current FAM databases and analyze the challenges and opportunities related to data sources. Then, we express an expectation for the video data type, as we have done in the discussion of facial attribute prediction. Finally, taking the performance metrics into account, we believe that future deep FAM methods need to establish a unified standard for evaluating their experimental results. First, in terms of data sources, note that almost all deep FAM algorithms are trained on the CelebA database, while very few of them also use the LFW dataset. The data sources are extremely inadequate, and the facial attributes that can be manipulated are considerably limited. Of the 40 annotated attributes, only several notable ones [e.g., hair colors BIB006, glasses BIB002, and smiling BIB007] can be manipulated with satisfactory performance. Such a limitation causes a degradation in performance when manipulating a wider variety of attribute types. Therefore, we expect that more high-quality facial attribute databases will be released and that more kinds of facial attributes will become manipulable in the future. Second, from the perspective of the data type, FAM on video data has not yet been studied. Manipulating video facial attributes requires models to yield lifelike details: as faces change across the frames of a video, models must still locate the to-be-manipulated areas precisely and keep attribute manipulation consistent for the same identity. Nevertheless, this task is valuable in many real-world entertainment situations, such as beauty makeup videos, where the hair colors in the videos might be varied according to users' preferences. However, to date, there are no large-scale video data available for training video-based attribute manipulation models. The possible reasons are that it is difficult to track and annotate facial attributes in large-scale videos due to spatial and temporal dynamics BIB003, and that the quality of video data has significant effects on such a synthesis task. We expect that future work will shift toward collecting and annotating video data with facial attributes to further promote the video-based deep FAM task. Finally, from the perspective of performance metrics, as mentioned in Sect. 3, contemporary research either evaluates generated images by statistical surveys or seeks help from other face-related tasks, such as attribute estimation and landmark detection. A unified and standard metric system for qualitative and quantitative analyses has not yet formed.
We expect that the evaluation metrics of deep FAM methods will be further developed and will converge toward a relatively unified standard in the future.
A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> Attribute recognition, particularly facial, extracts many labels for each image. While some multi-task vision problems can be decomposed into separate tasks and stages, e.g., training independent models for each task, for a growing set of problems joint optimization across all tasks has been shown to improve performance. We show that for deep convolutional neural network (DCNN) facial attribute extraction, multi-task optimization is better. Unfortunately, it can be difficult to apply joint optimization to DCNNs when training data is imbalanced, and re-balancing multi-label data directly is structurally infeasible, since adding/removing data to balance one label will change the sampling of the other labels. This paper addresses the multi-label imbalance problem by introducing a novel mixed objective optimization network (MOON) with a loss function that mixes multiple task objectives with domain adaptive re-weighting of propagated loss. Experiments demonstrate that not only does MOON advance the state of the art in facial attribute recognition, but it also outperforms independently trained DCNNs using the same data. When using facial attributes for the LFW face recognition task, we show that our balanced (domain adapted) network outperforms the unbalanced trained network. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> Humans focus attention on different face regions when recognizing face attributes. Most existing face attribute classification methods use the whole image as input. Moreover, some of these methods rely on fiducial landmarks to provide defined face parts. In this paper, we propose a cascade network that simultaneously learns to localize face regions specific to attributes and performs attribute classification without alignment. First, a weakly-supervised face region localization network is designed to automatically detect regions (or parts) specific to attributes. Then multiple part-based networks and a whole-image-based network are separately constructed and combined together by the region switch layer and attribute relation layer for final attribute classification. A multi-net learning method and hint-based model compression is further proposed to get an effective localization model and a compact classification model, respectively. Our approach achieves significantly better performance than state-of-the-art methods on unaligned CelebA dataset, reducing the classification error by 30.9%. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. 
We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas. <s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> State-of-the-art methods of attribute detection from faces almost always assume the presence of a full, unoccluded face. Hence, their performance degrades for partially visible and occluded faces. In this paper, we introduce SPLITFACE, a deep convolutional neural network-based method that is explicitly designed to perform attribute detection in partially occluded faces. Taking several facial segments and the full face as input, the proposed method takes a data driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the implementation of committee machine techniques for combining local and global decisions to boost performance. With access to segment-based predictions, SPLITFACE can predict well those attributes which are localized in the visible parts of the face, without having to rely on the presence of the whole face. We use the CelebA and LFWA facial attribute datasets for standard evaluations. We also modify both datasets, to occlude the faces, so that we can evaluate the performance of attribute detection algorithms on partial faces. Our evaluation shows that SPLITFACE significantly outperforms other recent methods especially for partial faces. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> In this paper, we propose a new deep framework which predicts facial attributes and leverage it as a soft modality to improve face identification performance. Our model is an end to end framework which consists of a convolutional neural network (CNN) whose output is fanned out into two separate branches; the first branch predicts facial attributes while the second branch identifies face images. Contrary to the existing multi-task methods which only use a shared CNN feature space to train these two tasks jointly, we fuse the predicted attributes with the features from the face modality in order to improve the face identification performance. Experimental results show that our model brings benefits to both face identification as well as facial attribute prediction performance, especially in the case of identity facial attributes such as gender prediction. We tested our model on two standard datasets annotated by identities and face attributes. Experimental results indicate that the proposed model outperforms most of the current existing face identification and attribute prediction methods. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> We present a novel and unified deep learning framework which is capable of learning domain-invariant representation from data across multiple domains. Realized by adversarial training with additional ability to exploit domain-specific information, the proposed network is able to perform continuous cross-domain image translation and manipulation, and produces desirable output images accordingly. 
In addition, the resulting feature representation exhibits superior performance of unsupervised domain adaptation, which also verifies the effectiveness of the proposed model in learning disentangled features for describing cross-domain data. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> Unsupervised image-to-image translation aims at learning the relationship between samples from two image domains without supervised pair information. The relationship between two domain images can be one-to-one, one-to-many or many-to-many. In this paper, we study the one-to-many unsupervised image translation problem in which an input sample from one domain can correspond to multiple samples in the other domain. To learn the complex relationship between the two domains, we introduce an additional variable to control the variations in our one-to-many mapping. A generative model with an XO-structure, called the XOGAN, is proposed to learn the cross domain relationship among the two domains and the ad- ditional variables. Not only can we learn to translate between the two image domains, we can also handle the translated images with additional variations. Experiments are performed on unpaired image generation tasks, including edges-to-objects translation and facial image translation. We show that the proposed XOGAN model can generate plausible images and control variations, such as color and texture, of the generated images. Moreover, while state-of-the-art unpaired image generation algorithms tend to generate images with monotonous colors, XOGAN can generate more diverse results. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations. <s> BIB008 </s> A Survey of Deep Facial Attribute Analysis <s> Algorithms <s> Visual explanation enables humans to understand the decision making of deep convolutional neural network (CNN), but it is insufficient to contribute to improving CNN performance. In this paper, we focus on the attention map for visual explanation, which represents a high response value as the attention location in image recognition. This attention region significantly improves the performance of CNN by introducing an attention mechanism that focuses on a specific region in an image. In this work, we propose Attention Branch Network (ABN), which extends a response-based visual explanation model by introducing a branch structure with an attention mechanism. 
ABN can be applicable to several image recognition tasks by introducing a branch for the attention mechanism and is trainable for visual explanation and image recognition in an end-to-end manner. We evaluate ABN on several image recognition tasks such as image classification, fine-grained recognition, and multiple facial attribute recognition. Experimental results indicate that ABN outperforms the baseline models on these image recognition tasks while generating an attention map for visual explanation. Our code is available. <s> BIB009
As mentioned before, part-based deep FAE methods and holistic deep FAE methods develop in parallel. The former pays more attention to locating attributes, and the latter concentrates more on modeling attribute relationships. Below, we describe the main challenges from the perspective of algorithms and analyze the future trends for both types of methods. For the part-based methods, earlier approaches draw support from existing part detectors to discover facial components. However, these detected face parts are coarse and attribute-independent; they only distinguish the whole face from other face-irrelevant parts, such as the background of an image. Considering that existing detectors are not customized for deep FAE, some researchers have begun to seek help from other face-related auxiliary tasks, which focus more on facial details rather than the whole face. There are also some studies that utilize labeled key points to partition facial regions. However, well-labeled facial images are not always available in real-world applications, and the performance of the auxiliary tasks limits the accuracy of the downstream classification task. We believe that an end-to-end strategy will dominate future part-based deep FAE algorithms, where the attribute-relevant regions and the corresponding predictions are yielded in a unified framework BIB009. BIB002 have attempted to tackle this issue, but learning a region for each attribute is cumbersome and computationally expensive, because several attributes might appear in the same region of a face. In addition, part-based methods show great superiority when dealing with data under in-the-wild environmental conditions, such as illumination variations, occlusions, and non-frontal faces. By learning the locations of different attributes, part-based methods integrate the information from non-occluded areas to predict attributes in occluded areas. BIB004 address this issue by partitioning facial parts manually according to key points. However, such annotations are not always available, so integrating these non-occluded areas adaptively is becoming a future trend. Besides, BIB004 test their model's attribute estimation performance on partial faces by adding occlusions artificially to the original databases, but this operation is not a standardized test protocol. Therefore, the lack of in-the-wild data remains a challenge for training deep FAE networks for wild environments. For holistic methods, state-of-the-art approaches design networks with different architectures for sharing common features and learning attribute-specific features at different layers. However, these methods define attribute relationships for network design by grouping attributes manually, which can be taken as extra prior information. Since different individuals might give different attribute partitions according to locations or semantics, it is difficult to determine which facial attribute groups are suitable and optimal. Therefore, how to discover attribute relationships adaptively during the training process, without artificially given prior information, should be a focus of future work. In addition, facial attributes have been taken as auxiliary and complementary information for many face-related tasks, such as face recognition (Kumar et al. 2009; BIB001; BIB005), face detection, and facial landmark localization. Kumar et al.
(2009) first introduce the concept of 'attribute' to facilitate face verification with compact visual descriptions and low-level attribute features. In contrast, BIB001 utilize the mixed objective optimization network with the Euclidean loss to learn deep attribute features for promoting facial verification. Experiments illustrate that despite only 40 attributes being used, the work of BIB001 still performs better than that of Kumar et al. (2009), which extracts features of 73 facial attributes. Apart from employing features learned by attribute prediction to assist face recognition, joint learning of facial attribute-relevant tasks can further enhance their respective robustness and performance by discovering complementary information. For example, considering the inherent dependencies among face-related tasks, one line of work designs a cascaded CNN for simultaneously learning face detection, facial landmark localization, and facial attribute estimation under a multi-task framework to improve the performance of each task. The authors further attempt to perform joint face recognition and facial attribute estimation by taking the relationship between identities and attributes into account. Therefore, it is reasonable to believe that the combination of different face-related tasks is becoming a promising research direction due to the complementary relationships among them. State-of-the-art deep FAM methods can be grouped into two categories: model-based methods and extra condition-based methods. Model-based methods tackle an attribute domain transfer issue and use the adversarial loss to supervise the process of image generation. Extra condition-based methods alter desired attributes with given conditional attributes concatenated with the to-be-manipulated images in the encoding space. The main difference between the two types of methods is whether extra conditions are required. Model-based methods take no extra conditions as inputs, and one trained model only changes one corresponding attribute. This strategy is task-specific and helps to generate more photorealistic images, but it is difficult to guarantee that attribute-irrelevant details remain unchanged, because such methods operate on the whole image directly. Few methods focus on this issue, except for ResGAN proposed by BIB003. However, ResGAN generates residual images for locating attribute-relevant regions under a sparsity constraint, which relies heavily on control parameters rather than on the attributes themselves. Hence, how to design networks that synthesize the desired photorealistic attributes while keeping other attribute-irrelevant details unchanged remains a significant challenge for the future. In addition, as multi-domain transfer has become a hot research topic BIB006 BIB007, we expect that these novel domain transfer algorithms will migrate to deep FAM methods to yield more appealing performance. Extra condition-based methods take attribute vectors or reference exemplars as conditions. These algorithms edit facial attributes by changing the values of attribute vectors or the latent codes of reference exemplars. One advantage of this strategy is that multiple attributes can be manipulated simultaneously by altering the corresponding condition values. However, the concomitant disadvantage is also inevitable: these methods cannot change attributes continuously, since the values of attribute vectors are edited discretely.
We believe that this shortcoming can be solved by interpolation schemes BIB008 or semantic component decomposition in the future. In addition, as mentioned before, reference-exemplar-based algorithms are becoming a promising research direction. Compared with merely altering attribute vectors manually, the more specific details that appear in reference images can be exploited to generate more photorealistic results.
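To make the relationship between extra condition-based manipulation and interpolation-based continuous editing concrete, here is a toy sketch, a hypothetical architecture rather than a specific published model: a decoder consumes an image's latent code concatenated with an attribute vector, and intermediate attribute strengths are obtained by linearly interpolating between source and target attribute vectors. The attribute index used for "smiling" is only illustrative.

```python
import torch
import torch.nn as nn

class ConditionalDecoder(nn.Module):
    """Toy decoder that maps [latent code, attribute vector] to an image."""
    def __init__(self, latent_dim=128, attr_dim=40, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim + attr_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 3 * img_size * img_size),
            nn.Tanh(),
        )

    def forward(self, z, attrs):
        x = torch.cat([z, attrs], dim=1)          # condition on the attribute vector
        out = self.net(x)
        return out.view(-1, 3, self.img_size, self.img_size)

decoder = ConditionalDecoder()
z = torch.randn(1, 128)                           # latent code of the input face
attrs_src = torch.zeros(1, 40)                    # e.g., "not smiling"
attrs_tgt = attrs_src.clone()
attrs_tgt[0, 31] = 1.0                            # hypothetical index for "smiling"

# Continuous editing: interpolate the condition instead of flipping it discretely.
for alpha in torch.linspace(0.0, 1.0, steps=5):
    attrs = (1 - alpha) * attrs_src + alpha * attrs_tgt
    image = decoder(z, attrs)                     # gradually stronger attribute
```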
A Survey of Deep Facial Attribute Analysis <s> Applications <s> Computer vision applications for mobile phones are gaining increasing attention due to several practical needs resulting from the popularity of digital cameras in today's mobile phones. In this work, we consider the task of face detection and authentication in mobile phones and experimentally analyze a face authentication scheme using Haar-like features with Ad-aBoost for face and eye detection, and local binary pattern (LBP) approach for face authentication. For comparison, another approach to face detection using skin color for fast processing is also considered and implemented. Despite the limited CPU and memory capabilities of today's mobile phones, our experimental results show good face detection performance and average authentication rates of 82% for small-sized faces (40times40 pixels) and 96% for faces of 80times80 pixels. The system is running at 2 frames per second for images of 320times240 pixels. The obtained results are very promising and assess the feasibility of face authentication in mobile phones. Directions for further enhancing the performance of the system are also discussed. <s> BIB001 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> In this paper, we present a compositional and dynamic model for face aging. The compositional model represents faces in each age group by a hierarchical And-or graph, in which And nodes decompose a face into parts to describe details (e.g., hair, wrinkles, etc.) crucial for age perception and Or nodes represent large diversity of faces by alternative selections. Then a face instance is a transverse of the And-or graph-parse graph. Face aging is modeled as a Markov process on the parse graph representation. We learn the parameters of the dynamic model from a large annotated face data set and the stochasticity of face aging is modeled in the dynamics explicitly. Based on this model, we propose a face aging simulation and prediction algorithm. Inversely, an automatic age estimation algorithm is also developed under this representation. We study two criteria to evaluate the aging results using human perception experiments: (1) the accuracy of simulation: whether the aged faces are perceived of the intended age group, and (2) preservation of identity: whether the aged faces are perceived as the same person. Quantitative statistical analysis validates the performance of our aging model and age estimation algorithm. <s> BIB002 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> Automatic face recognition in unconstrained environments is a challenging task. To test current trends in face recognition algorithms, we organized an evaluation on face recognition in mobile environment. This paper presents the results of 8 different participants using two verification metrics. Most submitted algorithms rely on one or more of three types of features: local binary patterns, Gabor wavelet responses including Gabor phases, and color information. The best results are obtained from UNILJ-ALP, which fused several image representations and feature types, and UC-HU, which learns optimal features with a convolutional neural network. Additionally, we assess the usability of the algorithms in mobile devices with limited resources. 
<s> BIB003 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> As mobile devices are becoming more ubiquitous, it becomes important to continuously verify the identity of the user during all interactions rather than just at login time. This paper investigates the effectiveness of methods for fully-automatic face recognition in solving the Active Authentication (AA) problem for smartphones. We report the results of face authentication using videos recorded by the front camera. The videos were acquired while the users were performing a number of tasks under three different ambient conditions to capture the type of variations caused by the 'mobility' of the devices. An inspection of these videos reveal a combination of favorable and challenging properties unique to smartphone face videos. In addition to variations caused by the mobility of the device, other challenges in the dataset include occlusion, occasional pose changes, blur and face/fiducial points localization errors. We evaluate still image and image set-based authentication algorithms using intensity features extracted around fiducial points. The recognition rates drop dramatically when enrollment and test videos come from different sessions. We will make the dataset and the computed features publicly available1 to help the design of algorithms that are more robust to variations due to factors mentioned above. <s> BIB004 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> We present a method using facial attributes for continuous authentication of smartphone users. We train a bunch of binary attribute classifiers which provide compact visual descriptions of faces. The learned classifiers are applied to the image of the current user of a mobile device to extract the attributes and then authentication is done by simply comparing the calculated attributes with the enrolled attributes of the original user. Extensive experiments on two publicly available unconstrained mobile face video datasets show that our method is able to capture meaningful attributes of faces and performs better than the previously proposed LBP-based authentication method. We also provide a practical variant of our method for efficient continuous authentication on an actual mobile device by doing extensive platform evaluations of memory usage, power consumption, and authentication speed. Display Omitted Facial attributes are effective for continuous authentication on mobile devices.Attribute-based features are more robust than the low-level ones for authentication.Fusion of attribute-based and low-level features gives the best result.The proposed approach allows fast and energy efficient enrollment and authentication. <s> BIB005 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data because the training low/high-res and high-res attribute images may not necessarily align with each other, and to 2) allow easy control of the appearance of the generated face via the input attributes. 
We demonstrate impressive results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces impressive and interesting results on identity transfer. We demonstrate three applications on identity-guided conditional CycleGAN: identity-preserving face superresolution, face swapping, and frontal face generation, which consistently show the advantage of our new method. <s> BIB006 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> Facial makeup transfer aims to translate the makeup style from a given reference makeup face image to another non-makeup one while preserving face identity. Such an instance-level transfer problem is more challenging than conventional domain-level transfer tasks, especially when paired data is unavailable. Makeup style is also different from global styles (e.g., paintings) in that it consists of several local styles/cosmetics, including eye shadow, lipstick, foundation, and so on. Extracting and transferring such local and delicate makeup information is infeasible for existing style transfer methods. We address the issue by incorporating both global domain-level loss and local instance-level loss in an dual input/output Generative Adversarial Network, called BeautyGAN. Specifically, the domain-level transfer is ensured by discriminators that distinguish generated images from domains' real samples. The instance-level loss is calculated by pixel-level histogram loss on separate local facial regions. We further introduce perceptual loss and cycle consistency loss to generate high quality faces and preserve identity. The overall objective function enables the network to learn translation on instance-level through unsupervised adversarial learning. We also build up a new makeup dataset that consists of 3834 high-resolution face images. Extensive experiments show that BeautyGAN could generate visually pleasant makeup faces and accurate transferring results. Data and code are available at http://liusi-group.com/projects/BeautyGAN. <s> BIB007 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles. <s> BIB008 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> Deep neural networks have recently been used to edit images with great success, in particular for faces. 
However, they are often limited to only being able to work at a restricted range of resolutions. Many methods are so flexible that face edits can often result in an unwanted loss of identity. This work proposes to learn how to perform semantic image edits through the application of smooth warp fields. Previous approaches that attempted to use warping for semantic edits required paired data, i.e. example images of the same subject with different semantic attributes. In contrast, we employ recent advances in Generative Adversarial Networks that allow our model to be trained with unpaired data. We demonstrate face editing at very high resolutions (4k images) with a single forward pass of a deep network at a lower resolution. We also show that our edits are substantially better at preserving the subject's identity. The robustness of our approach is demonstrated by showing plausible image editing results on the Cub200 birds dataset. To our knowledge this has not been previously accomplished, due the challenging nature of the dataset. <s> BIB009 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> This paper presents a comprehensive study of post-mortem human iris recognition carried out for 1200 near-infrared and 1787 visible-light samples collected from 37 deceased individuals kept in mortuary conditions. We used four independent iris recognition methods (three commercial and one academic) to analyze genuine and impostor comparison scores and check the dynamics of iris quality decay over a period of up to 814 h after death. This study shows that post-mortem iris recognition may be close-to-perfect approximately 5–7 h after death and occasionally is still viable even 21 days after death. These conclusions contradict the statements present in the past literature that the iris is unusable as a biometrics shortly after death, and show that the dynamics of post-mortem changes to the iris that are important for biometric identification are more moderate than previously hypothesized. This paper contains a thorough medical commentary that helps to understand which post-mortem metamorphoses of the eye may impact the performance of automatic iris recognition. An important finding is that false-match probability is higher when live iris images are compared with post-mortem samples than when only live samples are used in comparisons. This paper conforms to reproducible research and the database used in this study is made publicly available to facilitate research on post-mortem iris recognition. To the best of our knowledge, this paper offers the most comprehensive evaluation of post-mortem iris recognition and the largest database of post-mortem iris images. <s> BIB010 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> We present a deep learning-based method for removing makeup effects (de-makeup) in a face image. This problem poses a major challenge due to obscuring of the underlying facial features by cosmetics, which is very important in multimedia applications in the field of security, entertainment, and social networking. To address this task, we propose the bidirectional tunable de-makeup network (BTD-Net), which jointly learns the makeup process to aid in learning the de-makeup process. For tractable learning of the makeup process, which is a one-to-many mapping determined by the cosmetics that are applied, we introduce a latent variable that reflects the makeup style. 
This latent variable is extracted in the de-makeup process and used as a condition on the makeup process to constrain the one-to-many mapping to a specific solution. Through extensive experiments, our proposed BTD-Net is found to surpass the state-of-art techniques in estimating realistic non-makeup faces that correspond to the input makeup images. We additionally show that applications such as tuning the amount of makeup can be enhanced through the use of this method. <s> BIB011 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> This paper presents a novel approach for synthesizing automatically age-progressed facial images in video sequences using Deep Reinforcement Learning. The proposed method models facial structures and the longitudinal face-aging process of given subjects coherently across video frames. The approach is optimized using a long-term reward, Reinforcement Learning function with deep feature extraction from Deep Convolutional Neural Network. Unlike previous age-progression methods that are only able to synthesize an aged likeness of a face from a single input image, the proposed approach is capable of age-progressing facial likenesses in videos with consistently synthesized facial features across frames. In addition, the deep reinforcement learning method guarantees preservation of the visual identity of input faces after age-progression. Results on videos of our new collected aging face AGFW-v2 database demonstrate the advantages of the proposed solution in terms of both quality of age-progressed faces, temporal smoothness, and cross-age face verification. <s> BIB012 </s> A Survey of Deep Facial Attribute Analysis <s> Applications <s> Since it is difficult to collect face images of the same subject over a long range of age span, most existing face aging methods resort to unpaired datasets to learn age mappings. However, the matching ambiguity between young and aged face images inherent to unpaired training data may lead to unnatural changes of facial attributes during the aging process, which could not be solved by only enforcing identity consistency like most existing studies do. In this paper, we propose an attribute-aware face aging model with wavelet based Generative Adversarial Networks (GANs) to address the above issues. To be specific, we embed facial attribute vectors into both the generator and discriminator of the model to encourage each synthesized elderly face image to be faithful to the attribute of its corresponding input. In addition, a wavelet packet transform (WPT) module is incorporated to improve the visual fidelity of generated images by capturing age-related texture details at multiple scales in the frequency space. Qualitative results demonstrate the ability of our model in synthesizing visually plausible face images, and extensive quantitative evaluation results show that the proposed method achieves state-of-the-art performance on existing datasets. <s> BIB013
Varying viewpoints of the same person pose a difficult challenge for maintaining identity-attribute consistency in deep FAE methods. On the one hand, such viewpoint diversification helps to learn richer features of the same person. On the other hand, images from different viewpoints might yield different attribute predictions even for the same identity. For example, side-face images might produce different results than front-face images for the high cheekbones attribute, since side views do not emphasize it. Therefore, attribute inconsistency across viewpoints becomes a severe problem for the same identity. BIB006 propose a probabilistic confidence criterion to address this inconsistency issue. Specifically, this criterion first extracts the most confident face image for each subject and then chooses the result with the highest confidence as the final prediction of each attribute for that subject. However, filtering out the most confident image via such criteria might not be the optimal strategy, because the features of all images with different views are not fully exploited in making the estimation. Nowadays, digital mobile devices contain considerable amounts of valuable personal information, such as bank accounts and private emails BIB005. These personal details make the devices targets of various attacks. Hence, biological characteristics, such as fingerprints and irises BIB010, have been widely used as device passwords to further protect users' private information. This technique is called biometric verification. Recently, an increasing number of biometric-verification-based algorithms have emerged as a solution for continuous authentication on mobile devices, and many researchers have committed to designing active authentication algorithms based on face biometrics. For example, studies in BIB004, BIB003, BIB001 detect faces in camera sensor images and further extract low-level features for the authentication of smartphone users. Considering that facial attributes capture more detailed characteristics than the full face, we believe that facial attributes will bring new opportunities for biometric identification in real-world applications. BIB005 have attempted active authentication on mobile devices using facial attributes. A set of binary attribute classifiers is trained to estimate whether attributes are present in images of the current user of a mobile device. Consequently, authentication can be implemented by comparing the recognized attributes with the originally enrolled attributes. However, BIB005 extract traditional features, such as the LBP feature, which are not task-specific for attribute estimation and are less discriminative than deep features. To some extent, these traditional features and SVM classifiers balance verification accuracy and mobile performance, whereas other methods with higher accuracy might incur tremendous computation or memory costs. Therefore, future challenges mainly lie in two aspects. The first is to better apply facial attributes to mobile device authentication. The second is to explore more discriminative deep features and classifiers under the trade-off between verification accuracy and mobile performance. Nevertheless, we expect that facial attributes will contribute to further advancing biometric verification on digital mobile devices.
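At its core, the attribute-based authentication scheme described above compares a freshly estimated attribute vector with the enrolled one. The sketch below is an illustrative simplification, not the cited implementation: it measures the fraction of matching binary attributes and accepts the user when it exceeds a threshold; per-attribute confidences could be used as weights but are omitted for brevity.

```python
import numpy as np

def authenticate(enrolled, observed, threshold=0.85):
    """Accept the current user if enough binary attributes match the enrollment.

    enrolled, observed : arrays of 0/1 attribute decisions of equal length
    threshold          : minimum fraction of matching attributes to accept
    """
    enrolled = np.asarray(enrolled, dtype=int)
    observed = np.asarray(observed, dtype=int)
    agreement = np.mean(enrolled == observed)   # fraction of matching attributes
    return agreement >= threshold, agreement

# Hypothetical enrollment and a new observation from the front camera.
enrolled_attrs = np.array([1, 0, 1, 1, 0, 0, 1, 0])
observed_attrs = np.array([1, 0, 1, 0, 0, 0, 1, 0])
accepted, score = authenticate(enrolled_attrs, observed_attrs)
print(accepted, score)   # True, 0.875
```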
Face makeup BIB007 BIB008 BIB011 and face aging BIB002 BIB012 BIB013 are two hot topics in deep FAM-related applications. They play important roles in mobile device entertainment (e.g., beauty cameras) and identity-relevant face verification. Compared with general FAM, they focus on more subtle facial attribute details. Face makeup concentrates on makeup-related attributes, such as the types of eyeshadows and the colors of lipsticks. The focus of studies lies in facial makeup transfer and removal BIB008 BIB011, where makeup transfer aims to map one makeup style to another to generate different makeup styles BIB007, and makeup removal performs the opposite process, cleaning off the existing makeup and providing support for makeup-invariant face verification BIB011. In terms of face aging, the task renders face images across a wide range of ages while keeping identity information intact. Hence, it can not only be applied to digital entertainment but also support public safety, such as fugitive searches and cross-age identity verification. The most crucial issue in face aging is the lack of sufficient paired images of the same person at different ages BIB013. Recently, the development of deep learning has led face makeup and face aging to promising results, and they have become important research branches independent of general deep FAM methods. We expect that the development of these two branches will open up promising prospects for future real-world applications. Besides, resolution limitation is another tough challenge in real-world facial manipulation. Existing methods only work well within a limited range of resolutions and under lab conditions. This limitation encourages combining face super-resolution with deep FAM algorithms. For example, BIB006 propose a conditional version of CycleGAN to generate face images under the guidance of attributes for face super-resolution. Specifically, conditional CycleGAN takes a pair of low/high-resolution faces and an attribute vector extracted from the high-resolution one as inputs. Conditioned on the attributes of the original high-resolution image, this model learns to generate a high-resolution version of the original low-resolution image. Moreover, BIB009 apply smooth warp fields to GANs for manipulating face images at very high resolutions through a deep network operating at a lower resolution. All these schemes inspire researchers to integrate state-of-the-art face super-resolution methods into attribute manipulation to achieve a win-win situation.
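As an illustration of how an attribute vector can condition a super-resolution generator, here is a hypothetical sketch in the spirit of the conditional approaches above, not the cited conditional CycleGAN: the attribute vector is tiled spatially and concatenated with the low-resolution input as extra channels before upsampling.

```python
import torch
import torch.nn as nn

class AttrConditionedUpsampler(nn.Module):
    """Toy generator: low-resolution face + attribute vector -> higher resolution."""
    def __init__(self, attr_dim=40):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + attr_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, lr_image, attrs):
        b, _, h, w = lr_image.shape
        # Tile the attribute vector into a (B, attr_dim, H, W) map and concatenate.
        attr_map = attrs.view(b, -1, 1, 1).expand(b, attrs.shape[1], h, w)
        return self.body(torch.cat([lr_image, attr_map], dim=1))

gen = AttrConditionedUpsampler()
lr = torch.rand(1, 3, 32, 32)                     # low-resolution input face
attrs = torch.zeros(1, 40)
attrs[0, 20] = 1.0                                # hypothetical target attribute
hr = gen(lr, attrs)                               # output of shape (1, 3, 128, 128)
```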
A Survey on Star Identification Algorithms <s> The Beginning <s> AbstractIn this paper, we discuss an attitude determination approach for autonomous,on-board, near-real-timedetermination of spacecraft attitude with sub-ten-arc-secondprecision. The method employs a systematic pattern recognition procedure foridentifying stars sensed by, for example, charged coupled device (CCD) electro­optical star sensors. An extended Kalman filter is used to predict spacecraft attitudeat each data gathering epoch. A parallel processing division of the computations andlogic associated with data acquisition/editing, pattern recognition/attitude deter­mination and optimal prediction functions is proposed. Three intermittentlycommunicating parallel processes are proposed which appear to optimize the rateand precision of attitude estimation, subject to the contraints of on-board com­putation. Numerical experiments are summarized which support the validity andpracticality of the proposed star pattern recognition/attitude estimation strategy. Introduction In a recent presentation [I]. we proposed a three-process strategy. calledUV ASTAR, for determination of spacecraft attitude. <s> BIB001 </s> A Survey on Star Identification Algorithms <s> The Beginning <s> Abstract : A new strapped-down system for on-board, real-time spacecraft attitude determination is discussed. The electro-optical system is capable of sub-ten-arc second precision with no moving parts. The light-sensitive element is an array-type Charged Coupled Device (CCD) having about 2 x 10 to the 5th power silicon pixels. Parallel, high speed analog circuits scan the pixels (row by row) to locate and A/D convert only those pixel response values (about 100 to 200 per scan) about a preset analog threshold. Angular rate measurements from conventional rate gyros are used to estimate motion continuously. Three intermittently communicating microcomputers operate in parallel to perform the functions: (i) star image centroid determination, (ii) star pattern identification and discrete attitude estimation (subsets of measured stars are identified as specific cataloged stars), (iii) optimal Kalman attitude motion estimation/integration. The system is designed to be self-calibrating with provision for routine updating of interlock angles, gyro bias parameters, and other system calibration parameters. For redundancy and improved precision, two optical ports are employed. This interim report documents Phase I of a three phase effort to research, develop, and laboratory test the basic concepts of this new system. Included in Phase I is definition, formulation, and test of the basic algorithms, including preliminary implementations and results from a laboratory microcomputer system. (Author) <s> BIB002
After the first CCD-based star tracker was developed by Salomon in 1976 at JPL, Junkins, Turner, Strikwerda BIB001, and others began work on implementing an algorithm that could identify stars in real time. While they realized the benefit of using the easily computable sine of inter-star angles as a pattern feature, the key problem that arose was the matching of observed inter-star angles to the items in the database. After several years of work and a few conference papers, Junkins et al. published "Star Pattern Recognition and Spacecraft Attitude Determination" in 1981 BIB002. Although the algorithm was able to identify star triplets, it had the primary limitation of requiring an a priori estimate of the spacecraft's attitude before it was able to perform in real time. The reason is that Junkins had used "sub catalogs" of the sky, illustrated in Figure 3, each representing a portion of the sky, in order to accelerate the computation. Although the method was robust to non-stars because the catalog included all combinations of stars that might be observed, it only updated the attitude estimation algorithm once or twice a minute, in contrast with the angular rate sensors, which updated 1,000 times per minute. The majority of the attitude estimation was propagation of the angular rate sensor measurements, and periodic checks were established to confirm and improve the propagated attitude. Junkins' feature extraction runs very fast, in O(b) time, since it may select any three of the observed stars and measure the sine of their inter-star angles. However, since his database search considers every possible permutation of stars available in a given region of sky, the search time is O(f³).
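As a concrete illustration of the pattern feature discussed above, the snippet below computes the sines of the inter-star angles of an observed star triplet from unit line-of-sight vectors. It is an illustrative sketch, and the variable names are not taken from the original work.

```python
import numpy as np
from itertools import combinations

def interstar_sines(unit_vectors):
    """Sines of the inter-star angles for every pair in a star triplet.

    unit_vectors : (3, 3) array, one unit line-of-sight vector per row
    """
    sines = []
    for i, j in combinations(range(len(unit_vectors)), 2):
        cos_theta = np.clip(np.dot(unit_vectors[i], unit_vectors[j]), -1.0, 1.0)
        sines.append(np.sqrt(1.0 - cos_theta ** 2))   # sin from cos, angle in [0, pi]
    return np.array(sines)

# Hypothetical measured directions of three stars in the sensor frame.
stars = np.array([[0.0, 0.0, 1.0],
                  [0.1, 0.0, 1.0],
                  [0.0, 0.1, 1.0]])
stars /= np.linalg.norm(stars, axis=1, keepdims=True)
print(interstar_sines(stars))   # three pairwise sine features
```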
A Survey on Star Identification Algorithms <s> ). <s> Abstract : A new strapped-down system for on-board, real-time spacecraft attitude determination is discussed. The electro-optical system is capable of sub-ten-arc second precision with no moving parts. The light-sensitive element is an array-type Charged Coupled Device (CCD) having about 2 x 10 to the 5th power silicon pixels. Parallel, high speed analog circuits scan the pixels (row by row) to locate and A/D convert only those pixel response values (about 100 to 200 per scan) about a preset analog threshold. Angular rate measurements from conventional rate gyros are used to estimate motion continuously. Three intermittently communicating microcomputers operate in parallel to perform the functions: (i) star image centroid determination, (ii) star pattern identification and discrete attitude estimation (subsets of measured stars are identified as specific cataloged stars), (iii) optimal Kalman attitude motion estimation/integration. The system is designed to be self-calibrating with provision for routine updating of interlock angles, gyro bias parameters, and other system calibration parameters. For redundancy and improved precision, two optical ports are employed. This interim report documents Phase I of a three phase effort to research, develop, and laboratory test the basic concepts of this new system. Included in Phase I is definition, formulation, and test of the basic algorithms, including preliminary implementations and results from a laboratory microcomputer system. (Author) <s> BIB001 </s> A Survey on Star Identification Algorithms <s> ). <s> Autonomous star sensing and pattern recognition for attitude determination provides many technological challenges to modern spacecraft optical sensor design. This is mostly due to the relatively high accuracy requirements coupled with the faintness of many stellar sources, but is also due to real-time processing constraints. The performance of on-orbit star trackers is typically affected by nonlinearities such as lens distortion, coma, and chromatic aberration, as well as atmospheric refraction, thermal cycling, and possibly vibration. Despite these effects, the precise astrometric knowledge of inertially referenced stellar coordinates, along with thermoelectric cooling of the optical sensor, makes accurate star tracker calibration feasible. Land-based camera calibration, while not afflicted by many of the on-orbit difficulties, gives rise to a different set of problems relating to close range photogrammetry. The purpose of this dissertation is to report on the development and implementation of ideas related to near real-time, close range vision-based attitude sensing. The work begins with a survey of the current state of spacecraft attitude determination techniques, along with a discussion of relevant hardware devices. These ideas are extended to the case of close range photogrammetry for use in the laboratory, and a comprehensive discussion of the current experiments is presented. Topics include derivations of stellar and close range collinearity equations, mathematical modelling, CCD camera calibration techniques, resection and parameter estimation, optical aberrations, image processing and pattern recognition techniques, along with hardware and experimental results. <s> BIB002
In 1986, Groth suggested that a faster way to search the sub-catalogs would be to sort the triangles' sides in order based on permutation-invariant values such as the logarithm of the perimeter of a triangle BIB001. He admits, however, that his algorithm runs at a high polynomial power of n, much as Junkins' does. Groth's algorithm differs in that its performance has a lower constant factor. While the asymptotic order of the database is identical to Junkins', more data would be included, associated with the permutation-invariant values. In 1987, Sasaki and others published a patent showing how to improve the search time by using the area of a star triangle and the sum of the luminous intensities as preliminary markers in performing the star identification, using O(b)-time for star feature extraction. His method does not discuss the way in which the database is searched, requiring only that a "number of stars" will be selected from the database by a method using "parallel, serial, or the like" processors. It is not noted whether his database indeed contains as many star triplets as does Junkins' method, nor is the search procedure described. Later, in 1989, Van Bezooijen [2] suggested in his dissertation that the speed of the star pattern recognition algorithm could be improved by making more use of the available information in the star patterns. Van Bezooijen discussed directly the relationship between the number of stars and the amount of information available from a star pattern with a given number of stars. His analysis also included a very detailed statistical estimate of the probability that a star had been identified correctly. Unfortunately, Van Bezooijen's method sometimes required the spacecraft to slew in order to detect stars for his Star-ID method, and as such, his work is not covered in depth here. In 1991, with Junkins on his advisory committee, David Anderson BIB002 addressed the ambiguity of the order of star triplets by proposing a permutation matrix, and by developing star pattern parameters that were independent of the order in which the stars are selected. Sticking with the tried-and-true star-triple pattern, Anderson also proposed the use of an array processor to handle the matrix multiplications required to use his permutation matrices. Unfortunately, the database storage remained O(nf^2), and there was no advance made on the database search. Anderson suggested that an array processor be used to perform the matrix multiplication, decreasing the running time of the Star-ID process. The design engineer should note that array processors, while performing a comparatively large number of computations when contrasted with a serial processor, also draw a comparatively large amount of power, because each processing element tends to use a similar amount of energy per computation.
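To illustrate the kind of order-independence Groth and Anderson were after, the short sketch below computes features of a star triangle that do not change with the order in which the three stars are picked. The star directions, the choice of sorted side angles, and the log-perimeter are illustrative assumptions rather than either author's actual formulation.

```python
import numpy as np
from itertools import permutations

def triangle_features(v1, v2, v3):
    """Permutation-invariant features of a star triangle.

    The 'sides' are the three inter-star angles (radians); sorting them and
    using a symmetric quantity such as the logarithm of the perimeter removes
    any dependence on the order in which the stars were selected.
    """
    pairs = ((v1, v2), (v1, v3), (v2, v3))
    sides = sorted(float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
                   for a, b in pairs)
    return {"sorted_sides": tuple(sides),
            "log_perimeter": float(np.log(sum(sides)))}

# Any ordering of the same three (hypothetical) stars yields the same features.
stars = [np.array(v) / np.linalg.norm(v) for v in
         ([1.00, 0.01, 0.02], [0.99, 0.05, 0.01], [0.98, 0.02, 0.06])]
features = {tuple(round(s, 12) for s in triangle_features(*p)["sorted_sides"])
            for p in permutations(stars)}
assert len(features) == 1
```

In a database indexed on such values, two different observers of the same triangle always look up the same entry, which is the property these order-independent schemes rely on.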
A Survey on Star Identification Algorithms <s> Search Process Acceleration <s> A software system for a star imager for on-line satellite attitude determination is described. The system works with a single standard commercial CCD-camera with a high aperture lens and an onboard star catalogue. It is capable of both an initial course attitude determination without any prior knowledge of the satellite orientation and a high-accuracy attitude determination based on prediction and averaging of several identified star constellations. In the high accuracy mode the star imager aims at an accuracy better than 2 arc sec with a processing time of less than a few seconds. The star imager is developed for the Danish microsatellite Oersted.<<ETX>> <s> BIB001 </s> A Survey on Star Identification Algorithms <s> Search Process Acceleration <s> Many algorithms used today determine spacecraft attitude by identifying stars in the field of view of a star tracker. However, each of these methods require some a priori knowledge of the spacecraft attitude. Some algorithms have been extended to implement a computation-intense full-sky scan. Others require large data bases. Both storage and speed are concerns for autonomous onboard systems.This paper presents an algorithm that, by discretizing the sky and filtering by visual magnitude of the brightest observed star, provides a star identification process that is computationally efficient, compared to existing techniques. A savings in onboard storage of over 80% compared with a popular existing technique is documented. Results of random tests with simulated star fields are presented without false identification and with dramatic increase in speed over full-sky scan methods. <s> BIB002 </s> A Survey on Star Identification Algorithms <s> Search Process Acceleration <s> A six-feature all-sky star-field identification algorithm has been developed for autonomous attitude determination. The minimum identifiable star pattern element consists of an oriented star triplet defined by three stars, their celestial coordinates, and their visual magnitudes. This algorithm has been integrated with a charge-coupleddevice- (CCD-) based imaging camera and tested in an observatory environment. The autonomous intelligent camera identifies in real time any star field without a priori knowledge. Observatory tests on star fields with this intelligent camera are described. <s> BIB003
The next year, Liebe BIB001 made the critical connection between the feature selection process and the database search time, making the Lost-In-Space problem tractable. Liebe suggested using the two closest stars to a selected star as components of the star pattern, with the angular separations to the two closest stars and the angle between them as his parameters, as illustrated in Figure 4, and addressed the situations in which predicted stars would not be seen due to their magnitude being very close to the detection threshold. Liebe also addressed the situation in which small errors would cause the incorrect selection of the closest two stars when the distances to these stars were similar. Although Liebe's feature extraction process now took O(f lg b)-time to compute, his database could be reduced to O(n), and subsequently his database search could be performed much faster, though still in linear time. Liebe makes full use of the available independent angular pattern features, neglecting the stellar magnitudes. Liebe later implemented an optional recursive direct-match mode which could identify the remaining stars up to 20 times faster than the Lost-In-Space algorithm. Baldini's method, by contrast, is certain to conclude with only one or two possible combinations of stars. Baldini used five stars, inherently containing twelve independent features, but uses only nine when performing his identification process, suggesting that the required field of view may need to be larger for Baldini's method than for other methods to be sure that enough stars will be visible. Although non-stars will get weeded out in the process, the addition of non-stars increases the cost of most of the steps linearly or quadratically. In 1995, Ketchum BIB002 proposed a second sequential filtering algorithm, this time using the brightness of the brightest star to attempt to determine the likelihood of pointing in any particular direction. She then filters the list of possible stars using the brightness of the second brightest star. Although she admits the algorithm would need to search as much as 43% of the catalog for the appropriate stars, she notes that the storage space required by her algorithm is much less than that required by Van Bezooijen's method. Furthermore, Ketchum is one of the few to give direct empirical data regarding the running time of her algorithm, reporting that it requires up to 63 seconds to run on a 50 MHz processor. Later in 1995, Scholl BIB003 published a more straightforward method. The inter-star angles were to be ordered by the stars' relative brightness, eliminating the permutations that arise when considering the possible orders of three stars. Unfortunately, Scholl retains the O(nf^2)-sized catalogs and does not specify the search technique used. While it is true that his method uses less time to search the database than Baldini's, it is nonetheless still O(nf^2), since faster techniques were not proposed until the following year.
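A minimal sketch of the kind of triplet feature described for Liebe's method is given below: for a chosen star, the angular distances to its two nearest neighbours plus the angle spanned by those neighbours as seen from the chosen star. The brute-force neighbour search and the example vectors are illustrative assumptions, not Liebe's implementation.

```python
import numpy as np

def angular_distance(a, b):
    """Angle (radians) between two star unit vectors."""
    return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def nearest_neighbour_feature(center, others):
    """Feature triple for 'center': distances to its two nearest neighbours
    and the angle between those neighbours as seen from 'center'.

    Brute-force search is used for clarity; a flight implementation would
    index the observed stars instead.
    """
    n1, n2 = sorted(others, key=lambda o: angular_distance(center, o))[:2]
    d1, d2 = angular_distance(center, n1), angular_distance(center, n2)
    # Spanned angle: angle between the tangent-plane projections of the two
    # neighbour directions at the central star.
    p1 = n1 - np.dot(n1, center) * center
    p2 = n2 - np.dot(n2, center) * center
    cos_span = np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2))
    return d1, d2, float(np.arccos(np.clip(cos_span, -1.0, 1.0)))

# Hypothetical boresight star and three neighbours.
center = np.array([0.0, 0.0, 1.0])
others = [np.array(v) / np.linalg.norm(v) for v in
          ([0.020, 0.000, 1.0], [0.000, 0.030, 1.0], [0.050, 0.050, 1.0])]
print(nearest_neighbour_feature(center, others))
```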
A Survey on Star Identification Algorithms <s> Search Time Reduced Much Further <s> A software system for a star imager for on-line satellite attitude determination is described. The system works with a single standard commercial CCD-camera with a high aperture lens and an onboard star catalogue. It is capable of both an initial course attitude determination without any prior knowledge of the satellite orientation and a high-accuracy attitude determination based on prediction and averaging of several identified star constellations. In the high accuracy mode the star imager aims at an accuracy better than 2 arc sec with a processing time of less than a few seconds. The star imager is developed for the Danish microsatellite Oersted.<<ETX>> <s> BIB001 </s> A Survey on Star Identification Algorithms <s> Search Time Reduced Much Further <s> An autonomous star identification algorithm is described that is simple and requires less computer resources than other such algorithms. In simulations using an 8/spl times/8 degree field of view (FOV), the algorithm identifies the correct section of sky on 99.7% of the sensor orientations where spatial accuracy of the imaged star is 1 pixel (56.25 arc seconds) in standard deviation and the apparent brightness deviates by 0.4 units stellar magnitude. This compares very favorably with other algorithms in the literature. <s> BIB002 </s> A Survey on Star Identification Algorithms <s> Search Time Reduced Much Further <s> The Inertial Stellar Compass (ISC) is a real-time, miniature, low power stellar inertial attitude determination system, composed of a wide field-of-view active pixel star camera and a microelectromechanical system (MEMS) gyro assembly, with associated processing and power electronics. The integrated technologies enable an attitude determination system with an accuracy of 0.1 degree (1 sigma) to be realized at very low power and volume. The attitude knowledge provided by the ISC is applicable to a wide range of space and earth science missions that may include the use of highly maneuverable, stabilized, tumbling, or lost spacecraft. Under the guidance of NASA’s New Millennium ST-6 project, Draper Laboratory is currently developing the Inertial Stellar Compass. Its completion and flight validation will represent a breakthrough in real-time miniature attitude determination sensors. This paper describes system design, development, and validation activities currently underway at Draper. <s> BIB003 </s> A Survey on Star Identification Algorithms <s> Search Time Reduced Much Further <s> : A new highly robust algorithm, called Pyramid, is presented to identify the stars observed by star trackers in the general lost-in-space case, where no a priori estimate of pointing is available. At the heart of the method is the k-vector approach for accessing the star catalog, which provides a searchless means to obtain all cataloged stars from the whole sky that could possibly correspond to a particular measured pair, given the measured interstar angle and the measurement precision. The Pyramid logic is built on the identification of a four-star polygon structure—the Pyramid—which is associated with an almost certain star identification. Consequently, the Pyramid algorithm is capable of identifying and discarding even a high number of spikes (false stars). The method, which has already been tested in space, is demonstrated to be highly efficient, extremely robust, and fast. 
All of these features are supported by simulations and by a few ground test experimental results. <s> BIB004 </s> A Survey on Star Identification Algorithms <s> Search Time Reduced Much Further <s> We present an algorithm for recovering the orientation (attitude) of a satellite-based camera. The algorithm matches stars in an image taken with the camera to stars in a star catalogue. The algorithm is based on a geometric voting scheme in which a pair of stars in the catalogue votes for a pair of stars in the image if the angular distance between the stars of both pairs is similar. As angular distance is a symmetric relationship, each of the two catalogue stars votes for each of the image stars. The identity of each star in the image is set to the identity of the catalogue star that cast the most votes. Once the identity of the stars is determined, the attitude of the camera is computed using a quaternion-based method. We further present a fast tracking algorithm that estimates the attitude for subsequent images after the first algorithm has terminated successfully. Our method runs in comparable speed to state of the art algorithms but is still more robust than them. The system has been implemented and tested on simulated data and on real sky images. <s> BIB005
Later in 1997, Mortari proposed an even faster database search technique, the "Search-Less Algorithm" (SLA). Mortari retained the approach of selecting any pair of stars in the field of view, in O(b)-time, but he proposed using a "k-vector" to search the database in an amount of time independent of the size of the database. Figure 7 shows the k-vector construction for a 10-element database. The small horizontal lines are equally spaced, and they give the k-vector values: 0, 2, 2, 3, 3, 5, 6, 8, 9, 10. The search time for a single star pair would be O(k), where k is the number of possible star pairs with inter-star angles within the measurement tolerance. Unfortunately, the dominant cost of the algorithm came when comparing the multiple lists of stars returned for each inter-star angle. Since multiple stars had to have all their inter-star angles confirmed to be a match, the running time of the comparison would be O(bk^2), where b is the number of stars in the pattern required to guarantee uniqueness. Even though the resulting value of k depends on the uncertainty associated with the inter-star angle measurement and on the number of observable star pairs, Mortari had made the first important step in breaking the dependence of the database search time on the size of the database. Mortari's method could also reject a single non-star from a set of selected stars without losing the progress made in identifying the others. The resulting Search-Less Algorithm (SLA) was then successfully tested on orbit on an Indian satellite. A few years later, realizing that robustness to non-star "spikes" was essential to reducing the number of iterations of his algorithm, Mortari developed the "Pyramid" algorithm BIB004, which uses an optimal permutation scheme to exploit the algorithm's ability to select which stars to match. This permutation is constructed to minimize the time spent considering stars that do not match, treating them as likely non-star spikes. The code has been tested to reject non-stars in an image containing only five real stars but with 63 non-stars thrown in. The Pyramid algorithm has been successfully tested on Draper's "Inertial Stellar Compass" star tracker BIB003 and on MIT's satellites HETE and HETE-2. This algorithm is presently under exclusive contract to StarVision Technologies. Neural networks have been proposed for use in Star-ID as early as 1989. In 2000, Hong proposed using a neural network and fuzzy logic to identify the stars, as illustrated in Figure 8. Hong used the popular ordered triple, based on star brightness, and fed the resulting angular separations into a neural network. While his feature extraction process runs very fast, in O(1) time, he is forced to use a massively parallel architecture to implement the neural network. Though such techniques may be used with much success on ground-based systems, it is uncertain whether this technique is the best for use in a system with limited electrical power, or one that requires expensive radiation-tolerant hardware. Hong notes quite accurately that his algorithm performs much faster than some of the other algorithms mentioned, referencing Van Bezooijen, Quine, and Ketchum, but he failed to make a comparison with Mortari's method. Hong readily admits that his technique requires more than a quarter-million multiplications.
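Because the k-vector range search underpins both the SLA and the Pyramid algorithm, a minimal sketch of the idea is given below. The construction details (the epsilon padding and the exact line fit) are illustrative assumptions rather than the published construction; the point is that a range query costs only two line evaluations plus a short trim of the returned block, independent of the catalog size.

```python
import numpy as np

class KVector:
    """Sketch of a k-vector over a sorted list of catalog pair angles.

    A straight line is drawn from just below the smallest value to just
    above the largest one, and k[j] records how many sorted values lie at
    or below the line at (1-based) index j. A range query then needs only
    two evaluations of the line, not a search over the catalog.
    Assumes at least two values.
    """

    def __init__(self, values):
        self.y = np.sort(np.asarray(values, dtype=float))
        n = len(self.y)
        eps = 1e-9 * max(1.0, abs(self.y[-1]))
        self.m = (self.y[-1] - self.y[0] + 2 * eps) / (n - 1)
        self.q = self.y[0] - eps - self.m          # line value at index j is m*j + q
        line = self.m * np.arange(1, n + 1) + self.q
        self.k = np.searchsorted(self.y, line, side="right")

    def range(self, lo, hi):
        """Indices into the sorted array of all values in [lo, hi]."""
        n = len(self.y)
        j_lo = int(np.clip(np.floor((lo - self.q) / self.m), 1, n))
        j_hi = int(np.clip(np.ceil((hi - self.q) / self.m), 1, n))
        start, end = int(self.k[j_lo - 1]), int(self.k[j_hi - 1])
        # The block y[start:end] is a small superset of the answer; trim it.
        return [start + i for i, v in enumerate(self.y[start:end]) if lo <= v <= hi]

# Hypothetical catalog of pair cosines and a query around a measured value.
rng = np.random.default_rng(0)
kv = KVector(rng.uniform(0.5, 1.0, size=10000))
measured, tol = 0.8321, 0.0005
print(len(kv.range(measured - tol, measured + tol)))
```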
Then in 2007, Guangjun proposed a feature extraction technique, similar to Liebe's BIB001, using the inter-star angles and the angle made by two stars relative to a central star. Though his feature extraction time is O(f lg b), he uses a linear database search, performing bit-by-bit comparisons and running in O(n) time. While Guangjun's claim that his algorithm runs faster than Padgett's grid algorithm BIB002 is true, similarly to Hong, he fails to compare his algorithm to more recent, faster algorithms. In 2008, Kolomenkin BIB005 proposed a modification of the SLA algorithm to reduce the time spent cross-checking the results of the k-vector. In the original SLA algorithm, Mortari selects any four stars in the image and performs six k-vector searches to find six lists of approximately k = 100 candidate star pairs; the cross checks among these lists take O(k^2)-time. Kolomenkin instead has every candidate catalog pair returned by a k-vector search cast votes for the corresponding image stars, tallying on the order of O(kf^2) votes over the O(f^2) measured pairs. For the purpose of this analysis, the time for list insertion for keeping track of the voting is assumed to be O(1), though in practice it is difficult to perform this step in less than O(lg k)-time. So Kolomenkin's modification would run asymptotically faster in systems for which f^2 < k. Since, in a given system, f tends to be on the order of 10 to 40 and k on the order of 100, it seems dubious that the algorithm achieves any real decrease in asymptotic running time; if any improvement is achieved, it is most likely by a constant factor. In the paper, Kolomenkin did not provide any direct performance comparison to the unmodified SLA. For the reader's convenience, the major advances in the asymptotic performance of Star-ID are listed in Table 1.
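To make the pair-voting idea concrete, here is a deliberately small, brute-force sketch of a Kolomenkin-style voting step: every catalog pair whose inter-star angle matches a measured pair within tolerance votes for both image stars, and each image star takes the identity with the most votes. A real implementation would pull the candidate catalog pairs from an indexed (e.g., k-vector) lookup and would verify the winners; the data and tolerance here are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def angular_distance(a, b):
    return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def vote_identities(image_vectors, catalog_vectors, tol):
    """Assign a catalog index to each image star by pairwise angle voting.

    For every measured pair, every catalog pair with a similar angle casts a
    vote for both image stars (the pairing is ambiguous, so both catalog
    stars of the pair are credited to both image stars).
    """
    cat_pairs = [(i, j, angular_distance(catalog_vectors[i], catalog_vectors[j]))
                 for i, j in combinations(range(len(catalog_vectors)), 2)]
    votes = [Counter() for _ in image_vectors]
    for a, b in combinations(range(len(image_vectors)), 2):
        d = angular_distance(image_vectors[a], image_vectors[b])
        for i, j, dc in cat_pairs:
            if abs(d - dc) <= tol:
                for img in (a, b):
                    votes[img][i] += 1
                    votes[img][j] += 1
    return [v.most_common(1)[0][0] if v else None for v in votes]

# Toy example: the "image" is a noisy copy of three catalog stars.
rng = np.random.default_rng(2)
catalog = rng.normal(size=(50, 3))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)
image = [c + rng.normal(scale=1e-4, size=3) for c in catalog[[3, 17, 40]]]
image = [v / np.linalg.norm(v) for v in image]
print(vote_identities(image, catalog, tol=5e-4))
```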
A Survey on Star Identification Algorithms <s> Non-dimensional Algorithms <s> An efficient star pattern recognition algorithm is presented. The purpose of this algorithm is to make sure of the compatibility of the software and the imaging sensor noise level. The new CMOS APS sensors have not currently reached the same accuracy as the former CCD sensors in position as well as in magnitude determination, especially in the dynamic stages. This algorithm allows the system to recognize the star pattern 20% faster than with reference algorithms. No false recognition has been noticed. Used databases have a size 5 to 10 times smaller, depending on other reference algorithms. Oriented triangles are used to compare the measured star pattern with the catalogue stars. The triangle's characterization criteria propose several solutions in a first time. A unique solution is selected by means of identification and validation methods in a second time. First results, presented hereinafter, are very encouraging, and this algorithm may be used in the future APS star trackers. APS star tracker robustness is significantly enhanced by this method during the critical navigation phases <s> BIB001 </s> A Survey on Star Identification Algorithms <s> Non-dimensional Algorithms <s> Star identification is the most critical and important process for attitude estimation, given data from any star sensor. The main purpose of the Star Identification (Star-ID) process is to identify the observed/measured stars with the corresponding cataloged stars. The precision of the observed star directions highly depend on the calibrated accuracy of the star camera parameters, mainly the focal length f, and the optical axis offsets (x 0, y 0). When these parameters are not accurate or when the camera is not well calibrated, the proposed Nondimensional Star-ID method becomes very suitable, because it does not require accurate knowledge of these parameters. The Nondimensional Star-ID method represents a unique tool to identify the stars of uncalibrated or inaccurate parameters cameras. The basic idea derives the identification process from the observed focal plane angles which are, to first order, independent from both the focal length and the optical axis offsets. The adoption of the k-vector range search technique, makes this method very fast. Moreover, it is easy to implement, accurate, and the probability of failing Star-ID is less than 0.1% for typical star tracker design parameters. <s> BIB002
In 2003, Samaan, along with Junkins and Mortari BIB002, presented a new Star-ID technique that was robust to calibration errors. For flight systems in which temperature variations would cause cyclic variations in the accuracy of the calibration, the new technique promised to eliminate the ambiguity in matching star patterns. Instead of using the inter-star angles between stars in a triangle, Samaan used the triangle's interior angles, i.e., the angle subtended by two stars with a third star as the vertex. While the inter-star angles respond linearly to changes in temperature, the triangle interior angles are invariant to first order in the distortion, as illustrated in Figure 9. [Figure 9: Distortion from calibration variations (reprinted from BIB002).] Samaan's technique uses the smallest and largest of the interior angles to place stars in the catalog, so the feature extraction time is O(lg b). The database is subsequently searched with Mortari's k-vector search technique, taking O(k) time. Samaan's numerical tests found that at least five stars must be matched before the technique produces a reliable identification, which introduces a cross-checking routine using O(bk^2)-time. Samaan concludes the paper by using Star-ID to re-calibrate the camera. Rousseau also published a method in 2005 BIB001, which he billed as being robust to errors introduced by the new CMOS Active Pixel Sensors (APS). His metric is the sine of the star-triangle interior angles, but instead of using any combination of stars, he used only the closest two stars, and used only one of the three (two independent) interior angles as a parameter. His pattern selection means there is only one entry in the catalog for each star, so his catalog size is O(n). It also follows that the feature extraction time is O(f lg 2) = O(f). Furthermore, Rousseau does not specify a method for selecting star triangles from the catalog, but according to his published parameter distribution, the fastest method available would be a binary-tree search, taking O(k lg n)-time. Rousseau then actually computes the attitude for each star triangle and finds all the stars from the catalog that should be visible, which should take no less than O(f), and more likely O(f lg n). Each observation is then transformed into the reference frame. The observed stars are then matched up with catalog stars, and the inter-star angles compared. The process by which this is done is not described, but it likely takes O(f lg f)-time. The best of the matches over all the triangles is then selected. The final analytic time of Rousseau's algorithm is then O(kf lg f lg n). It is unclear whether Rousseau's performance data is for his original 45,000-star catalog or for another, reduced 1,300-star catalog he mentions, but his timed results are disappointing; all of his averages are longer than a second on a 650 MHz processor. Although the tests are performed in MATLAB, which unnecessarily increases the computation time, it is unclear why Rousseau claims the algorithm is fast, given his reported data and the absence of any performance comparison to another algorithm. Furthermore, he does not describe why his validation phase, which uses inter-star angles to reject incorrect matches, is more robust to APS-induced measurement errors, when the same inter-star angles are used by previous methods, like the SLA. It seems likely that he simply used the smaller star catalog, in which larger measurement errors result in fewer incorrect matches.
Rousseau's parameters, however, have the benefit that there is no ambiguity as to which star in the triangle is the listed star, as long as the star triangle does not contain nearly identical angles.
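The sketch below illustrates the focal-plane interior angle on which these non-dimensional methods are based: because it is computed from centroid differences alone, the assumed focal length and boresight offset drop out entirely, and it approximates the true spherical interior angle to first order. The pinhole projection and the star directions are illustrative assumptions, not the calibration model of either paper.

```python
import numpy as np

def focal_plane_interior_angle(pa, pb, pc):
    """Interior angle at vertex centroid pa formed with centroids pb and pc,
    computed purely from 2-D focal-plane coordinates."""
    u = np.asarray(pb, float) - np.asarray(pa, float)
    v = np.asarray(pc, float) - np.asarray(pa, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def project(s, f, offset=(0.0, 0.0)):
    """Simple pinhole projection of a unit vector onto the focal plane."""
    return (f * s[0] / s[2] + offset[0], f * s[1] / s[2] + offset[1])

def spherical_interior_angle(a, b, c):
    """True interior angle at star a on the celestial sphere."""
    pb = b - np.dot(b, a) * a
    pc = c - np.dot(c, a) * a
    cosv = np.dot(pb, pc) / (np.linalg.norm(pb) * np.linalg.norm(pc))
    return float(np.arccos(np.clip(cosv, -1.0, 1.0)))

# Three hypothetical stars near the boresight.
stars = [np.array(v) / np.linalg.norm(v) for v in
         ([0.010, 0.005, 1.0], [-0.020, 0.015, 1.0], [0.005, -0.025, 1.0])]

# The focal-plane angle is identical for different assumed focal lengths and
# boresight offsets, and it stays close to the true spherical angle.
for f, off in ((50.0, (0.0, 0.0)), (55.0, (0.3, -0.2))):
    pts = [project(s, f, off) for s in stars]
    print(f, focal_plane_interior_angle(*pts))
print("sphere", spherical_interior_angle(*stars))
```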
A Survey on Star Identification Algorithms <s> Recursive Star Identification <s> Star identification can be accomplished by several different available algorithms that identify the stars observed by a star tracker. However, efficiency and reliability remain key issues and the availability of new active pixel cameras requires new approaches. Two novel algorithms for recursive mode star identification are presented here. The first approach is derived by the spherical polygon search (SP-search) algorithm, it was used to access all the cataloged stars observed by the sensor field-of-view (FOV) and recursively add/remove candidate cataloged stars according to the predicted image motion induced by camera attitude dynamics. Star identification is then accomplished by a star pattern matching technique which identifies the observed stars in the reference catalog. The second method uses star neighborhood information and a catalog neighborhood pointer matrix to access the star catalog. In the recursive star identification process, and under the assumption of "slow" attitude dynamics, only the stars in the neighborhood of previously identified stars are considered for star identification in the succeeding frames. Numerical tests are performed to validate the absolute and relative efficiency of the proposed methods. <s> BIB001 </s> A Survey on Star Identification Algorithms <s> Recursive Star Identification <s> Two difierent theoretical approaches to the position determination problem are presented, one matrix-based and the other vector-based. Both approaches are designed for implementation in an autonomous Stellar Positioning System (SPS) that uses the following measurement sources: an astronomical camera, a clock, and a set of two inclinometers. Before presenting each of the algorithms, the reference frames utilized are deflned. The two position estimation techniques are then individually presented, followed by a discussion of real-world gravity and geometry model reflnements. <s> BIB002 </s> A Survey on Star Identification Algorithms <s> Recursive Star Identification <s> This paper will discuss the implementation of a Stellar Positioning System (SPS) as well as techniques for error mitigation in experimentation and data post-processing. The hardware used during the development and testing of the SPS will be described. Starcentroiding, star-identification, attitude estimation, and the local gravity vector were used by the SPS to determine latitude and longitude. Image filtering and attitude filtering are presented as techniques to further improve the capabilities of the system. Three focal length estimation methods are investigated and compared. The resulting prototype was tested in three dierent locations and the results demonstrate accuracy of the SPS to be within 50 meters over short time intervals. For centuries, the stars have been used as a means of position determination for navigation purposes. With today’s precise clocks and high quality imaging capabilities, it is possible to accurately determine position using similar methods to those used by early navigators. Taking advantage of these capabilities and highly precise star catalogs, the Stellar Positioning System (SPS) was developed as a modern application of ancient celestial navigation techniques. The basic methods necessary for simple position determination will be described along with the hardware used for data collection. 
While the theory has been developed to determine local latitude and longitude position from interstellar angles, 1 there are additional obstacles to overcome when implementing these concepts in hardware. Since position determination requires a large number of measurements, error creeps into the algorithm from several sources such as the physical environment and the hardware itself. Fortunately, many of these errors can be mitigated using a range of techniques. Results indicate that while the SPS is not ready to surpass GPS as the means of navigation on Earth, it can provide accurate position coordinates on other planets or moons that are not GPS-equipped. <s> BIB003
Samaan made other advances for recursive Star-ID in 2005 BIB001. His key to reducing the recursive-mode time was to speed up the selection of stars that ought to be visible given some other visible stars. He presented two methods: one uses Mortari's Spherical Polygon Search (SP-Search) BIB002 BIB003, which in turn uses his k-vector, and the second uses a pre-built catalog of the stars that should be visible if another star is visible, the Star Neighborhood Approach (SNA). The SP-Search uses the k-vector and the current attitude estimate to search for the presence of the predicted stars in the set of the camera's observed stars. Samaan's other method, the SNA, constructs a table ahead of time of the six closest stars to any given star, presuming these stars to be the most likely to be visible if the first star is found. Samaan's method takes O(b)-time to find candidate stars, if b stars are identified by the Lost-In-Space algorithm (LISA). It is uncertain how many successive iterations would be necessary to ensure that all the stars in the given field of view have been found, other than that the number is most likely bounded by O(fb).
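A toy version of the star-neighborhood idea is sketched below: a table of each catalog star's six angularly closest neighbours is built once on the ground, and in recursive mode the candidate stars for the next frame are simply the tabulated neighbours of the stars already identified. The brute-force table construction and the random catalog are illustrative assumptions, not Samaan's implementation.

```python
import numpy as np

def build_neighborhood_table(catalog, k=6):
    """For every catalog star, the indices of its k angularly closest stars.
    Brute-force O(n^2) construction, intended to run once on the ground."""
    cat = np.asarray(catalog, dtype=float)
    cosines = cat @ cat.T                  # pairwise cos(angular distance)
    np.fill_diagonal(cosines, -np.inf)     # exclude each star itself
    return np.argsort(-cosines, axis=1)[:, :k]   # larger cosine = closer

def recursive_candidates(identified_ids, table):
    """Catalog stars worth looking for next, given already-identified stars."""
    candidates = set()
    for sid in identified_ids:
        candidates.update(int(i) for i in table[sid])
    return candidates - set(identified_ids)

# Tiny hypothetical catalog of unit vectors.
rng = np.random.default_rng(1)
catalog = rng.normal(size=(200, 3))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)
table = build_neighborhood_table(catalog)
print(sorted(recursive_candidates({10, 42}, table)))
```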
A Survey on Star Identification Algorithms <s> Star Trackers for Different Applications <s> An approach to star identification based on comparing observed pattern statistics with the precomputed star cataloged statistics is suggested. The identification criterion is based on evaluating a posteriori probabilities of designated star sequences obtained from observing different star fields. Numerical results based on a specific algorithm are presented. A number of references for other approaches are cited. <s> BIB001 </s> A Survey on Star Identification Algorithms <s> Star Trackers for Different Applications <s> A bellcore-style latch with a safety lock feature, having a handle with a shank. The shank has an axial bore, with side apertures extending through the shank into the bore. A lock plug is positioned in the bore of the handle, and has a head and a shaft. The shaft has recessed portions and unrecessed portions, and the lock plug is rotatable to move the lock plug between a locked position, wherein the unrecessed portions are in alignment with the side apertures to cause the ball bearings located therein to protrude from the side apertures, and an unlocked position, wherein the unrecessed portions are in alignment with the side apertures permitting the ball bearings to retract into the side apertures. A spring is used to bias the lock plug to its locked position. An escutcheon with an aperture is provided for receiving the shank portion. The aperture has pockets for the ball bearings. When the handle is in a closed position and the lock plug is in the locked position, the unrecessed portions of the shaft are aligned with the side apertures and the pockets and the ball bearings are protruded into the pockets, thereby preventing turning of the handle. When the lock plug is turned by a lock key to the unlocked position, the recessed portions of the shaft are aligned with the side apertures and the ball bearings can retract from the pockets, thereby permitting the handle to be turned and opened. <s> BIB002 </s> A Survey on Star Identification Algorithms <s> Star Trackers for Different Applications <s> A transmission format signal is recorded on a recording medium capable of recording a large volume of transmission format signals with no occurrence of any redundant part. The recording medium is closely filled with transport packets each of 188 bytes in size, composing together an MPEG2 transport stream in such a manner that no redundant part will exist in each sector of 2048 bytes in size. <s> BIB003 </s> A Survey on Star Identification Algorithms <s> Star Trackers for Different Applications <s> Typical GN&C solutions for precision pointing spacecraft require a pair of orthogonally oriented star sensors for attitude estimation and three-axis gyroscopes to provide angular rate information. The combined sensor suite however, leads to large weight, volume and power consumption. The SG100 solution seeks to eliminate the need for a separate gyroscope sensor by deriving angular velocities based on star measurements alone. The SG100 is capable of acquiring images of stars as faint as visual magnitude of 10.0 at exposure times less than 10ms and contains proprietary robust star identification, centroiding, attitude estimation and filtering algorithms to estimate precision attitude and drift-free angular rates at 100Hz. In this paper we report the development and testing of the SG100 engineering model. 
Results from the radiation testing (total ionizing dose) of critical components, night sky sensitivity, noise and angular rate measurement tests are presented. Test results show that the SG100 exceeds the requirements on the star sensitivity and noise equivalent angles while providing an accurate estimation of the angular rates. <s> BIB004 </s> A Survey on Star Identification Algorithms <s> Star Trackers for Different Applications <s> Two difierent theoretical approaches to the position determination problem are presented, one matrix-based and the other vector-based. Both approaches are designed for implementation in an autonomous Stellar Positioning System (SPS) that uses the following measurement sources: an astronomical camera, a clock, and a set of two inclinometers. Before presenting each of the algorithms, the reference frames utilized are deflned. The two position estimation techniques are then individually presented, followed by a discussion of real-world gravity and geometry model reflnements. <s> BIB005 </s> A Survey on Star Identification Algorithms <s> Star Trackers for Different Applications <s> This paper will discuss the implementation of a Stellar Positioning System (SPS) as well as techniques for error mitigation in experimentation and data post-processing. The hardware used during the development and testing of the SPS will be described. Starcentroiding, star-identification, attitude estimation, and the local gravity vector were used by the SPS to determine latitude and longitude. Image filtering and attitude filtering are presented as techniques to further improve the capabilities of the system. Three focal length estimation methods are investigated and compared. The resulting prototype was tested in three dierent locations and the results demonstrate accuracy of the SPS to be within 50 meters over short time intervals. For centuries, the stars have been used as a means of position determination for navigation purposes. With today’s precise clocks and high quality imaging capabilities, it is possible to accurately determine position using similar methods to those used by early navigators. Taking advantage of these capabilities and highly precise star catalogs, the Stellar Positioning System (SPS) was developed as a modern application of ancient celestial navigation techniques. The basic methods necessary for simple position determination will be described along with the hardware used for data collection. While the theory has been developed to determine local latitude and longitude position from interstellar angles, 1 there are additional obstacles to overcome when implementing these concepts in hardware. Since position determination requires a large number of measurements, error creeps into the algorithm from several sources such as the physical environment and the hardware itself. Fortunately, many of these errors can be mitigated using a range of techniques. Results indicate that while the SPS is not ready to surpass GPS as the means of navigation on Earth, it can provide accurate position coordinates on other planets or moons that are not GPS-equipped. <s> BIB006
Although Star-ID is predominantly used for attitude determination, it can be used for other spacecraft-related tasks. Here are some examples:

1. Star Gyros. With appropriate algorithms, images from star cameras may also be used for estimating the angular velocity of the spacecraft BIB004.
2. Space Surveillance. It can be used for space situational awareness, to estimate the orbits of other visible spacecraft BIB003.
3. Space Navigation. If placed on an interplanetary probe, it could observe visible planets and estimate the location of the probe.
4. Positioning System. If carried on a planet or moon, it could be used to estimate its position on the body when combined with a clock and two inclinometers BIB005 BIB006.

Interesting research has also been carried out to increase star sensor accuracy as well as to simplify the Star-ID problem. Here are three examples:

1. Multiple Fields-of-View Systems. While attitude determination from a single star camera image produces very accurate information about the direction of the camera boresight, the estimate of the rotation about the camera's boresight axis is less accurate. In order to solve this problem, a second star camera is sometimes used. There is another method, which uses a single star camera to record a combination of multiple star images simultaneously. For a two fields-of-view camera, the light is preferentially smeared by the optics (e.g., by adding astigmatism) so that stars from one aperture are smeared in the horizontal direction in the image plane, while light from the other aperture is smeared in the vertical direction. Image filtering algorithms can detect the direction of the smearing and separate the stars according to which aperture they entered. If, however, the Star-ID technique is very robust to the presence of non-stars, the Star-ID algorithm may be run many times on the same image, perhaps on stars from three apertures, all in orthogonal directions BIB002. In these cases it is possible to separate the stars without the need to smear the stars in a given direction.
2. Techniques requiring multiple images as well as attitude maneuvers have been implemented BIB001.
3. Uniform Star Catalogs. In order to develop optimized star sensing and star identification with respect to continuous operation and reliability, the concept of star catalogs with near-uniform angular spacing between stars has been proposed. These catalogs are not characterized by constant magnitude cutoffs. They are reference star catalogs in which the expected number of stars that fall in a given field of view is approximately constant (e.g., 5 or 6) with minimum standard deviation, independently of which region of the sky the sensor optical axis is pointing. A simple greedy thinning that approximates this property is sketched below.
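This greedy thinning is only one simple way to approach a near-uniform reference catalog and is not the published construction; the separation threshold and the random input catalog are illustrative assumptions.

```python
import numpy as np

def thin_catalog(vectors, magnitudes, min_sep_deg=2.0):
    """Keep stars, brightest first, only if they are at least min_sep_deg
    away from every star already kept. The result has roughly uniform
    angular spacing, so the expected number of stars per field of view is
    roughly constant."""
    order = np.argsort(magnitudes)                 # smallest magnitude = brightest
    cos_max = np.cos(np.radians(min_sep_deg))      # closer than min_sep => larger cosine
    kept = []
    for idx in order:
        v = vectors[idx]
        if all(np.dot(v, vectors[j]) < cos_max for j in kept):
            kept.append(int(idx))
    return kept

# Hypothetical all-sky catalog: random directions with random magnitudes.
rng = np.random.default_rng(3)
vecs = rng.normal(size=(5000, 3))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
mags = rng.uniform(2.0, 6.5, size=5000)
print(len(thin_catalog(vecs, mags)))
```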
A Survey on Security Metrics <s> Security metrics is <s> An organized record of actual flaws can be useful to computer system designers, programmers, analysts, administrators, and users. This survey provides a taxonomy for computer program security flaws, with an Appendix that documents 50 actual security flaws. These flaws have all been described previously in the open literature, but in widely separated places. For those new to the field of computer security, they provide a good introduction to the characteristics of security flaws and how they can arise. Because these flaws were not randomly selected from a valid statistical sample of such flaws, we make no strong claims concerning the likely distribution of actual security flaws within the taxonomy. However, this method of organizing security flaw data can help those who have custody of more representative samples to organize them and to focus their efforts to remove and, eventually, to prevent the introduction of security flaws. <s> BIB001 </s> A Survey on Security Metrics <s> Security metrics is <s> More than 100 years ago, Lord Kelvin insightfully observed that measurement is vital to deep knowledge and understanding in physical science. During the last few decades, researchers have made various attempts to develop measures and systems of measurement for computer security with varying degrees of success. This paper provides an overview of the security metrics area and looks at possible avenues of research that could be pursued to advance the state of the art. <s> BIB002 </s> A Survey on Security Metrics <s> Security metrics is <s> System architects need quantitative security metrics to make informed trade-off decisions involving system security. The security metrics need to provide insight on weak points in the system defense, considering characteristics of both the system and its adversaries. To provide such metrics, we formally define the ADversary View Security Evaluation (ADVISE) method. Our approach is to create an executable state-based security model of a system and an adversary that represents how the adversary is likely to attack the system and the results of such an attack. The attack decision function uses information about adversary attack preferences and possible attacks against the system to mimic how the adversary selects the most attractive next attack step. The adversary's decision involves looking ahead some number of attack steps. System architects can use ADVISE to compare the security strength of system architecture variants and analyze the threats posed by different adversaries. We demonstrate the feasibility and benefits of ADVISE using a case study. To produce quantitative model-based security metrics, we have implemented the ADVISE method in a tool that facilitates user input of system and adversary data and automatically generates executable models. <s> BIB003 </s> A Survey on Security Metrics <s> Security metrics is <s> The first steps in analyzing defensive malware are understanding what obfuscations are present in real-world malware binaries, how these obfuscations hinder analysis, and how they can be overcome. While some obfuscations have been reported independently, this survey consolidates the discussion while adding substantial depth and breadth to it. This survey also quantifies the relative prevalence of these obfuscations by using the Dyninst binary analysis and instrumentation tool that was recently extended for defensive malware analysis. 
The goal of this survey is to encourage analysts to focus on resolving the obfuscations that are most prevalent in real-world malware. <s> BIB004 </s> A Survey on Security Metrics <s> Security metrics is <s> The evaluation of computer intrusion detection systems (which we refer to as intrusion detection systems) is an active research area. In this article, we survey and systematize common practices in the area of evaluation of such systems. For this purpose, we define a design space structured into three parts: workload, metrics, and measurement methodology. We then provide an overview of the common practices in evaluation of intrusion detection systems by surveying evaluation approaches and methods related to each part of the design space. Finally, we discuss open issues and challenges focusing on evaluation methodologies for novel intrusion detection systems. <s> BIB005 </s> A Survey on Security Metrics <s> Security metrics is <s> Run-time packers are often used by malware-writers to obfuscate their code and hinder static analysis. The packer problem has been widely studied, and several solutions have been proposed in order to generically unpack protected binaries. Nevertheless, these solutions commonly rely on a number of assumptions that may not necessarily reflect the reality of the packers used in the wild. Moreover, previous solutions fail to provide useful information about the structure of the packer or its complexity. In this paper, we describe a framework for packer analysis and we propose a taxonomy to measure the runtime complexity of packers. We evaluated our dynamic analysis system on two datasets, composed of both off-the-shelf packers and custom packed binaries. Based on the results of our experiments, we present several statistics about the packers complexity and their evolution over time. <s> BIB006
The security metrics problem certainly has received a lot of attention, including from government and industry bodies [Chew et al. ; IATAC 2009; Institute ; Center for Internet Security 2010]. For example, the United States National Institute of Standards and Technology proposed three categories of security metrics, namely implementation, effectiveness, and impact [Chew et al. ]; the Center for Internet Security defined 28 security metrics in another three categories, namely management, operational, and technical [Center for Internet Security 2010]. However, these efforts are almost exclusively geared towards cyber defense administration and operations. They neither discuss how the security metrics may be used as parameters in security modeling (i.e., the theoretical use of security metrics), nor discuss what the gaps are between the state of the art and the ultimate goals and how these gaps may be bridged. This motivates us to survey the knowledge in the field, while hoping to shed some light on the difficulties of the problem and the directions for future research. To the best of our knowledge, this is the first survey of security metrics, despite the fact that there have been some efforts with a much narrower focus (e.g., BIB001 Chandola et al. 2009; BIB005 BIB004 BIB006). The paper is organized as follows. Section 2 discusses the scope and methodology of the survey. Section 3 describes security metrics for measuring system vulnerabilities. Section 4 reviews security metrics for measuring defenses. Section 5 presents security metrics for measuring threats. Section 6 describes security metrics for measuring situations. Section 7 discusses the gaps between the state of the art and the security metrics that are desirable. Section 8 concludes the paper. The term security metrics has a range of meanings, with no widely accepted definition BIB002. Throughout the paper, the term systems is used in a broad sense, in contrast to the term building-blocks, which is used to describe concepts such as cryptographic primitives. The discussion in the present paper applies to two kinds of systems: (i) enterprise systems, which include networked systems of multiple computers/devices (e.g., company networks), clouds, and even the entire cyberspace, and (ii) computer systems, which represent individual computers/devices. This distinction is important because an enterprise system consists of many computers/devices, and measuring the security of an enterprise system naturally requires measuring the security of the individual computers. The term attacking computer represents a computer or IP address from which cyber attacks are launched against others, while noting that the attacking computer itself may be a compromised one (i.e., not owned by a human attacker). The term incident represents a successful attack (e.g., malware infection or data breach). For applications of security metrics, we focus on two uses. The theoretical use is to incorporate security metrics as parameters into security models that may be built to understand security from a more holistic perspective. There have been some initial studies pursuing such models, such as BIB003, which often aim to characterize the evolution of the global security state. The practical use is to guide daily security practice, such as comparing the security of two systems or comparing the security of one system during two different periods of time (e.g., last year vs. the present year).
A Survey on Security Metrics <s> Scope <s> A metric is proposed for quantifying leakage of information about secrets and about how secrets change over time. The metric is used with a model of information flow for probabilistic, interactive systems with adaptive adversaries. The model and metric are implemented in a probabilistic programming language and used to analyze several examples. The analysis demonstrates that adaptivity increases information flow. <s> BIB001 </s> A Survey on Security Metrics <s> Scope <s> Evoked by the increasing need to integrate side-channel countermeasures into security-enabled commercial devices, evaluation labs are seeking a standard approach that enables a fast, reliable and robust evaluation of the side-channel vulnerability of the given products. To this end, standardization bodies such as NIST intend to establish a leakage assessment methodology fulfilling these demands. One of such proposals is the Welch's t-test, which is being put forward by Cryptography Research Inc., and is able to relax the dependency between the evaluations and the device's underlying architecture. In this talk the theoretical background of the test's different flavors are reviewed, and a roadmap is presented that can be followed by the evaluation labs to efficiently and correctly conduct the tests. More precisely, a stable, robust and efficient way to perform the tests at higher orders is expressed. Further, the test is extended to multivariate settings, and details on how to efficiently and rapidly carry out such a multivariate higher-order test are provided. <s> BIB002
We have to limit the scope of the literature that is surveyed in the present paper. This is because every security paper that improves upon a previous result, be it a better defense or a more powerful attack, would be considered relevant in terms of security metrics. However, most security publications did not address the security metrics perspective, perhaps because it is sufficient to show, for example, that a newly proposed defense can defeat an attack that could not be defeated by previous defenses. This leads us to survey the literature that made a reasonable effort at defining security metrics. This selection criterion is certainly subjective, but we hope the readers find the resulting survey and discussion informative. It is worth mentioning that our focus is on security metrics, rather than the specific approaches for analyzing them. We treat the analysis approaches as an orthogonal issue because a security metric may be analyzed via multiple approaches. Even within the scope discussed above, we still need to narrow down our focus. This is because security, and security metrics thereof, can be discussed at multiple levels of abstraction, including systems and building-blocks as mentioned above. For building-blocks, great success has been achieved in measuring the concrete security of cryptographic primitives [Bellare et al. ], while other notable results include metrics for measuring privacy [Dwork ; Shokri et al. ], information flow BIB001, side-channel leakage BIB002, and hardware security. On the other hand, our understanding of security metrics for measuring the security of systems lags far behind, as the present paper shows. One thing that is worth clarifying is that the exposure of cryptographic keys, due to the use of weak randomness in the key generation algorithm or to Heartbleed-like attacks, is treated as a systems security problem. This is plausible because the formal framework for analyzing cryptographic security assumes that the cryptographic keys are not exposed. The aforementioned discrepancy between the metrics for systems security and the metrics for building-blocks security is unacceptable because the former is often needed and used in the process of business decision-making. This leads us to focus on systematizing the underdeveloped field of systems security metrics. The importance of this underdeveloped field can be seen from the efforts that have been made by government and industrial bodies [IATAC 2009; Institute ; Center for Internet Security 2010]. This prompts us to consider both these metrics and those that appeared in academic venues.
A Survey on Security Metrics <s> Measuring system users' vulnerabilities <s> In this paper we present the results of a roleplay survey instrument administered to 1001 online survey respondents to study both the relationship between demographics and phishing susceptibility and the effectiveness of several anti-phishing educational materials. Our results suggest that women are more susceptible than men to phishing and participants between the ages of 18 and 25 are more susceptible to phishing than other age groups. We explain these demographic factors through a mediation analysis. Educational materials reduced users' tendency to enter information into phishing webpages by 40% percent; however, some of the educational materials we tested also slightly decreased participants' tendency to click on legitimate links. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring system users' vulnerabilities <s> The home computer user is often said to be the weakest link in computer security. They do not always follow security advice, and they take actions, as in phishing, that compromise themselves. In general, we do not understand why users do not always behave safely, which would seem to be in their best interest. This paper reviews the literature of surveys and studies of factors that influence security decisions for home computer users. We organize the review in four sections: understanding of threats, perceptions of risky behavior, efforts to avoid security breaches and attitudes to security interventions. We find that these studies reveal a lot of reasons why current security measures may not match the needs or abilities of home computer users and suggest future work needed to inform how security is delivered to this user group. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring system users' vulnerabilities <s> The success of malicious software (malware) depends upon both technical and human factors. The most security conscious users are vulnerable to zero-day exploits; the best security mechanisms can be circumvented by poor user choices. While there has been significant research addressing the technical aspects of malware attack and defense, there has been much less research reporting on how human behavior interacts with both malware and current malware defenses. In this paper we describe a proof-of-concept field study designed to examine the interactions between users, anti-virus (anti-malware) software, and malware as they occur on deployed systems. The 4-month study, conducted in a fashion similar to the clinical trials used to evaluate medical interventions, involved 50 subjects whose laptops were instrumented to monitor possible infections and gather data on user behavior. Although the population size was limited, this initial study produced some intriguing, non-intuitive insights into the efficacy of current defenses, particularly with regards to the technical sophistication of end users. We assert that this work shows the feasibility and utility of testing security software through long-term field studies with greater ecological validity than can be achieved through other means. <s> BIB003
One metric is a user's susceptibility to phishing attacks BIB001. This online study of 1,001 users shows that phishing education can reduce users' susceptibility to phishing attacks and that young people (18 to 25 years old) are more susceptible to phishing attacks than other age groups. This metric is measured via the false-positive rate at which a user treats a legitimate email or website as a phish, and the false-negative rate at which a user treats a phishing email or website as legitimate and subsequently clicks the link in the email or submits information to the website.

Another metric is a user's susceptibility to malware infection [Lalonde Levesque et al. ]. This clinical study of the interactions between human users, anti-malware software, and malware involves 50 users, who monitor their laptops for possible infections during a period of 4 months. During this period of time, 38% of the users are found to be exposed to malware, which indicates the value of the anti-malware tool (because these laptops would have been infected if anti-malware software had not been used). The study also shows that user demographics (e.g., gender, age) are not significant factors in determining a user's susceptibility to malware infection, which contradicts the aforementioned finding regarding users' susceptibility to phishing attacks BIB001. Nevertheless, it is interesting to note that (i) users installing many applications are more susceptible to malware infections, because the chance of installing malicious applications is higher, and (ii) users visiting many websites are more susceptible to malware infections, because some websites are malicious [Lalonde Levesque et al. ].

It is important to understand and measure the degrees of users' susceptibility to each individual class of attacks and to multiple classes of attacks collectively (e.g., multiple forms of social-engineering attacks). For this purpose, research needs to be conducted to quantify how these susceptibilities depend upon factors that affect users' security decisions (e.g., personality traits such as high vs. low attention control). This area is little understood BIB002 BIB001 BIB003, but the reward is high. For the theoretical use of security metrics, these metrics can be incorporated into security models as parameters to model (e.g.) the time or effort needed for an attacker to exploit user vulnerabilities to compromise a computer or to penetrate an enterprise system. For the practical use of security metrics, these metrics can be used to tailor defenses for individual users (e.g., a careless employee may have to go through a security proxy in order to access Internet websites). Being able to measure these security metrics is as important as being able to measure an individual's susceptibility to cancer due to (e.g.) their genes: just as the ability to quantify an individual's predisposition to diseases can lead to proactive treatment, the ability to quantify security can lead to tailored and more effective defenses.
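To make the measurement of user susceptibility concrete, the following Python sketch shows how the false-positive and false-negative rates defined above could be computed from a user's responses in a roleplay-style study. The trial data and field names are hypothetical and not drawn from the cited studies.

```python
# Hypothetical roleplay results for one user: each trial records whether the
# stimulus (email/website) was actually a phish and whether the user treated
# it as legitimate (e.g., clicked the link or submitted information).
trials = [
    {"is_phish": True,  "treated_as_legitimate": True},   # fell for a phish
    {"is_phish": True,  "treated_as_legitimate": False},  # correctly rejected a phish
    {"is_phish": False, "treated_as_legitimate": True},   # correctly trusted a legitimate site
    {"is_phish": False, "treated_as_legitimate": False},  # rejected a legitimate site
]

def phishing_susceptibility(trials):
    phish = [t for t in trials if t["is_phish"]]
    legit = [t for t in trials if not t["is_phish"]]
    # False-negative rate: a phish treated as legitimate (the dangerous error).
    fn_rate = sum(t["treated_as_legitimate"] for t in phish) / len(phish)
    # False-positive rate: a legitimate email/website treated as a phish.
    fp_rate = sum(not t["treated_as_legitimate"] for t in legit) / len(legit)
    return fn_rate, fp_rate

fn_rate, fp_rate = phishing_susceptibility(trials)
print(f"false-negative rate (susceptibility): {fn_rate:.2f}")
print(f"false-positive rate: {fp_rate:.2f}")
```

In a real study, such per-user rates would be aggregated across demographic groups or training conditions to examine factors such as age or education.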
A Survey on Security Metrics <s> Measuring password vulnerabilities <s> In this paper we attempt to determine the effectiveness of using entropy, as defined in NIST SP800-63, as a measurement of the security provided by various password creation policies. This is accomplished by modeling the success rate of current password cracking techniques against real user passwords. These data sets were collected from several different websites, the largest one containing over 32 million passwords. This focus on actual attack methodologies and real user passwords quite possibly makes this one of the largest studies on password security to date. In addition we examine what these results mean for standard password creation policies, such as minimum password length, and character set requirements. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring password vulnerabilities <s> We report on the largest corpus of user-chosen passwords ever studied, consisting of anonymized password histograms representing almost 70 million Yahoo! users, mitigating privacy concerns while enabling analysis of dozens of subpopulations based on demographic factors and site usage characteristics. This large data set motivates a thorough statistical treatment of estimating guessing difficulty by sampling from a secret distribution. In place of previously used metrics such as Shannon entropy and guessing entropy, which cannot be estimated with any realistically sized sample, we develop partial guessing metrics including a new variant of guesswork parameterized by an attacker's desired success rate. Our new metric is comparatively easy to approximate and directly relevant for security engineering. By comparing password distributions with a uniform distribution which would provide equivalent security against different forms of guessing attack, we estimate that passwords provide fewer than 10 bits of security against an online, trawling attack, and only about 20 bits of security against an optimal offline dictionary attack. We find surprisingly little variation in guessing difficulty; every identifiable group of users generated a comparably weak password distribution. Security motivations such as the registration of a payment card have no greater impact than demographic factors such as age and nationality. Even proactive efforts to nudge users towards better password choices with graphical feedback make little difference. More surprisingly, even seemingly distant language communities choose the same weak passwords and an attacker never gains more than a factor of 2 efficiency gain by switching from the globally optimal dictionary to a population-specific lists. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring password vulnerabilities <s> Text-based passwords remain the dominant authentication method in computer systems, despite significant advancement in attackers' capabilities to perform password cracking. In response to this threat, password composition policies have grown increasingly complex. However, there is insufficient research defining metrics to characterize password strength and using them to evaluate password-composition policies. In this paper, we analyze 12,000 passwords collected under seven composition policies via an online study. We develop an efficient distributed method for calculating how effectively several heuristic password-guessing algorithms guess passwords. 
Leveraging this method, we investigate (a) the resistance of passwords created under different conditions to guessing, (b) the performance of guessing algorithms under different training sets, (c) the relationship between passwords explicitly created under a given composition policy and other passwords that happen to meet the same requirements, and (d) the relationship between guess ability, as measured with password-cracking algorithms, and entropy estimates. Our findings advance understanding of both password-composition policies and metrics for quantifying password security. <s> BIB003 </s> A Survey on Security Metrics <s> Measuring password vulnerabilities <s> We propose several possible metrics for measuring the strength of an individual password or any other secret drawn from a known, skewed distribution. In contrast to previous ad hoc approaches which rely on textual properties of passwords, we consider the problem without any knowledge of password structure. This enables rating the strength of a password given a large sample distribution without assuming anything about password semantics. We compare the results of our generic metrics against those of the NIST metrics and other previous "entropy-based" metrics for a large password dataset, which suggest over-fitting in previous metrics. <s> BIB004 </s> A Survey on Security Metrics <s> Measuring password vulnerabilities <s> Parameterized password guessability--how many guesses a particular cracking algorithm with particular training data would take to guess a password--has become a common metric of password security. Unlike statistical metrics, it aims to model real-world attackers and to provide per-password strength estimates. We investigate how cracking approaches often used by researchers compare to real-world cracking by professionals, as well as how the choice of approach biases research conclusions. ::: ::: We find that semi-automated cracking by professionals outperforms popular fully automated approaches, but can be approximated by combining multiple such approaches. These approaches are only effective, however, with careful configuration and tuning; in commonly used default configurations, they underestimate the real-world guessability of passwords. We find that analyses of large password sets are often robust to the algorithm used for guessing as long as it is configured effectively. However, cracking algorithms differ systematically in their effectiveness guessing passwords with certain common features (e.g., character substitutions). This has important implications for analyzing the security of specific password characteristics or of individual passwords (e.g., in a password meter or security audit). Our results highlight the danger of relying only on a single cracking algorithm as a measure of password strength and constitute the first scientific evidence that automated guessing can often approximate guessing by professionals. <s> BIB005
The parameterized password guessability metric measures the number of guesses an attacker with a particular cracking algorithm (i.e., a particular threat model) needs to make before recovering a password BIB001 BIB002 BIB003 BIB005. This metric is easier to use than earlier metrics such as password entropy, which cannot tell which passwords are easier to crack than others, and statistical password guessability BIB004 BIB002 BIB003, which is more appropriate for evaluating passwords as a whole rather than individually. Parameterized password guessability should be used with caution when only a single password cracking algorithm is used, because different cracking algorithms can have very different strategies with varying results [Ur et al. ]. When the defender is uncertain about the threat model, multiple cracking strategies need to be considered. For both theoretical and practical uses of password vulnerability metrics, we may need to consider the worst-case and/or average-case parameterized password guessability. This is one of the few sub-categories of security metrics that are relatively well understood.
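As a minimal illustration, the following Python sketch computes a parameterized guess number under toy wordlist-based cracking strategies. The wordlists and their ordering are illustrative assumptions; real evaluations rely on professional cracking tools with multiple, carefully tuned configurations.

```python
# Toy illustration of parameterized password guessability: the guess number is
# the rank at which a given cracking strategy would try the password. The
# wordlists below are illustrative; real studies use professional cracking
# tools (and multiple configurations) to approximate real-world attackers.
strategies = {
    "popularity_list": ["123456", "password", "qwerty", "letmein", "dragon"],
    "mangled_words":   ["p@ssword", "Password1", "qwerty123", "letmein!"],
}

def guess_number(password, guess_order):
    """1-based number of guesses this strategy needs, or None if never guessed."""
    for rank, candidate in enumerate(guess_order, start=1):
        if candidate == password:
            return rank
    return None

def min_guess_number(password, strategies):
    """Most pessimistic (from the defender's view) guess number across strategies."""
    ranks = [r for r in (guess_number(password, g) for g in strategies.values())
             if r is not None]
    return min(ranks) if ranks else None

for pw in ["Password1", "tr0ub4dor&3"]:
    print(pw, "->", min_guess_number(pw, strategies))
```

Taking the minimum over several strategies reflects the caution above: relying on a single cracking algorithm can substantially underestimate real-world guessability.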
A Survey on Security Metrics <s> Measuring software vulnerability temporal characteristics. Temporal characteristics of software vulnerabilities include their evolution and lifetime. <s> Targeted attacks on civil society and nongovernmental organizations have gone underreported despite the fact that these organizations have been shown to be frequent targets of these attacks. In this paper, we shed light on targeted malware attacks faced by these organizations by studying malicious e-mails received by 10 civil society organizations (the majority of which are from groups related to China and Tibet issues) over a period of 4 years. ::: ::: Our study highlights important properties of malware threats faced by these organizations with implications on how these organizations defend themselves and how we quantify these threats. We find that the technical sophistication of malware we observe is fairly low, with more effort placed on socially engineering the e-mail content. Based on this observation, we develop the Targeted Threat Index (TTI), a metric which incorporates both social engineering and technical sophistication when assessing the risk of malware threats. We demonstrate that this metric is more effective than simple technical sophistication for identifying malware threats with the highest potential to successfully compromise victims. We also discuss how education efforts focused on changing user behaviour can help prevent compromise. For two of the three Tibetan groups in our study simple steps such as avoiding the use of email attachments could cut document-based malware threats delivered through e-mail that we observed by up to 95%. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring software vulnerability temporal characteristics. Temporal characteristics of software vulnerabilities include their evolution and lifetime. <s> Repressive nation-states have long monitored telecommunications to keep tabs on political dissent. The Internet and online social networks, however, pose novel technical challenges to this practice, even as they open up new domains for surveillance. We analyze an extensive collection of suspicious files and links targeting activists, opposition members, and nongovernmental organizations in the Middle East over the past several years. We find that these artifacts reflect efforts to attack targets' devices for the purposes of eavesdropping, stealing information, and/or unmasking anonymous users. We describe attack campaigns we have observed in Bahrain, Syria, and the United Arab Emirates, investigating attackers, tools, and techniques. In addition to off-the-shelf remote access trojans and the use of third-party IP-tracking services, we identify commercial spyware marketed exclusively to governments, including Gamma's FinSpy and Hacking Team's Remote Control System (RCS). We describe their use in Bahrain and the UAE, and map out the potential broader scope of this activity by conducting global scans of the corresponding command-and-control (C&C) servers. Finally, we frame the real-world consequences of these campaigns via strong circumstantial evidence linking hacking to arrests, interrogations, and imprisonment. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring software vulnerability temporal characteristics. Temporal characteristics of software vulnerabilities include their evolution and lifetime. <s> The Heartbleed vulnerability took the Internet by surprise in April 2014. 
The vulnerability, one of the most consequential since the advent of the commercial Internet, allowed attackers to remotely read protected memory from an estimated 24--55% of popular HTTPS sites. In this work, we perform a comprehensive, measurement-based analysis of the vulnerability's impact, including (1) tracking the vulnerable population, (2) monitoring patching behavior over time, (3) assessing the impact on the HTTPS certificate ecosystem, and (4) exposing real attacks that attempted to exploit the bug. Furthermore, we conduct a large-scale vulnerability notification experiment involving 150,000 hosts and observe a nearly 50% increase in patching by notified hosts. Drawing upon these analyses, we discuss what went well and what went poorly, in an effort to understand how the technical community can respond more effectively to such events in the future. <s> BIB003 </s> A Survey on Security Metrics <s> Measuring software vulnerability temporal characteristics. Temporal characteristics of software vulnerabilities include their evolution and lifetime. <s> Infrastructure as a Service (IaaS) cloud has been attracting more and more customers as it provides the highest level of flexibility by offering configurable virtual machines (VMs) and computing infrastructures. Public VM images are usually available for customers to customize and launch. However, the 1 to N mapping between VM images and running instances in IaaS makes vulnerabilities propagate rapidly across the entire public cloud. Besides, IaaS cloud naturally comes with a larger and more stable attack surface and more concentrated target resources than traditional surroundings. In this paper, we first identify the threat of exploiting prevalent vulnerabilities over public IaaS cloud with an empirical study in Amazon EC2. We find that attackers can compromise a considerable number of VMs with trivial cost. We then do a qualitative cost-effectiveness analysis of this threat. Our main result is a two-fold observation: in IaaS cloud, exploiting prevalent vulnerabilities is much more cost-effective than traditional in-house computing environment, therefore attackers have stronger incentive; Fortunately, on the other hand, cloud defenders (cloud providers and customers) also have much lower cost-loss ratio than in traditional environment, therefore they can be more effective for defending attacks. We then build a game-theoretic model and conduct a risk-gain analysis to compare exploiting and patching strategies under cloud and traditional computing environments. Our modeling indicates that under cloud environment, both attack and defense become less cost-effective as time goes by, and the earlier actioner can be more rewarding. We propose countermeasures against such threat in order to bridge the gap between current security situation and defending mechanisms. To our best knowledge, we are the first to analyze and model the threat with prevalent known-vulnerabilities in public cloud. <s> BIB004 </s> A Survey on Security Metrics <s> Measuring software vulnerability temporal characteristics. Temporal characteristics of software vulnerabilities include their evolution and lifetime. <s> In recent years, the number of software vulnerabilities discovered has grown significantly. This creates a need for prioritizing the response to new disclosures by assessing which vulnerabilities are likely to be exploited and by quickly ruling out the vulnerabilities that are not actually exploited in the real world. 
We conduct a quantitative and qualitative exploration of the vulnerability-related information disseminated on Twitter. We then describe the design of a Twitter-based exploit detector, and we introduce a threat model specific to our problem. In addition to response prioritization, our detection techniques have applications in risk modeling for cyber-insurance and they highlight the value of information provided by the victims of attacks. <s> BIB005 </s> A Survey on Security Metrics <s> Measuring software vulnerability temporal characteristics. Temporal characteristics of software vulnerabilities include their evolution and lifetime. <s> Vulnerability exploits remain an important mechanism for malware delivery, despite efforts to speed up the creation of patches and improvements in software updating mechanisms. Vulnerabilities in client applications (e.g., Browsers, multimedia players, document readers and editors) are often exploited in spear phishing attacks and are difficult to characterize using network vulnerability scanners. Analyzing their lifecycle requires observing the deployment of patches on hosts around the world. Using data collected over 5 years on 8.4 million hosts, available through Symantec's WINE platform, we present the first systematic study of patch deployment in client-side vulnerabilities. We analyze the patch deployment process of 1,593 vulnerabilities from 10 popular client applications, and we identify several new threats presented by multiple installations of the same program and by shared libraries distributed with several applications. For the 80 vulnerabilities in our dataset that affect code shared by two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). Furthermore, as the patching rates differ considerably among applications, many hosts patch the vulnerability in one application but not in the other one. We demonstrate two novel attacks that enable exploitation by invoking old versions of applications that are used infrequently, but remain installed. We also find that the median fraction of vulnerable hosts patched when exploits are released is at most 14%. Finally, we show that the patching rate is affected by user-specific and application-specific factors, for example, hosts belonging to security analysts and applications with an automated updating mechanism have significantly lower median times to patch. <s> BIB006
Measuring evolution of software vulnerabilities. The historical vulnerability metric measures the degree to which a system was vulnerable, or the number of vulnerabilities it contained, in the past. The future vulnerability metric measures the number of vulnerabilities that will be discovered during a future period of time. Interesting variants of these metrics include historical exploited vulnerabilities, namely the number of vulnerabilities that were exploited in the past, and future exploited vulnerabilities, namely the number of vulnerabilities that will be exploited during a future period of time. The tendency-to-be-exploited metric measures the tendency that a vulnerability may be exploited, where the "tendency" may be computed from (e.g.) the information that was posted on Twitter before vulnerability disclosures BIB005. This metric may be used to prioritize vulnerabilities for patching.

Measuring software vulnerability lifetime. Ideally, each vulnerability is patched immediately upon its disclosure. In practice, despite the enforcement of patching policies, some vulnerabilities may never get patched. The vulnerability lifetime metric measures how long it takes to patch a vulnerability after its disclosure. Different vulnerability lifetimes may be exhibited at the client end, the server end, and the cloud end.

Client-end vulnerabilities are often exploited to launch targeted attacks (e.g., spear-phishing) BIB001 BIB002. These vulnerabilities are hard to patch completely because of their prevalence (i.e., a vulnerability may appear in multiple programs) BIB006. A study conducted in 2010 [Frei and Kristensen 2010] shows that 50% of the 2 million Windows users in question were exposed to 297 vulnerabilities over a period of 12 months. A more recent study BIB006 shows that despite the presence of 13 automated patching mechanisms (other than the Windows update), the median fraction of computers that are patched when exploits are released is no greater than 14%, and the median time for patching 50% of vulnerable computers is 45 days after disclosure.

One would think that server-end vulnerabilities are patched more rapidly than client-end ones. Let us consider the disclosure of two severe vulnerabilities in OpenSSL. First, for the pseudorandom-number-generation vulnerability in Debian Linux's OpenSSL, a study [Yilek et al. ] shows that 30% of the computers that were vulnerable 4 days after disclosure remained vulnerable almost 180 days later (i.e., 184 days after disclosure). This is somewhat surprising because the private keys generated by the vulnerable computers might have been exposed to the attacker. Second, for the Heartbleed vulnerability in OpenSSL, which can be remotely exploited to read a vulnerable server's sensitive memory that may contain cryptographic keys and passwords, a study BIB003 estimates that 24%-55% of the HTTPS servers in Alexa's Top 1 Million websites were initially vulnerable. Moreover, 11% of these HTTPS servers remained vulnerable 2 days after disclosure, and 3% were still vulnerable 60 days after disclosure.

One may think that vulnerabilities in the cloud are well managed; however, cloud users can run public virtual machine images (in addition to their own images), which are themselves a source of vulnerabilities.
A study BIB004 shows that many of the 6,000 public Amazon Machine Images (AMIs) offered by Amazon Web Services (AWS) Elastic Compute Cloud (EC2) contain a considerable number of vulnerabilities, and that Amazon typically notifies cloud users about vulnerabilities 14 days after their disclosure.

Summarizing the temporal metrics discussed above, we observe that defenders need to do a substantially better job at reducing the lifetime of software vulnerabilities after disclosure. Because vulnerability lifetime may never be reduced to 0, it is important to know the vulnerability vector V(t), or its components v_i(t), at any time t. For using vulnerability lifetime in security modeling, we need to know its statistical distribution and how the distribution depends on various factors.
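As an illustration of how vulnerability lifetime could be estimated in practice, the following Python sketch computes the fraction of hosts that remain vulnerable at various times after disclosure from hypothetical per-host patch dates. Real studies derive such data from telemetry or repeated Internet-wide scans; the dates below are made up for illustration.

```python
from datetime import date, timedelta

# Hypothetical data: the disclosure date of a vulnerability and, for each
# monitored host, the date it was observed patched (None = never observed patched).
disclosure = date(2014, 4, 7)
patch_dates = [date(2014, 4, 9), date(2014, 4, 20), None, date(2014, 6, 15), None]

def fraction_still_vulnerable(days_after_disclosure, disclosure, patch_dates):
    """Fraction of hosts not yet patched at the given number of days after disclosure."""
    cutoff = disclosure + timedelta(days=days_after_disclosure)
    unpatched = sum(1 for d in patch_dates if d is None or d > cutoff)
    return unpatched / len(patch_dates)

for t in (2, 30, 60, 180):
    frac = fraction_still_vulnerable(t, disclosure, patch_dates)
    print(f"{t:3d} days after disclosure: {frac:.0%} of hosts still vulnerable")
```

Sweeping over many values of t yields an empirical survival curve for vulnerability lifetime, which is the kind of distribution a security model would take as input.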
A Survey on Security Metrics <s> Measuring software vulnerability severity. <s> The automatic patch-based exploit generation problem is: given a program P and a patched version of the program P', automatically generate an exploit for the potentially unknown vulnerability present in P but fixed in P'. In this paper, we propose techniques for automatic patch-based exploit generation, and show that our techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update. Although our techniques may not work in all cases, a fundamental tenant of security is to conservatively estimate the capabilities of attackers. Thus, our results indicate that automatic patch-based exploit generation should be considered practical. One important security implication of our results is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update, may allow attackers who receive the patch first to compromise the significant fraction of vulnerable hosts who have not yet received the patch. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring software vulnerability severity. <s> More than 100 years ago, Lord Kelvin insightfully observed that measurement is vital to deep knowledge and understanding in physical science. During the last few decades, researchers have made various attempts to develop measures and systems of measurement for computer security with varying degrees of success. This paper provides an overview of the security metrics area and looks at possible avenues of research that could be pursued to advance the state of the art. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring software vulnerability severity. <s> In this paper, we question the common practice of assigning security impact ratings to OS updates. Specifically, we present evidence that ranking updates by their perceived security importance, in order to defer applying some updates, exposes systems to significant risk. ::: ::: We argue that OS vendors and security groups should not focus on security updates to the detriment of other updates, but should instead seek update technologies that make it feasible to distribute updates for all disclosed OS bugs in a timely manner. <s> BIB003 </s> A Survey on Security Metrics <s> Measuring software vulnerability severity. <s> (U.S.) Rule-based policies for mitigating software risk suggest using the CVSS score to measure the risk of an individual vulnerability and act accordingly. A key issue is whether the ‘danger’ score does actually match the risk of exploitation in the wild, and if and how such a score could be improved. To address this question, we propose using a case-control study methodology similar to the procedure used to link lung cancer and smoking in the 1950s. A case-control study allows the researcher to draw conclusions on the relation between some risk factor (e.g., smoking) and an effect (e.g., cancer) by looking backward at the cases (e.g., patients) and comparing them with controls (e.g., randomly selected patients with similar characteristics). The methodology allows us to quantify the risk reduction achievable by acting on the risk factor. 
We illustrate the methodology by using publicly available data on vulnerabilities, exploits, and exploits in the wild to (1) evaluate the performances of the current risk factor in the industry, the CVSS base score; (2) determine whether it can be improved by considering additional factors such the existence of a proof-of-concept exploit, or of an exploit in the black markets. Our analysis reveals that (a) fixing a vulnerability just because it was assigned a high CVSS score is equivalent to randomly picking vulnerabilities to fix; (b) the existence of proof-of-concept exploits is a significantly better risk factor; (c) fixing in response to exploit presence in black markets yields the largest risk reduction. <s> BIB004
This metric measures the degree of damage that can be caused by the exploitation of a vulnerability. A popular example is the CVSS score, which considers the following three factors [Forum of Incident Response and Security Teams (FIRST)]. The base score reflects the vulnerability's time- and environment-invariant characteristics, such as its access condition, the complexity of exploiting it, and the impact once exploited. The temporal and environmental scores reflect its time- and environment-dependent characteristics. Another example is the availability of exploits in black markets [Bilge and Dumitras ], which is interesting because the public release of vulnerabilities is often followed by an increase in exploits. However, many vulnerabilities have the same CVSS scores BIB002 BIB004. The practice of using CVSS scores (or base scores) to prioritize the patching of vulnerabilities has been considered both harmful, because information about low-severity bugs can lead to the development of high-severity attacks BIB004 BIB003 BIB001, and ineffective, because patching a vulnerability solely because of its high CVSS score is no more effective than patching vulnerabilities at random BIB004. For practical use, it would be ideal if we could precisely define the intuitive metric of patching priority. For theoretical use, it would be ideal if we could quantify the global damage of a vulnerability to an enterprise system upon its exploitation, which may in turn help measure the patching priority.
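The following Python sketch illustrates one hypothetical way to operationalize a patching-priority metric by combining the CVSS base score with exploit evidence, reflecting the finding above that exploit availability is a stronger risk factor than the CVSS score alone. The weights and vulnerability records are illustrative assumptions, not taken from any cited study.

```python
# Hypothetical prioritization sketch: rank vulnerabilities for patching by
# combining the CVSS base score with evidence of exploitation. The weights are
# illustrative assumptions chosen only to show the structure of such a rule.
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "poc_exploit": False, "exploited_in_wild": False},
    {"id": "CVE-B", "cvss": 6.5, "poc_exploit": True,  "exploited_in_wild": False},
    {"id": "CVE-C", "cvss": 5.0, "poc_exploit": True,  "exploited_in_wild": True},
]

def priority(v):
    score = v["cvss"] / 10.0      # normalized severity
    if v["poc_exploit"]:
        score += 1.0              # a proof-of-concept exploit exists
    if v["exploited_in_wild"]:
        score += 2.0              # observed exploitation dominates the ranking
    return score

for v in sorted(vulns, key=priority, reverse=True):
    print(v["id"], round(priority(v), 2))
```

Under this illustrative rule, a medium-severity vulnerability that is exploited in the wild outranks an unexploited vulnerability with a higher base score, which is consistent with the case-control findings discussed above.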
A Survey on Security Metrics <s> Measuring the individual detection power. <s> We present a method of analysis for evaluating intrusion detection systems. The method can be used to compare the performance of intrusion detectors, to evaluate performance goals for intrusion detectors, and to determine the best configuration of an intrusion detector for a given environment. The method uses a decision analysis that integrates and extends ROC (receiver operating characteristics) and cost analysis methods to provide an expected cost metric. We provide general results and illustrate the method in several numerical examples that cover a range of detectors that meet a performance goal and two actual detectors operating in a realistic environment. We demonstrate that, contrary to common advice, the value of an intrusion detection system and the optimal operation of that system depend not only on the system's ROC curve, but also on cost metrics and the hostility of the operating environment as summarized by the probability of intrusion. Extensions of the method are outlined, and conclusions are drawn. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the individual detection power. <s> The evaluation of computer intrusion detection systems (which we refer to as intrusion detection systems) is an active research area. In this article, we survey and systematize common practices in the area of evaluation of such systems. For this purpose, we define a design space structured into three parts: workload, metrics, and measurement methodology. We then provide an overview of the common practices in evaluation of intrusion detection systems by surveying evaluation approaches and methods related to each part of the design space. Finally, we discuss open issues and challenges focusing on evaluation methodologies for novel intrusion detection systems. <s> BIB002
For instrument-based attack detection, the detection time metric measures the delay between the time t_0 at which a compromised computer sends its first scan packet and the time t at which a scan packet is observed by the instrument. This metric depends on several factors, including the malware spreading model, the distribution of vulnerable computers, the size of the monitored IP address space, and the locations of the instrument.

For intrusion detection systems (including anomaly-based, host-based, and network-based systems), the basic metrics for measuring detection power are the following. The true-positive rate, denoted by Pr(A|I), is the probability that an intrusion (I) is detected as an alert that indicates an attack (A). The false-negative rate, denoted by Pr(¬A|I), is the probability that an intrusion is not detected as an attack. The true-negative rate, denoted by Pr(¬A|¬I), is the probability that a non-intrusion is not detected as an attack. The false-positive rate, also called the false-alarm rate and denoted by Pr(A|¬I), is the probability that a non-intrusion is detected as an attack. Note that Pr(A|I) + Pr(¬A|I) = Pr(¬A|¬I) + Pr(A|¬I) = 1. The receiver operating characteristic (ROC) curve reflects the dependence of the true-positive rate Pr(A|I) on the false-positive rate Pr(A|¬I), and therefore can help determine the tradeoff between the two.

When using the preceding metrics to compare the effectiveness of intrusion detection systems, care must be taken. One issue is the event unit, such as packet vs. flow in the context of network-based intrusion detection [Gu et al. ]. Another issue is the base rate of intrusions, which can lead to misleading results if not adequately treated, a phenomenon known as the base-rate fallacy [Axelsson ]. In order to deal with the base-rate fallacy, one can treat the input to an intrusion detection system as a stream I of 0/1 random variables (0 indicates benign or normal, 1 indicates malicious or abnormal), and the output of the intrusion detection system as a stream O of 0/1 random variables (0 indicates no alert, 1 indicates an alert). Let H(I) and H(O) denote the entropy of I and O, respectively. The mutual information between I and O, namely I(I, O) = H(I) − H(I|O), indicates the amount of uncertainty about I that is reduced after knowing O. The intrusion detection capability metric is defined as the normalization of I(I, O) with respect to H(I), which reflects the base rate [Gu et al. ].

Intrusion detection may also be measured via a cost metric in a decision-theoretic framework BIB001, where the cost includes both the operational cost of intrusion detection and the damage caused by false negatives. These metrics were unified in [Cardenas et al. 2006] into a single framework of multi-criteria optimization, which allows fair comparisons between intrusion detection systems in different operational environments. We refer to a recent survey BIB002 for more details. The metrics mentioned above are mainly geared towards the practical use of measuring the detection power of an individual detection system and comparing the detection power of two detection systems.
When modeling intrusion detection systems as a component in a broader or holistic security model, we may need to define and measure the detection probability metric, namely the conditional probability that a computer that is compromised at time t is also detected as compromised at time t, i.e., Pr(o_i(t) = 1 | s_i(t) = 1). This would require us to study how this probability depends on other factors.
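To illustrate the intrusion detection capability metric and the base-rate fallacy, the following Python sketch computes C_ID = I(I;O)/H(I) and the probability that an alert corresponds to a real intrusion, given an assumed base rate, true-positive rate, and false-positive rate. The numeric values are illustrative assumptions.

```python
from math import log2

def entropy(probs):
    """Shannon entropy of a probability distribution (zero terms are skipped)."""
    return -sum(p * log2(p) for p in probs if p > 0)

def intrusion_detection_capability(base_rate, tpr, fpr):
    """C_ID = I(I;O) / H(I) for a binary input stream I and alert stream O."""
    p1, p0 = base_rate, 1.0 - base_rate
    # Joint distribution over (I, O) derived from base rate, TPR, and FPR.
    joint = {(1, 1): p1 * tpr, (1, 0): p1 * (1 - tpr),
             (0, 1): p0 * fpr, (0, 0): p0 * (1 - fpr)}
    h_i = entropy([p0, p1])
    p_o1 = joint[(1, 1)] + joint[(0, 1)]
    h_o = entropy([1.0 - p_o1, p_o1])
    h_io = entropy(joint.values())        # joint entropy H(I, O)
    mutual_info = h_i + h_o - h_io        # I(I;O) = H(I) + H(O) - H(I,O)
    return mutual_info / h_i

# Illustration of the base-rate fallacy: with a low base rate, even a detector
# with 99% TPR and 1% FPR has limited capability, and most alerts are false alarms.
base_rate, tpr, fpr = 0.001, 0.99, 0.01
print("C_ID =", round(intrusion_detection_capability(base_rate, tpr, fpr), 3))
pr_intrusion_given_alert = (base_rate * tpr) / (base_rate * tpr + (1 - base_rate) * fpr)
print("Pr(intrusion | alert) =", round(pr_intrusion_given_alert, 3))
```

With these illustrative numbers, fewer than one in ten alerts corresponds to a real intrusion, which is exactly the misleading situation the base-rate-aware metric is designed to expose.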
A Survey on Security Metrics <s> Measuring the collective detection power. <s> Attackers continually innovate and craft attacks that penetrate existing defenses. New security product purchasing decisions are key in order to keep organizations as secure as possible. Current information available to inform these decisions is often limited to individual security product detection/blocking rates for some test set of attacks. Actual security performance, however, depends on how a security product performs in the context of an organization’s existing security products. Even a security product that tests well on its own may be completely redundant when deployed into an existing environment. We propose a new metric that measures the total security granted by a combination of security products. Also, this metric makes the computation of the added benefit of an additional security product easy. We take the results of each individual security product parsing a certain data set and then, take the union of the results of all security products deployed at that organization. Our metric is the attacks in this union divided by the total attacks in the data set or, in other words, the total detection rate achieved by the whole system. This metric can be computed using existing evaluation techniques and provides a more accurate overall picture of the security posture of an organization as well as a way to measure the real contribution of a specific security product in the context of other security layers. ∗This material is based on research partially sponsored by the National Science Foundation (NSF) under CCF grant 0950373. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the collective detection power. <s> Antivirus scanners are designed to detect malware and, to a lesser extent, to label detections based on a family association. The labeling provided by AV vendors has many applications such as guiding efforts of disinfection and countermeasures, intelligence gathering, and attack attribution, among others. Furthermore, researchers rely on AV labels to establish a baseline of ground truth to compare their detection and classification algorithms. This is done despite many papers pointing out the subtle problem of relying on AV labels. However, the literature lacks any systematic study on validating the performance of antivirus scanners, and the reliability of those labels or detection. <s> BIB002
This metric has been proposed for measuring the collective effectiveness of intrusion detection systems and anti-malware programs BIB001 BIB002. Let A denote a set of attacks, D = {d_1, . . . , d_n} denote the set of n defense tools, and X_d denote the set of attacks that are detected by defense tool d ∈ D. The collective detection power of a subset of defense tools D' ⊆ D is defined as the fraction of attacks in A that are detected by at least one tool in D', namely |∪_{d∈D'} X_d| / |A| BIB001. For malware detection, experiments BIB002 show that the collective use of multiple anti-malware programs still cannot detect all malware infections; for example, one recent estimate shows that anti-malware tools are only able to detect 45% of attacks. The practical uses of these metrics include comparing the collective effectiveness of two combinations of detection tools and evaluating the effectiveness of defense-in-depth. The theoretical uses of these metrics include incorporating them as parameters into security models that aim to characterize the global collective effectiveness of employing a combination of defense tools. As in the case of relative effectiveness mentioned above, these metrics may need to be measured or estimated with respect to both known and unknown attacks, which we may call collective effectiveness against known and unknown attacks. This may also require us to estimate the base rate of unknown attacks.
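A minimal Python sketch of the collective detection power metric follows. The attack identifiers and tool names are hypothetical; the computation simply takes the union of the per-tool detection sets over the attack data set, which also makes it easy to compute the marginal benefit of adding a tool to an existing deployment.

```python
# Sketch of collective detection power: the fraction of attacks in A detected
# by at least one of the deployed defense tools (union of their detection sets).
# Attack identifiers and tool names are hypothetical.
A = {"a1", "a2", "a3", "a4", "a5"}              # attack data set
X = {                                           # per-tool detection sets
    "av1": {"a1", "a2"},
    "av2": {"a2", "a3"},
    "ids": {"a4"},
}

def collective_detection_power(tools, X, A):
    detected = set().union(*(X[d] for d in tools)) if tools else set()
    return len(detected & A) / len(A)

print(collective_detection_power(["av1"], X, A))                 # 0.4
print(collective_detection_power(["av1", "av2", "ids"], X, A))   # 0.8
# Marginal benefit of adding the IDS to the two anti-malware tools:
print(collective_detection_power(["av1", "av2", "ids"], X, A)
      - collective_detection_power(["av1", "av2"], X, A))        # 0.2
```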
A Survey on Security Metrics <s> Measuring the effectiveness of Address Space Layout Randomization (ASLR) <s> Address-space randomization is a technique used to fortify systems against buffer overflow attacks. The idea is to introduce artificial diversity by randomizing the memory location of certain system components. This mechanism is available for both Linux (via PaX ASLR) and OpenBSD. We study the effectiveness of address-space randomization and find that its utility on 32-bit architectures is limited by the number of bits available for address randomization. In particular, we demonstrate a derandomization attack that will convert any standard buffer-overflow exploit into an exploit that works against systems protected by address-space randomization. The resulting exploit is as effective as the original exploit, although it takes a little longer to compromise a target machine: on average 216 seconds to compromise Apache running on a Linux PaX ASLR system. The attack does not require running code on the stack. We also explore various ways of strengthening address-space randomization and point out weaknesses in each. Surprisingly, increasing the frequency of re-randomizations adds at most 1 bit of security. Furthermore, compile-time randomization appears to be more effective than runtime randomization. We conclude that, on 32-bit architectures, the only benefit of PaX-like address-space randomization is a small slowdown in worm propagation speed. The cost of randomization is extra complexity in system support. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the effectiveness of Address Space Layout Randomization (ASLR) <s> Fine-grained address space layout randomization (ASLR) has recently been proposed as a method of efficiently mitigating runtime attacks. In this paper, we introduce the design and implementation of a framework based on a novel attack strategy, dubbed just-in-time code reuse, that undermines the benefits of fine-grained ASLR. Specifically, we derail the assumptions embodied in fine-grained ASLR by exploiting the ability to repeatedly abuse a memory disclosure to map an application's memory layout on-the-fly, dynamically discover API functions and gadgets, and JIT-compile a target program using those gadgets -- all within a script environment at the time an exploit is launched. We demonstrate the power of our framework by using it in conjunction with a real-world exploit against Internet Explorer, and also provide extensive evaluations that demonstrate the practicality of just-in-time code reuse attacks. Our findings suggest that fine-grained ASLR may not be as promising as first thought. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring the effectiveness of Address Space Layout Randomization (ASLR) <s> The latest effective defense against code reuse attacks is fine-grained, per-process memory randomization. However, such process randomization prevents code sharing since there is no longer any identical code to share between processes. Without shared libraries, however, tremendous memory savings are forfeit. This drawback may hinder the adoption of fine-grained memory randomization. ::: ::: We present Oxymoron, a secure fine-grained memory randomization technique on a per-process level that does not interfere with code sharing. Executables and libraries built with Oxymoron feature 'memory-layout-agnostic code', which runs on a commodity Linux. 
Our theoretical and practical evaluations show that Oxymoron is the first solution to be secure against just-in-time code reuse attacks and demonstrate that fine-grained memory randomization is feasible without forfeiting the enormous memory savings of shared libraries. <s> BIB003 </s> A Survey on Security Metrics <s> Measuring the effectiveness of Address Space Layout Randomization (ASLR) <s> The idea of automatic software diversity is at least two decades old. The deficiencies of currently deployed defenses and the transition to online software distribution (the "App store" model) for traditional and mobile computers has revived the interest in automatic software diversity. Consequently, the literature on diversity grew by more than two dozen papers since 2008. Diversity offers several unique properties. Unlike other defenses, it introduces uncertainty in the target. Precise knowledge of the target software provides the underpinning for a wide range of attacks. This makes diversity a broad rather than narrowly focused defense mechanism. Second, diversity offers probabilistic protection similar to cryptography-attacks may succeed by chance so implementations must offer high entropy. Finally, the design space of diversifying program transformations is large. As a result, researchers have proposed multiple approaches to software diversity that vary with respect to threat models, security, performance, and practicality. In this paper, we systematically study the state-of-the-art in software diversity and highlight fundamental trade-offs between fully automated approaches. We also point to open areas and unresolved challenges. These include "hybrid solutions", error reporting, patching, and implementation disclosure attacks on diversified software. <s> BIB004 </s> A Survey on Security Metrics <s> Measuring the effectiveness of Address Space Layout Randomization (ASLR) <s> User space memory randomization techniques are an emerging field of cyber defensive technology which attempts to protect computing systems by randomizing the layout of memory. Quantitative metrics are needed to evaluate their effectiveness at securing systems against modern adversaries and to compare between randomization technologies. We introduce Effective Entropy, a measure of entropy in user space memory which quantitatively considers an adversary's ability to leverage low entropy regions of memory via absolute and dynamic intersection connections. Effective Entropy is indicative of adversary workload and enables comparison between different randomization techniques. Using Effective Entropy, we present a comparison of static Address Space Layout Randomization (ASLR), Position Independent Executable (PIE) ASLR, and a theoretical fine grain randomization technique. <s> BIB005 </s> A Survey on Security Metrics <s> Measuring the effectiveness of Address Space Layout Randomization (ASLR) <s> Vulnerabilities that disclose executable memory pages enable a new class of powerful code reuse attacks that build the attack payload at runtime. In this work, we present Heisenbyte, a system to protect against memory disclosure attacks. Central to Heisenbyte is the concept of destructive code reads -- code is garbled right after it is read. Garbling the code after reading it takes away from the attacker her ability to leverage memory disclosure bugs in both static code and dynamically generated just-in-time code. 
By leveraging existing virtualization support, Heisenbyte's novel use of destructive code reads sidesteps the problem of incomplete binary disassembly in binaries, and extends protection to close-sourced COTS binaries, which are two major limitations of prior solutions against memory disclosure vulnerabilities. Our experiments demonstrate that Heisenbyte can tolerate some degree of imperfect static analysis in disassembled binaries, while effectively thwarting dynamic code reuse exploits in both static and JIT code, at a modest 1.8% average runtime overhead due to virtualization and 16.5% average overhead due to the destructive code reads. <s> BIB006
Code injection used to be a popular attack, in which the attacker injects malicious code into a running program and directs the processor to execute it. The attack requires the presence of a memory region that is both executable and writable, which was possible because operating systems did not distinguish between code and data. The attack can be defeated by deploying Data Execution Prevention (DEP, also known as W⊕X), which ensures that a memory page can be writable or executable at any point in time, but not both. The deployment of DEP made attackers move away from code injection attacks to code reuse attacks, which craft attack payloads from pieces or "gadgets" of executable code that is already present in the system. In order to launch a code-reuse attack, the attacker needs to know where to look for gadgets. This was possible because the base addresses of code and data (including stack and heap) in virtual memory used to be fixed.

One approach to defending against code reuse attacks is to use ASLR to "blind" the attacker by randomizing the base addresses (i.e., shuffling the code layout in memory) such that the attacker cannot find useful gadgets. Coarse-grained ASLR has the weakness that the leak or exposure of a single address gives the attacker adequate information to derive all code addresses. Fine-grained ASLR (e.g., page-level randomization BIB003 BIB004) does not suffer from this problem, but is still susceptible to attacks that craft attack payloads from Just-In-Time (JIT) code BIB002. This attack can be defeated by destructive code reads, whereby the code in executable memory pages is garbled once it is read BIB006. ASLR can also be enhanced by preventing the leak of code pointers, while rendering leaks of other information (e.g., data pointers) useless for deriving code pointers.

There are two metrics for measuring the effectiveness of ASLR. One metric is the entropy of a memory section, because greater entropy means greater effort for an attacker to compromise the system; for example, a brute-force attack can feasibly defeat low-entropy ASLR on 32-bit platforms BIB001. The related effective entropy metric measures the entropy in a memory section that the attacker cannot circumvent by exploiting the interactions between memory sections BIB005. These two metrics only indirectly reflect the effectiveness of ASLR. For practical use, we would need to measure the direct security gain offered by the deployment of ASLR and/or the extra effort imposed on the attacker in order to circumvent ASLR. Once the effectiveness of ASLR can be measured on individual computers, the resulting metrics could be incorporated into theoretical cyber security models to characterize its global effectiveness.
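As a simple illustration of why ASLR entropy matters, the following Python sketch relates the number of (effective) entropy bits in a randomized memory section to the expected number of brute-force guesses an attacker would need. The bit counts are illustrative assumptions, since actual values depend on the operating system, architecture, and memory region.

```python
# Sketch relating ASLR entropy to expected attacker effort under a naive
# brute-force model: with n bits of (effective) entropy in the randomized base
# address, a guessing attacker needs about 2^(n-1) attempts on average
# (2^n in the worst case). The bit counts below are illustrative assumptions.
def expected_bruteforce_attempts(entropy_bits):
    return 2 ** (entropy_bits - 1)

for label, bits in [("low-entropy 32-bit ASLR (illustrative)", 16),
                    ("higher-entropy 64-bit ASLR (illustrative)", 28)]:
    print(f"{label}: ~{expected_bruteforce_attempts(bits):,} expected attempts")
```

This kind of back-of-the-envelope calculation only captures a guessing attacker; it says nothing about attacks that bypass the randomization entirely (e.g., via information leaks), which is precisely why effective entropy and more direct effectiveness metrics are needed.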
A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Alias analysis is a prerequisite for performing most of the common program analyses such as reaching-definitions analysis or live-variables analysis. Landi [1992] recently established that it is impossible to compute statically precise alias information—either may-alias or must-alias—in languages with if statements, loops, dynamic storage, and recursive data structures: more precisely, he showed that the may-alias relation is not recursive, while the must-alias relation is not even recursively enumerable. This article presents simpler proofs of the same results. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Memory corruption bugs in software written in low-level languages like C or C++ are one of the oldest problems in computer security. The lack of safety in these languages allows attackers to alter the program's behavior or take full control over it by hijacking its control flow. This problem has existed for more than 30 years and a vast number of potential solutions have been proposed, yet memory corruption attacks continue to pose a serious threat. Real world exploits show that all currently deployed protections can be defeated. This paper sheds light on the primary reasons for this by describing attacks that succeed on today's systems. We systematize the current knowledge about various protection techniques by setting up a general model for memory corruption attacks. Using this model we show what policies can stop which attacks. The model identifies weaknesses of currently deployed techniques, as well as other proposed protections enforcing stricter policies. We analyze the reasons why protection mechanisms implementing stricter polices are not deployed. To achieve wide adoption, protection mechanisms must support a multitude of features and must satisfy a host of requirements. Especially important is performance, as experience shows that only solutions whose overhead is in reasonable bounds get deployed. A comparison of different enforceable policies helps designers of new protection mechanisms in finding the balance between effectiveness (security) and efficiency. We identify some open research problems, and provide suggestions on improving the adoption of newer techniques. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> The idea of automatic software diversity is at least two decades old. The deficiencies of currently deployed defenses and the transition to online software distribution (the "App store" model) for traditional and mobile computers has revived the interest in automatic software diversity. Consequently, the literature on diversity grew by more than two dozen papers since 2008. Diversity offers several unique properties. Unlike other defenses, it introduces uncertainty in the target. Precise knowledge of the target software provides the underpinning for a wide range of attacks. This makes diversity a broad rather than narrowly focused defense mechanism. Second, diversity offers probabilistic protection similar to cryptography-attacks may succeed by chance so implementations must offer high entropy. Finally, the design space of diversifying program transformations is large. 
As a result, researchers have proposed multiple approaches to software diversity that vary with respect to threat models, security, performance, and practicality. In this paper, we systematically study the state-of-the-art in software diversity and highlight fundamental trade-offs between fully automated approaches. We also point to open areas and unresolved challenges. These include "hybrid solutions", error reporting, patching, and implementation disclosure attacks on diversified software. <s> BIB003 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> As existing defenses like ASLR, DEP, and stack cookies are not sufficient to stop determined attackers from exploiting our software, interest in Control Flow Integrity (CFI) is growing. In its ideal form, CFI prevents flows of control that were not intended by the original program, effectively putting a stop to exploitation based on return oriented programming (and many other attacks besides). Two main problems have prevented CFI from being deployed in practice. First, many CFI implementations require source code or debug information that is typically not available for commercial software. Second, in its ideal form, the technique is very expensive. It is for this reason that current research efforts focus on making CFI fast and practical. Specifically, much of the work on practical CFI is applicable to binaries, and improves performance by enforcing a looser notion of control flow integrity. In this paper, we examine the security implications of such looser notions of CFI: are they still able to prevent code reuse attacks, and if not, how hard is it to bypass its protection? Specifically, we show that with two new types of gadgets, return oriented programming is still possible. We assess the availability of our gadget sets, and demonstrate the practicality of these results with a practical exploit against Internet Explorer that bypasses modern CFI implementations. <s> BIB004 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Return-oriented programming (ROP) offers a robust attack technique that has, not surprisingly, been extensively used to exploit bugs in modern software programs (e.g., web browsers and PDF readers). ROP attacks require no code injection, and have already been shown to be powerful enough to bypass fine-grained memory randomization (ASLR) defenses. To counter this ingenious attack strategy, several proposals for enforcement of (coarse-grained) control-flow integrity (CFI) have emerged. The key argument put forth by these works is that coarse-grained CFI policies are sufficient to prevent ROP attacks. As this reasoning has gained traction, ideas put forth in these proposals have even been incorporated into coarse-grained CFI defenses in widely adopted tools (e.g., Microsoft's EMET framework). ::: ::: In this paper, we provide the first comprehensive security analysis of various CFI solutions (covering kBouncer, ROPecker, CFI for COTS binaries, ROP-Guard, and Microsoft EMET 4.1). A key contribution is in demonstrating that these techniques can be effectively undermined, even under weak adversarial assumptions. More specifically, we show that with bare minimum assumptions, turing-complete and real-world ROP attacks can still be launched even when the strictest of enforcement policies is in use. 
To do so, we introduce several new ROP attack primitives, and demonstrate the practicality of our approach by transforming existing real-world exploits into more stealthy attacks that bypass coarse-grained CFI defenses. <s> BIB005 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Return Oriented Programming (ROP) has become the exploitation technique of choice for modern memory-safety vulnerability attacks. Recently, there have been multiple attempts at defenses to prevent ROP attacks. In this paper, we introduce three new attack methods that break many existing ROP defenses. Then we show how to break kBouncer and ROPecker, two recent low-overhead defenses that can be applied to legacy software on existing hardware. We examine several recent ROP attacks seen in the wild and demonstrate that our techniques successfully cloak them so they are not detected by these defenses. Our attacks apply to many CFI-based defenses which we argue are weaker than previously thought. Future defenses will need to take our attacks into account. <s> BIB006 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Constraining dynamic control transfers is a common technique for mitigating software vulnerabilities. This defense has been widely and successfully used to protect return addresses and stack data; hence, current attacks instead typically corrupt vtable and function pointers to subvert a forward edge (an indirect jump or call) in the control-flow graph. Forward edges can be protected using Control-Flow Integrity (CFI) but, to date, CFI implementations have been research prototypes, based on impractical assumptions or ad hoc, heuristic techniques. To be widely adoptable, CFI mechanisms must be integrated into production compilers and be compatible with software-engineering aspects such as incremental compilation and dynamic libraries. ::: ::: This paper presents implementations of fine-grained, forward-edge CFI enforcement and analysis for GCC and LLVM that meet the above requirements. An analysis and evaluation of the security, performance, and resource consumption of these mechanisms applied to the SPEC CPU2006 benchmarks and common benchmarks for the Chromium web browser show the practicality of our approach: these fine-grained CFI mechanisms have significantly lower overhead than recent academic CFI prototypes. Implementing CFI in industrial compiler frameworks has also led to insights into design tradeoffs and practical challenges, such as dynamic loading. <s> BIB007 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to chose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. 
We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3--18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance of AES-NI. <s> BIB008 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Control-Flow Integrity (CFI) is a defense which prevents control-flow hijacking attacks. While recent research has shown that coarse-grained CFI does not stop attacks, fine-grained CFI is believed to be secure. ::: ::: We argue that assessing the effectiveness of practical CFI implementations is non-trivial and that common evaluation metrics fail to do so. We then evaluate fully-precise static CFI -- the most restrictive CFI policy that does not break functionality -- and reveal limitations in its security. Using a generalization of non-control-data attacks which we call Control-Flow Bending (CFB), we show how an attacker can leverage a memory corruption vulnerability to achieve Turing-complete computation on memory using just calls to the standard library. We use this attack technique to evaluate fully-precise static CFI on six real binaries and show that in five out of six cases, powerful attacks are still possible. Our results suggest that CFI may not be a reliable defense against memory corruption vulnerabilities. ::: ::: We further evaluate shadow stacks in combination with CFI and find that their presence for security is necessary: deploying shadow stacks removes arbitrary code execution capabilities of attackers in three of six cases. <s> BIB009 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges. We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead. <s> BIB010 </s> A Survey on Security Metrics <s> Measuring the effectiveness of enforcing Control-Flow Integrity (CFI) <s> Control flow integrity (CFI) has been proposed as an approach to defend against control-hijacking memory corruption attacks. CFI works by assigning tags to indirect branch targets statically and checking them at runtime.
Coarse-grained enforcements of CFI that use a small number of tags to reduce the performance overhead have been shown to be ineffective. As a result, a number of recent efforts have focused on fine-grained enforcement of CFI as it was originally proposed. In this work, we show that even a fine-grained form of CFI with an unlimited number of tags and a shadow stack (to check calls and returns) is ineffective in protecting against malicious attacks. We show that many popular code bases such as Apache and Nginx use coding practices that create flexibility in their intended control flow graph (CFG) even when a strong static analyzer is used to construct the CFG. These flexibilities allow an attacker to gain control of the execution while strictly adhering to a fine-grained CFI. We then construct two proof-of-concept exploits that attack an unlimited tag CFI system with a shadow stack. We also evaluate the difficulties of generating a precise CFG using scalable static analysis for real-world applications. Finally, we perform an analysis on a number of popular applications that highlights the availability of such attacks. <s> BIB011
Despite the deployment of defenses such as the aforementioned DEP and ASLR, control-flow hijacking remains a serious threat BIB002 BIB003 . Enforcing CFI has great potential for assuring security. The basic idea underlying CFI is to extract a program's Control-Flow Graph (CFG), typically from its source code via static analysis, and then instrument the corresponding binary code so that it abides by the CFG at runtime. This can be implemented by runtime checking of the tags assigned to the indirect branches in the CFG, such as indirect calls, indirect jumps, and returns. Since it is expensive to enforce CFI with respect to the entire CFG [Abadi et al.], practical solutions enforce weaker restrictions via a limited number of tags. It is known that coarse-grained enforcement of CFI, which uses a small number of tags, can be compromised by code-reuse attacks BIB004 BIB005 BIB006 . This inadequacy led to fine-grained CFI, such as the enforcement of forward-edge control-flow integrity (i.e., protecting indirect calls but not returns) BIB007 , and the use of message authentication codes to prevent unintended control transfers in the CFG BIB008 . Even CFI that enforces a fully accurate, fine-grained CFG can be compromised by control-flow bending attacks BIB009 , which can, however, be mitigated by per-input CFI BIB010 . How should we measure the power of CFI? First, the power of CFI is fundamentally limited by the accuracy of the CFG BIB011 . Because CFGs are computed via static analysis, their accuracy depends on sound and complete pointer analysis, which is undecidable in general BIB001 . The trade-off of using an unsound pointer analysis is that some legitimate control transfers may not be reported, which causes false positives. The trade-off of using an incomplete pointer analysis is that excessive control transfers may be reported (i.e., over-approximation), which can be exploited to run arbitrary code despite the enforced fine-grained CFI BIB011 . Second, we need to measure the resilience of a CFI scheme against control-flow bending attacks, which ideally reflects the effort that an attacker must expend (or the premises the attacker must satisfy) in order to evade the CFI scheme. Such a metric would allow us to compare the resilience of two CFI schemes. Third, it would be ideal if we could measure the power of CFI via the classes of attacks that it can defeat. The key issue here is to have a formalism by which we can precisely classify attacks. The challenge is that there could be infinitely many attacks, and it is not clear what the right formalism would be.
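To make the tag-based enforcement idea concrete, the following Python sketch (purely illustrative; the CFG, the tag assignment, and the checked transfer are hypothetical) contrasts fine-grained enforcement, which validates each indirect transfer against the precise CFG, with a coarse-grained policy that collapses all targets into a single "function entry" tag class and therefore over-approximates the allowed transfers:

```python
# Illustrative sketch of tag-based CFI checking over a hypothetical CFG.
# Fine-grained CFI validates each indirect transfer against the precise edge set;
# coarse-grained CFI only checks that the target carries an allowed tag class,
# which over-approximates the CFG and may admit transfers the program never intends.

# Precise CFG: allowed (source, target) pairs for indirect transfers.
PRECISE_CFG = {
    ("main", "parse"), ("main", "log"),
    ("parse", "handler_a"), ("parse", "handler_b"),
}

# Coarse-grained policy: every function entry carries the same tag.
COARSE_TAGS = {"parse": "entry", "log": "entry", "handler_a": "entry", "handler_b": "entry"}
ALLOWED_TAGS = {"main": {"entry"}, "parse": {"entry"}}   # tag classes each source may target

def fine_grained_ok(src, dst):
    return (src, dst) in PRECISE_CFG

def coarse_grained_ok(src, dst):
    return COARSE_TAGS.get(dst) in ALLOWED_TAGS.get(src, set())

# A transfer the precise CFG forbids (main jumping directly to handler_b).
transfer = ("main", "handler_b")
print("fine-grained allows:  ", fine_grained_ok(*transfer))    # False -> blocked
print("coarse-grained allows:", coarse_grained_ok(*transfer))  # True  -> admitted
```

The transfer forbidden by the precise CFG is admitted by the coarse-grained check; this over-approximation is exactly what the code-reuse and control-flow bending attacks discussed above exploit.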
A Survey on Security Metrics <s> Measuring the threat landscape <s> For many years, online criminals have been able to conduct their illicit activities by masquerading behind disreputable Internet Service Providers (ISPs). For example, organizations such as the Russian Business Network (RBN), Atrivo (a.k.a., Intercage), McColo, and most recently, the Triple Fiber Network (3FN) operated with impunity, providing a safe haven for Internet criminals for their own financial gain. What primarily sets these ISPs apart from others is the significant longevity of the malicious activities on their networks and the apparent lack of action taken in response to abuse reports. Interestingly, even though the Internet provides a certain degree of anonymity, such ISPs fear public attention. Once exposed, rogue networks often cease their malicious activities quickly, or are de-peered (disconnected) by their upstream providers. As a result, the Internet criminals are forced to relocate their operations. In this paper, we present FIRE, a novel system to identify and expose organizations and ISPs that demonstrate persistent, malicious behavior. The goal is to isolate the networks that are consistently implicated in malicious activity from those that are victims of compromise. To this end, FIRE actively monitors botnet communication channels, drive-by-download servers, and phishing web sites. This data is refined and correlated to quantify the degree of malicious activity for individual organizations. We present our results in real-time via the website maliciousnetworks.org. These results can be used to pinpoint and to track the activity of rogue organizations, preventing criminals from establishing strongholds on the Internet. Also, the information can be compiled into a null-routing blacklist to immediately halt traffic from malicious networks. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the threat landscape <s> Criminal activities in cyberspace are increasingly facilitated by black markets. This report characterizes these markets and describes how they have evolved, to provide insight into how their existence can harm the information security environment. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring the threat landscape <s> In this paper, we systematically explore the widely held, anecdotal belief that mismanaged networks are responsible for a wide range of security incidents. Utilizing Internet-scale measurements of DNS resolvers, BGP routers, and SMTP, HTTP, and DNS-name servers, we find there are thousands of networks where a large fraction of network services are misconfigured. Combining global feeds of malicious activities including spam, phishing, malware, and scanning, we find a statistically significant correlation between networks that are mismanaged and networks that are responsible for maliciousness. <s> BIB003 </s> A Survey on Security Metrics <s> Measuring the threat landscape <s> Rigorously characterizing the statistical properties of cyber attacks is an important problem. In this paper, we propose the first statistical framework for rigorously analyzing honeypot-captured cyber attack data. The framework is built on the novel concept of stochastic cyber attack process, a new kind of mathematical objects for describing cyber attacks. 
To demonstrate use of the framework, we apply it to analyze a low-interaction honeypot dataset, while noting that the framework can be equally applied to analyze high-interaction honeypot data that contains richer information about the attacks. The case study finds, for the first time, that long-range dependence (LRD) is exhibited by honeypot-captured cyber attacks. The case study confirms that by exploiting the statistical properties (LRD in this case), it is feasible to predict cyber attacks (at least in terms of attack rate) with good accuracy. This kind of prediction capability would provide sufficient early-warning time for defenders to adjust their defense configurations or resource allocations. The idea of “gray-box” (rather than “black-box”) prediction is central to the utility of the statistical framework, and represents a significant step towards ultimately understanding (the degree of) the predictability of cyber attacks. <s> BIB004 </s> A Survey on Security Metrics <s> Measuring the threat landscape <s> It is important to understand to what extent, and in what perspectives, cyber attacks can be predicted. Despite its evident importance, this problem was not investigated until very recently, when we proposed using the innovative methodology of gray-box prediction. This methodology advocates the use of gray-box models, which accommodate the statistical properties/phenomena exhibited by the data. Specifically, we showed that gray-box models that accommodate the long-range dependence phenomenon can predict the attack rate (i.e., the number of attacks per unit time) 1-h ahead-of-time with an accuracy of 70.2%–82.1%. To the best of our knowledge, this is the first result showing the feasibility of prediction in this domain. We observe that the prediction errors are partly caused by the models’ incapability in predicting the large attack rates, which are called extreme values in statistics. This motivates us to analyze the extreme-value phenomenon, using two complementary approaches: 1) the extreme value theory (EVT) and 2) the time series theory (TST). In this paper, we show that EVT can offer long-term predictions (e.g., 24-h ahead-of-time), while gray-box TST models can predict attack rates 1-h ahead-of-time with an accuracy of 86%–87.9%. We explore connections between the two approaches, and point out future research directions. Although our prediction study is based on specific cyber attack data, our methodology can be equally applied to analyze any cyber attack data of its kind. <s> BIB005
The threat landscape can be characterized via multiple attributes. One attribute is the attack vector. The number of exploit kits metric describes the number of automated attack tools that are available in the black market BIB002 . This metric can be extended to accommodate, for example, the vulnerabilities that the exploits are geared for. This is a good indicator of cyber threats because most attacks would be launched from these exploit kits [Nayak et al.; Allodi]. The network maliciousness metric BIB003 measures the fraction of blacklisted IP addresses in a network. The study BIB003 shows that there were 350 autonomous systems which had at least 50% of their IP addresses blacklisted. Moreover, there was a correlation between mismanaged networks and malicious networks, where "mismanaged networks" are those networks that do not follow accepted policies/guidelines. The related rogue network metric measures the population of networks that were abused to launch drive-by download or phishing attacks BIB001 . The ISP badness metric quantifies the effect of spam from one ISP or Autonomous System (AS) on the rest of the Internet. The control-plane reputation metric quantifies the maliciousness of attacker-owned (i.e., rather than legitimate but mismanaged/abused) ASs based on their control plane information (e.g., routing behavior), which can achieve an early-detection time of 50-60 days (before these malicious ASs are noticed by other defense means) [Konte et al.]. Malicious, rogue, and bad networks, once detected, can be filtered by enterprise systems via blacklisting. The cybersecurity posture metric measures the dynamic threat imposed by the attacking computers. It may include the attacks observed at honeypots, network telescopes, and/or production enterprise systems. One related metric, the sweep-time, measures the time it takes for each computer or IP address in a target enterprise system to be scanned or attacked at least once. Another related attack rate metric measures the number of attacks that arrive at a system of interest per unit time BIB004 BIB005 . These metrics reflect the aggressiveness of cyber attacks. Although the security metrics mentioned above can reflect some aspects of the threat landscape, we might need to define what may be called comprehensive cyber threat posture, which reflects the holistic threat landscape. This metric is useful because the threat landscape could be used as the "base rate" (in the language of intrusion detection systems) that can help fairly compare the overall defense effectiveness of two enterprise systems. It is interesting to investigate how these security metrics should be incorporated into security models as parameters for analyzing, for example, the evolution of the global security state S(t) over time t.
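As a simple illustration of how two of these metrics could be computed from raw feeds, the following Python sketch (using hypothetical IP, blacklist, and timestamp data) computes the network maliciousness of a network and a per-hour attack rate:

```python
# Illustrative sketch of two threat-landscape metrics, using hypothetical data:
# (1) network maliciousness: fraction of a network's IP addresses that are blacklisted;
# (2) attack rate: number of attacks observed per unit time (here, per hour).
from collections import Counter

def network_maliciousness(network_ips, blacklist):
    """Fraction of the network's IP addresses that appear on the blacklist."""
    ips = set(network_ips)
    if not ips:
        return 0.0
    return len(ips & set(blacklist)) / len(ips)

def attack_rate(attack_timestamps, window=3600):
    """Number of attacks per window (seconds), keyed by window index."""
    return Counter(int(t // window) for t in attack_timestamps)

# Hypothetical example data.
ips = ["10.0.0.%d" % i for i in range(1, 101)]
blacklist = {"10.0.0.5", "10.0.0.17", "10.0.0.42"}
timestamps = [12.0, 130.5, 3601.0, 3700.2, 3999.9, 7230.4]

print(network_maliciousness(ips, blacklist))  # 0.03
print(attack_rate(timestamps))                # Counter({1: 3, 0: 2, 2: 1})
```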
A Survey on Security Metrics <s> Measuring the attack power of botnets. <s> The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing for example the crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such a high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. ::: Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdős and Alfréd Rényi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. ::: The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. ::: The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the attack power of botnets. <s> We propose a taxonomy of botnet structures, based on their utility to the botmaster. We propose key metrics to measure their utility for various activities (e.g., spam, ddos). Using these performance metrics, we consider the ability of different response techniques to degrade or disrupt botnets. In particular, our models show that targeted responses are particularly effective against scale free botnets and efforts to increase the robustness of scale free networks come at a cost of diminished transitivity. Botmasters do not appear to have any structural solutions to this problem in scale free networks. We also show that random graph botnets (e.g., those using P2P formations) are highly resistant to both random and targeted responses. We evaluate the impact of responses on different topologies using simulation and demonstrate the utility of our proposed metrics by performing novel measurements of a P2P network.
Our analysis shows how botnets may be classified according to structure and given rank or priority using our proposed metrics. This may help direct responses and suggests which general remediation strategies are more likely to succeed. <s> BIB002
The threat of botnets can be characterized by several metrics. The first metric is botnet size. It is natural to count the number x of bots belonging to a botnet. It is also important to count the number of bots that can be instructed to launch attacks (e.g., distributed denial-of-service attacks) at a point in time t, denoted by y(t). Due to factors such as the diurnal effect, which explains why some bot computers are powered off during night hours in their local time zones, y(t) is often much smaller than x. A related metric is the network bandwidth that a botnet can use to launch denial-of-service attacks BIB002 . The second metric is botnet efficiency, which can be defined as the network diameter of the botnet topology BIB002 . This metric measures a botnet's capability in communicating command-and-control messages and updating bot programs. The third metric is botnet robustness, which measures the robustness of botnets under random or intelligent (targeted) disruptions BIB002 . There is a body of literature BIB001 on measuring complex network robustness that can be adapted for characterizing botnets. Although the above metrics measure botnets from some intuitive aspects, it remains elusive to define the intuitive metric of botnet attack power, which is important because it can help prioritize countermeasures against botnets. Moreover, an intuitive botnet resilience metric would need to take into consideration the counter-countermeasures that may be employed by the attacker while the defender is launching countermeasures against the botnet.
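The efficiency and robustness metrics can be computed directly from a botnet's topology. The following sketch uses the networkx library and a synthetic scale-free graph as a stand-in for a measured botnet topology (the topology and the removal fraction are assumptions made only for illustration):

```python
# Illustrative sketch: botnet efficiency (network diameter) and robustness under
# node removal, computed on a synthetic scale-free graph standing in for a real botnet.
import random
import networkx as nx

def largest_component_fraction(G):
    """Fraction of remaining bots that sit in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(G), key=len)) / G.number_of_nodes()

def robustness_after_removal(G, fraction=0.05, targeted=True):
    """Remove a fraction of bots (highest-degree first if targeted, else random)
    and report how much of the botnet stays connected."""
    H = G.copy()
    k = int(fraction * H.number_of_nodes())
    if targeted:
        victims = [n for n, _ in sorted(H.degree, key=lambda x: x[1], reverse=True)[:k]]
    else:
        victims = random.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    return largest_component_fraction(H)

botnet = nx.barabasi_albert_graph(1000, 2, seed=1)   # hypothetical P2P-style topology
print("efficiency (diameter):", nx.diameter(botnet))
print("robustness, random removal:  ", robustness_after_removal(botnet, targeted=False))
print("robustness, targeted removal:", robustness_after_removal(botnet, targeted=True))
```

In line with the robustness results discussed above, targeted removal of high-degree bots typically fragments a scale-free topology far more than random removal, whereas random-graph (P2P-style) botnets tend to resist both strategies.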
A Survey on Security Metrics <s> Measuring the power of attacks that exploit multiple vulnerabilities. <s> This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the power of attacks that exploit multiple vulnerabilities. <s> This paper presents the results of an experiment in security evaluation. The system is modeled as a privilege graph that exhibits its security vulnerabilities. Quantitative measures that estimate the effort an attacker might expend to exploit these vulnerabilities to defeat the system security objectives are proposed. A set of tools has been developed to compute such measures and has been used in an experiment to monitor a large real system for nearly two years. The experimental results are presented and the validity of the measures is discussed. Finally, the practical usefulness of such tools for operational security monitoring is shown and a comparison with other existing approaches is given. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring the power of attacks that exploit multiple vulnerabilities. <s> Even well administered networks are vulnerable to attacks due to the security ramifications of offering a variety of combined services. That is, services that are secure when offered in isolation nonetheless provide an attacker with a vulnerability to exploit when offered simultaneously. Many current tools address vulnerabilities in the context of a single host. We address vulnerabilities due to the configuration of various hosts in a network. In a different line of research, formal methods are often useful for generating test cases, and model checkers are particularly adept at this task due to their ability to generate counterexamples. We address the network vulnerabilities problem with test cases, which amount to attack scenarios, generated by a model checker. We encode the vulnerabilities in a state machine description suitable for a model checker and then assert that an attacker cannot acquire a given privilege on a given host. The model checker either offers assurance that the assertion is true on the actual network or provides a counterexample detailing each step of a successful attack. 
<s> BIB003 </s> A Survey on Security Metrics <s> Measuring the power of attacks that exploit multiple vulnerabilities. <s> Attack graph analysis has been established as a powerful tool for analyzing network vulnerability. However, previous approaches to network hardening look for exact solutions and thus do not scale. Further, hardening elements have been treated independently, which is inappropriate for real environments. For example, the cost for patching many systems may be nearly the same as for patching a single one. Or patching a vulnerability may have the same effect as blocking traffic with a firewall, while blocking a port may deny legitimate service. By failing to account for such hardening interdependencies, the resulting recommendations can be unrealistic and far from optimal. Instead, we formalize the notion of hardening strategy in terms of allowable actions, and define a cost model that takes into account the impact of interdependent hardening actions. We also introduce a near-optimal approximation algorithm that scales linearly with the size of the graphs, which we validate experimentally. <s> BIB004 </s> A Survey on Security Metrics <s> Measuring the power of attacks that exploit multiple vulnerabilities. <s> Quantifying security risk is an important and yet difficult task in enterprise network security management. While metrics exist for individual software vulnerabilities, there is currently no standard way of aggregating such metrics. We present a model that can be used to aggregate vulnerability metrics in an enterprise network, producing quantitative metrics that measure the likelihood breaches can occur within a given network configuration. A clear semantic model for this aggregation is an important first step toward a comprehensive network security metric model. We utilize existing work in attack graphs and apply probabilistic reasoning to produce an aggregation that has clear semantics and sound computation. We ensure that shared dependencies between attack paths have a proportional effect on the final calculation. We correctly reason over cycles, ensuring that privileges are evaluated without any self-referencing effect. We introduce additional modeling artifacts in our probabilistic graphical model to capture and account for hidden correlations among exploit steps. The paper shows that a clear semantic model for aggregation is critical in interpreting the results, calibrating the metric model, and explaining insights gained from empirical evaluation. Our approach has been rigorously evaluated using a number of network models, as well as data from production systems. <s> BIB005 </s> A Survey on Security Metrics <s> Measuring the power of attacks that exploit multiple vulnerabilities. <s> Discussion of challenges and ways of improving Cyber Situational Awareness dominated our previous chapters. However, we have not yet touched on how to quantify any improvement we might achieve. Indeed, to get an accurate assessment of network security and provide sufficient Cyber Situational Awareness (CSA), simple but meaningful metrics—the focus of the Metrics of Security chapter—are necessary. The adage, “what can’t be measured can’t be effectively managed,” applies here. Without good metrics and the corresponding evaluation methods, security analysts and network operators cannot accurately evaluate and measure the security status of their networks and the success of their operations. 
In particular, this chapter explores two distinct issues: (i) how to define and use metrics as quantitative characteristics to represent the security state of a network, and (ii) how to define and use metrics to measure CSA from a defender’s point of view. <s> BIB006
Vulnerabilities can be exploited in a chaining fashion. There is a large body of literature in this field, including attack graphs (see, e.g., BIB001 BIB003 BIB004 BIB005 BIB006 ), attack trees, and privilege trees BIB002 . At a high level, these models accommodate system vulnerabilities, vulnerability dependencies (i.e., prerequisites), firewall rules, etc. In these models, the attacker is initially in some security state and attempts to move from that initial state to some goal state, which often corresponds to the compromise of computers. These studies have led to a rich set of metrics, such as the following. The necessary defense metric measures the minimal set of defense countermeasures that must be employed in order to thwart a certain attack. The greater the necessary defense, the more powerful the attack. The weakest adversary metric measures the minimum adversary capabilities that are needed in order to achieve an attack goal [Pamula et al.]. This metric can be used to compare the power of two attacks with respect to some attack goal(s); for example, one attack has the required weakest-adversary capabilities, but the other does not. The existence, number, and lengths of attack paths metrics measure these attributes of the attack paths from an initial state to the goal state BIB003 BIB006 . These metrics can be used to compare two attacks; for example, an attack that has a set X of attack paths is more powerful than another attack that has a set Y of attack paths, where Y ⊂ X. The k-zero-day-safety metric measures the number of zero-day vulnerabilities that are needed in order for an attacker to compromise a target [Wang et al.]. This metric can be used to compare the power of two attacks as follows: an attack that requires k_1 zero-day vulnerabilities in order to compromise a target is more powerful than an attack that requires k_2 zero-day vulnerabilities, where k_1 < k_2. The effort-to-security-failure metric measures what the attacker needs to do in order to move from an initial set of privileges to the goal set of escalated privileges BIB002 . An attack that incurs a smaller effort-to-security-failure is more powerful than an attack that requires a greater effort, assuming the efforts are comparable. Although the metrics mentioned above are useful, it would be ideal if we could measure what we call multi-stage attack power, which may be able to incorporate all of the metrics mentioned above into a single one. This metric could also be incorporated into security models to analyze (e.g.) the evolution of the security state S(t) over time t. One barrier is how to systematically treat unknown vulnerabilities.
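For illustration, the following sketch computes the existence, number, and lengths of attack paths over a small, hypothetical attack graph using the networkx library (the nodes, edges, and exploit semantics are invented for the example):

```python
# Illustrative sketch: attack-path metrics over a small, hypothetical attack graph.
# Nodes are attacker states (privileges gained); edges are exploit steps.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("external", "web_server"),    # e.g., exploit a web-facing vulnerability
    ("web_server", "db_server"),   # e.g., injection or stolen credentials
    ("web_server", "file_server"),
    ("file_server", "db_server"),
    ("external", "vpn"),           # e.g., phished VPN credentials
    ("vpn", "db_server"),
])
initial, goal = "external", "db_server"

paths = list(nx.all_simple_paths(G, initial, goal))
print("attack path exists:", nx.has_path(G, initial, goal))        # existence
print("number of attack paths:", len(paths))                        # number
print("path lengths (exploit steps):", sorted(len(p) - 1 for p in paths))  # lengths
print("shortest attack path length:", nx.shortest_path_length(G, initial, goal))
```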
A Survey on Security Metrics <s> Measuring the power of evasion against learning-based detection. <s> Many classification tasks, such as spam filtering, intrusion detection, and terrorism detection, are complicated by an adversary who wishes to avoid detection. Previous work on adversarial classification has made the unrealistic assumption that the attacker has perfect knowledge of the classifier [2]. In this paper, we introduce the adversarial classifier reverse engineering (ACRE) learning problem, the task of learning sufficient information about a classifier to construct adversarial attacks. We present efficient algorithms for reverse engineering linear classifiers with either continuous or Boolean features and demonstrate their effectiveness using real data from the domain of spam filtering. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the power of evasion against learning-based detection. <s> In this paper (expanded from an invited talk at AISEC 2010), we discuss an emerging field of study: adversarial machine learning---the study of effective machine learning techniques against an adversarial opponent. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques. <s> BIB002
Sophisticated attacks can evade a defense system by, for example, manipulating some of the features that are used by the detection models (e.g., classifiers). This problem is generally known as adversarial machine learning [Dalvi et al. 2004; BIB001 BIB002 Srndic and Laskov 2014]. There is a spectrum of evasion scenarios, which vary in terms of the information the attacker knows about the detection models: (i) knowing only the feature set used by the defender, (ii) knowing both the feature set and the training samples used by the defender, and (iii) knowing the feature set, the training samples, and the attack detection model (e.g., the classifiers) used by the defender [Šrndic and Laskov 2014]. The effectiveness of evasion is typically evaluated via metrics such as the false-positive and false-negative rates that result from applying a certain evasion method. It would be ideal if we could measure the evasion capability of attacks. Such a metric would not only allow us to compare the evasion power of two attacks, but could also be used to estimate the damage that can be caused by evasion attacks. Despite the many efforts (cf. [Šrndic and Laskov 2014] for extensive references), this aspect of security is far from understood.
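As a minimal illustration of feature-space evasion, the sketch below assumes a hypothetical linear detector f(x) = w·x + b that flags a sample when f(x) > 0; the attacker perturbs one modifiable feature just enough to cross the decision boundary, and the size of the perturbation serves as a crude proxy for evasion effort. Real evasion attacks (e.g., against PDF malware detectors) operate on richer feature spaces and under semantic constraints:

```python
# Illustrative sketch: evading a hypothetical linear detector f(x) = w.x + b,
# where a sample is flagged as malicious when f(x) > 0. The attacker perturbs a
# single modifiable feature just enough to cross the decision boundary; the size
# of the perturbation is a crude proxy for the attacker's evasion effort.
def score(w, x, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w, x, b, modifiable_index, margin=1e-6):
    """Return a perturbed copy of x whose score falls just below zero, together
    with the size of the perturbation (None if the feature has no influence)."""
    wi = w[modifiable_index]
    if wi == 0:
        return list(x), None
    rest = score(w, x, b) - wi * x[modifiable_index]  # other features + bias
    x_adv = list(x)
    x_adv[modifiable_index] = (-margin - rest) / wi   # solve w.x_adv + b = -margin
    return x_adv, abs(x_adv[modifiable_index] - x[modifiable_index])

w, b = [0.8, 1.5, -0.3], -1.0   # hypothetical detector weights and bias
x = [1.0, 1.2, 0.5]             # hypothetical malicious sample (score > 0, detected)
print("original score:", score(w, x, b))
x_adv, effort = evade(w, x, b, modifiable_index=1)
print("evaded score:", score(w, x_adv, b), "| perturbation size:", effort)
```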
A Survey on Security Metrics <s> Measuring obfuscation sophistication <s> The first steps in analyzing defensive malware are understanding what obfuscations are present in real-world malware binaries, how these obfuscations hinder analysis, and how they can be overcome. While some obfuscations have been reported independently, this survey consolidates the discussion while adding substantial depth and breadth to it. This survey also quantifies the relative prevalence of these obfuscations by using the Dyninst binary analysis and instrumentation tool that was recently extended for defensive malware analysis. The goal of this survey is to encourage analysts to focus on resolving the obfuscations that are most prevalent in real-world malware. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring obfuscation sophistication <s> Run-time packers are often used by malware-writers to obfuscate their code and hinder static analysis. The packer problem has been widely studied, and several solutions have been proposed in order to generically unpack protected binaries. Nevertheless, these solutions commonly rely on a number of assumptions that may not necessarily reflect the reality of the packers used in the wild. Moreover, previous solutions fail to provide useful information about the structure of the packer or its complexity. In this paper, we describe a framework for packer analysis and we propose a taxonomy to measure the runtime complexity of packers. We evaluated our dynamic analysis system on two datasets, composed of both off-the-shelf packers and custom packed binaries. Based on the results of our experiments, we present several statistics about the packers complexity and their evolution over time. <s> BIB002
Obfuscation based on tools such as run-time packers has been widely used by malware writers to defeat static analysis. Despite the numerous studies that have been surveyed elsewhere BIB001 BIB002 , we understand very little about how to quantify the obfuscation capability of malware. Nevertheless, there have been some notable initial efforts. The obfuscation prevalence metric measures the occurrence of obfuscation in malware samples BIB001 . The structural complexity metric measures the runtime complexity of packers in terms of their layers, granularity, etc. BIB002 . It would be ideal if we could measure the obfuscation sophistication of a malware sample, perhaps in terms of the amount of effort that is necessary for unpacking it. One practical use would be to automatically differentiate the malware samples that must be manually unpacked from those that can be automatically unpacked. One possible theoretical use is to incorporate it as a parameter in a model for analyzing the evolution of the security state S(t) over time t.
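As a simple (and admittedly crude) illustration of a measurable obfuscation indicator, the sketch below computes the Shannon byte-entropy of a sample; high entropy is a commonly used heuristic hint of packed or encrypted content, though it is not one of the metrics proposed in the surveyed works and says nothing about packing layers or granularity:

```python
# Illustrative sketch: Shannon byte-entropy (bits per byte, in [0, 8]) as a crude,
# commonly used heuristic indicator of packed or encrypted content.
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical comparison: repetitive plaintext vs. pseudo-random bytes.
print(byte_entropy(b"hello world, hello world! " * 100))  # low entropy
print(byte_entropy(os.urandom(2600)))                      # close to 8.0
```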
A Survey on Security Metrics <s> Measuring the evolution of security state <s> System architects need quantitative security metrics to make informed trade-off decisions involving system security. The security metrics need to provide insight on weak points in the system defense, considering characteristics of both the system and its adversaries. To provide such metrics, we formally define the ADversary View Security Evaluation (ADVISE) method. Our approach is to create an executable state-based security model of a system and an adversary that represents how the adversary is likely to attack the system and the results of such an attack. The attack decision function uses information about adversary attack preferences and possible attacks against the system to mimic how the adversary selects the most attractive next attack step. The adversary's decision involves looking ahead some number of attack steps. System architects can use ADVISE to compare the security strength of system architecture variants and analyze the threats posed by different adversaries. We demonstrate the feasibility and benefits of ADVISE using a case study. To produce quantitative model-based security metrics, we have implemented the ADVISE method in a tool that facilitates user input of system and adversary data and automatically generates executable models. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring the evolution of security state <s> Understanding the spreading dynamics of computer viruses (worms, attacks) is an important research problem, and has received much attention from the communities of both computer security and statistical physics. However, previous studies have mainly focused on single-virus spreading dynamics. In this paper, we study multivirus spreading dynamics, where multiple viruses attempt to infect computers while possibly combating against each other because, for example, they are controlled by multiple botmasters. Specifically, we propose and analyze a general model (and its two special cases) of multivirus spreading dynamics in arbitrary networks (i.e., we do not make any restriction on network topologies), where the viruses may or may not coreside on computers. Our model offers analytical results for addressing questions such as: What are the sufficient conditions (also known as epidemic thresholds) under which the multiple viruses will die out? What if some viruses can "rob" others? What characteristics does the multivirus epidemic dynamics exhibit when the viruses are (approximately) equally powerful? The analytical results make a fundamental connection between two types of factors: defense capability and network connectivity. This allows us to draw various insights that can be used to guide security defense. <s> BIB002 </s> A Survey on Security Metrics <s> Measuring the evolution of security state <s> We argue that emergent behavior is inherent to cybersecurity. <s> BIB003 </s> A Survey on Security Metrics <s> Measuring the evolution of security state <s> Moving Target Defense (MTD) can enhance the resilience of cyber systems against attacks. Although there have been many MTD techniques, there is no systematic understanding and quantitative characterization of the power of MTD. In this paper, we propose to use a cyber epidemic dynamics approach to characterize the power of MTD. We define and investigate two complementary measures that are applicable when the defender aims to deploy MTD to achieve a certain security goal.
One measure emphasizes the maximum portion of time during which the system can afford to stay in an undesired configuration (or posture), without considering the cost of deploying MTD. The other measure emphasizes the minimum cost of deploying MTD, while accommodating that the system has to stay in an undesired configuration (or posture) for a given portion of time. Our analytic studies lead to algorithms for optimally deploying MTD. <s> BIB004 </s> A Survey on Security Metrics <s> Measuring the evolution of security state <s> The Internet is a man-made complex system under constant attacks (e.g., Advanced Persistent Threats and malwares). It is therefore important to understand the phenomena that can be induced by the interaction between cyber attacks and cyber defenses. In this paper, we explore the rich phenomena that can be exhibited when the defender employs active defense to combat cyber attacks. To the best of our knowledge, this is the first study that shows that active cyber defense dynamics (or more generally, cybersecurity dynamics) can exhibit the bifurcation and chaos phenomena. This has profound implications for cyber security measurement and prediction: (i) it is infeasible (or even impossible) to accurately measure and predict cyber security under certain circumstances; (ii) the defender must manipulate the dynamics to avoid such unmanageable situations in real-life defense operations. <s> BIB005 </s> A Survey on Security Metrics <s> Measuring the evolution of security state <s> The concept of active cyber defense has been proposed for years. However, there are no mathematical models for characterizing the effectiveness of active cyber defense. In this paper, we fill the void by proposing a novel Markov process model that is native to the interaction between cyber attack and active cyber defense. Unfortunately, the native Markov process model cannot be tackled by the techniques we are aware of. We therefore simplify, via mean-field approximation, the Markov process model as a Dynamic System model that is amenable to analysis. This allows us to derive a set of valuable analytical results that characterize the effectiveness of four types of active cyber defense dynamics. Simulations show that the analytical results are inherent to the native Markov process model, and therefore justify the validity of the Dynamic System model. We also discuss the side-effect of the mean-field approximation and its implications. <s> BIB006 </s> A Survey on Security Metrics <s> Measuring the evolution of security state <s> Studying models of cyber epidemics over arbitrary complex networks can deepen our understanding of cyber security from a whole-system perspective. In this work, we initiate the investigation of cyber epidemic models that accommodate the dependences between the cyber attack events. Due to the notorious difficulty in dealing with such dependences, essentially all existing cyber epidemic models have disregarded them. Specifically, we introduce the idea of copulas into cyber epidemic models for accommodating the dependences between the cyber attack events. We investigate the epidemic equilibrium thresholds as well as the bounds for both equilibrium and nonequilibrium infection probabilities. We further characterize the side effects of disregarding the due dependences between the cyber attack events by showing that the results thereof are unnecessarily restrictive or even incorrect. <s> BIB007
As illustrated in Figures 1-2, the security state vector of an enterprise system, S(t) = (s_1(t), ..., s_n(t)), and the security state s_i(t) of computer c_i both dynamically evolve as an outcome of attack-defense interactions. These metrics aim to measure the dynamic security states. However, the measurement process often incurs errors, such as false positives and false negatives. As a consequence, the observed state O(t) = (o_1(t), ..., o_n(t)) is often different from the true state S(t). The fraction of compromised computers is |{i : i ∈ {1, ..., n} ∧ s_i(t) = 1}|/n. It has been shown that under certain circumstances, there can be a fundamental connection between the global security state and a very small number of nodes that can be monitored carefully. An alternative metric is the probability that a computer is compromised at time t, namely Pr[s_i(t) = 1], as illustrated in Figure 1. This metric has been proposed in some recent studies that aim to quantify the security of enterprise systems (e.g., BIB001 BIB005 BIB006 BIB007 BIB003 BIB004 BIB002 ). These studies represent early-stage investigations towards modeling security from a holistic perspective. Knowing the dynamic security state can help the defender make the right decisions. For example, knowing the probabilities that computers are compromised at time t, namely Pr[s_i(t) = 1] for every i, allows the defender to use an appropriate threshold cryptographic mechanism [Desmedt and Frankel] to tolerate the compromises. However, faithful security models may need to accommodate many, if not all, of the aforementioned metrics as parameters. Moreover, it is important to know S(t) and s_i(t) for any t, rather than only for t → ∞. These requirements pose many open problems that remain to be investigated.
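To illustrate how the fraction of compromised computers evolves under attack-defense interactions, the following sketch runs a discrete-time susceptible-infected-susceptible (SIS)-style simulation over a hypothetical network; the topology, compromise probability beta, and cure probability gamma are illustrative assumptions, not parameters taken from the cited models:

```python
# Illustrative sketch: a discrete-time SIS-style simulation over a hypothetical
# network, tracking the fraction of compromised computers |{i : s_i(t) = 1}|/n.
import random

def simulate(adjacency, beta=0.2, gamma=0.3, steps=50, seed=0):
    """adjacency: dict mapping each node to a list of neighbors;
    beta: per-edge compromise probability per step;
    gamma: per-node cure (cleanup) probability per step."""
    rng = random.Random(seed)
    nodes = list(adjacency)
    state = {v: 0 for v in nodes}     # 0 = secure, 1 = compromised
    state[nodes[0]] = 1               # one initially compromised computer
    history = []
    for _ in range(steps):
        new_state = dict(state)
        for v in nodes:
            if state[v] == 1:
                if rng.random() < gamma:          # defender cleans up v
                    new_state[v] = 0
                for u in adjacency[v]:            # v attacks its currently secure neighbors
                    if state[u] == 0 and rng.random() < beta:
                        new_state[u] = 1
        state = new_state
        history.append(sum(state.values()) / len(nodes))
    return history

# Hypothetical topology: a ring of 20 computers.
ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
print(simulate(ring)[-5:])   # fraction compromised over the last few steps
```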
A Survey on Security Metrics <s> Measuring temporal characteristics of security incidents. <s> The paper is based on a conceptual framework in which security can be split into two generic types of characteristics, behavioral and preventive. Here, preventive security denotes the system's ability to protect itself from external attacks. One way to describe the preventive security of a system is in terms of its interaction with the alleged attacker, i.e., by describing the intrusion process. To our knowledge, very little is done to model this process in quantitative terms. Therefore, based on empirical data collected from intrusion experiments, we have worked out a hypothesis on typical attacker behavior. The hypothesis suggests that the attacking process can be split into three phases: the learning phase, the standard attack phase, and the innovative attack phase. The probability for successful attacks during the learning and innovative phases is expected to be small, although for different reasons. During the standard attack phase it is expected to be considerably higher. The collected data indicates that the breaches during the standard attack phase are statistically equivalent and that the times between breaches are exponentially distributed. This would actually imply that traditional methods for reliability modeling could be applicable. <s> BIB001 </s> A Survey on Security Metrics <s> Measuring temporal characteristics of security incidents. <s> Quite often failures in network based services and server systems may not be accidental, but rather caused by deliberate security intrusions. We would like such systems to either completely preclude the possibility of a security intrusion or design them to be robust enough to continue functioning despite security attacks. Not only is it important to prevent or tolerate security intrusions, it is equally important to treat security as a QoS attribute at par with, if not more important than other QoS attributes such as availability and performability. This paper deals with various issues related to quantifying the security attribute of an intrusion tolerant system, such as the SITAR system. A security intrusion and the response of an intrusion tolerant system to the attack is modeled as a random process. This facilitates the use of stochastic modeling techniques to capture the attacker behavior as well as the system's response to a security intrusion. This model is used to analyze and quantify the security attributes of the system. The security quantification analysis is first carried out for steady-state behavior leading to measures like steady-state availability. By transforming this model to a model with absorbing states, we compute a security measure called the "mean time (or effort) to security failure" and also compute probabilities of security failure due to violations of different security attributes. <s> BIB002
The temporal characteristics of incidents can be described by the delay in incident detection [for Internet Security 2010], which measures the time between when an incident occurred and when the incident is discovered. Another metric is the time between incidents [for Internet Security 2010], which measures the period of time between two incidents. Yet another metric is the time-to-first-compromise BIB001 BIB002 , which measures the duration of time between when a computer starts to run and when the first malware alarm is triggered on the computer (the alarm indicating detection rather than infection). A study based on a dataset of 5,602,097 malware alarms, which correspond to 203,025 malware attacks against 261,757 computers between 10/15/2009 and 8/10/2012, shows that the time-to-first-compromise follows a Pareto distribution. These metrics may be used as alternatives to the global security state S(t), especially when S(t) is difficult to predict for arbitrary t (rather than for t → ∞). It would be ideal if we could predict the incident occurrence frequency as an approximation of the number of compromised computers at a future time t, namely |{i : i ∈ {1, ..., n} ∧ s_i(t) = 1}|. One should be cautious when using these metrics to compare the security of a system during two different periods of time, because the threats would be different.
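For illustration, the sketch below derives time-to-first-compromise samples from hypothetical start and first-alarm timestamps and fits the shape parameter of a Pareto distribution using the standard maximum-likelihood estimator:

```python
# Illustrative sketch: time-to-first-compromise samples and a Pareto shape fit
# via the maximum-likelihood estimator alpha_hat = n / sum(ln(x_i / x_min)).
import math

def time_to_first_compromise(start_times, first_alarm_times):
    """Both arguments are dicts keyed by computer id; values are timestamps (hours)."""
    return [first_alarm_times[c] - start_times[c]
            for c in first_alarm_times
            if first_alarm_times[c] > start_times[c]]

def pareto_shape_mle(samples):
    x_min = min(samples)
    return len(samples) / sum(math.log(x / x_min) for x in samples)

# Hypothetical data: hours until the first malware alarm on six computers.
starts = {c: 0.0 for c in "abcdef"}
alarms = {"a": 2.0, "b": 5.0, "c": 2.5, "d": 30.0, "e": 3.0, "f": 120.0}
samples = time_to_first_compromise(starts, alarms)
print("time-to-first-compromise samples (hours):", samples)
print("estimated Pareto shape parameter:", round(pareto_shape_mle(samples), 2))
```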
A Survey on Security Metrics <s> What should we measure? <s> This paper presents the results of an experiment in security evaluation. The system is modeled as a privilege graph that exhibits its security vulnerabilities. Quantitative measures that estimate the effort an attacker might expend to exploit these vulnerabilities to defeat the system security objectives are proposed. A set of tools has been developed to compute such measures and has been used in an experiment to monitor a large real system for nearly two years. The experimental results are presented and the validity of the measures is discussed. Finally, the practical usefulness of such tools for operational security monitoring is shown and a comparison with other existing approaches is given. <s> BIB001 </s> A Survey on Security Metrics <s> What should we measure? <s> Even well administered networks are vulnerable to attacks due to the security ramifications of offering a variety of combined services. That is, services that are secure when offered in isolation nonetheless provide an attacker with a vulnerability to exploit when offered simultaneously. Many current tools address vulnerabilities in the context of a single host. We address vulnerabilities due to the configuration of various hosts in a network. In a different line of research, formal methods are often useful for generating test cases, and model checkers are particularly adept at this task due to their ability to generate counterexamples. We address the network vulnerabilities problem with test cases, which amount to attack scenarios, generated by a model checker. We encode the vulnerabilities in a state machine description suitable for a model checker and then assert that an attacker cannot acquire a given privilege on a given host. The model checker either offers assurance that the assertion is true on the actual network or provides a counterexample detailing each step of a successful attack. <s> BIB002 </s> A Survey on Security Metrics <s> What should we measure? <s> We propose a taxonomy of botnet structures, based on their utility to the botmaster. We propose key metrics to measure their utility for various activities (e.g., spam, ddos). Using these performance metrics, we consider the ability of different response techniques to degrade or disrupt botnets. In particular, our models show that targeted responses are particularly effective against scale free botnets and efforts to increase the robustness of scale free networks comes at a cost of diminished transitivity. Botmasters do not appear to have any structural solutions to this problem in scale free networks. We also show that random graph botnets (e.g., those using P2P formations) are highly resistant to both random and targeted responses. We evaluate the impact of responses on different topologies using simulation and demonstrate the utility of our proposed metrics by performing novel measurements of a P2P network. Our analysis shows how botnets may be classified according to structure and given rank or priority using our proposed metrics. This may help direct responses and suggests which general remediation strategies are more likely to succeed. <s> BIB003 </s> A Survey on Security Metrics <s> What should we measure? <s> The authors describe a behavioral theory of the dynamics of insider-threat risks. 
Drawing on data related to information technology security violations and on a case study created to explain the dynamics observed in that data, the authors constructed a system dynamics model of a theory of the development of insider-threat risks and conducted numerical simulations to explore the parameter and response spaces of the model. By examining several scenarios in which attention to events, increased judging capabilities, better information, and training activities are simulated, the authors theorize about why information technology security effectiveness changes over time. The simulation results argue against the common presumption that increased security comes at the cost of reduced production. <s> BIB004 </s> A Survey on Security Metrics <s> What should we measure? <s> We report on the aftermath of the discovery of a severe vulnerability in the Debian Linux version of OpenSSL. Systems affected by the bug generated predictable random numbers, most importantly public/private keypairs. To study user response to this vulnerability, we collected a novel dataset of daily remote scans of over 50,000 SSL/TLS-enabled Web servers, of which 751 displayed vulnerable certificates. We report three primary results. First, as expected from previous work, we find an extremely slow rate of fixing, with 30% of the hosts vulnerable when we began our survey on day 4 after disclosure still vulnerable almost six months later. However, unlike conventional vulnerabilities, which typically show a short, fast fixing phase, we observe a much flatter curve with fixing extending six months after the announcement. Second, we identify some predictive factors for the rate of upgrading. Third, we find that certificate authorities continued to issue certificates to servers with weak keys long after the vulnerability was disclosed. <s> BIB005 </s> A Survey on Security Metrics <s> What should we measure? <s> In this paper we present the results of a roleplay survey instrument administered to 1001 online survey respondents to study both the relationship between demographics and phishing susceptibility and the effectiveness of several anti-phishing educational materials. Our results suggest that women are more susceptible than men to phishing and participants between the ages of 18 and 25 are more susceptible to phishing than other age groups. We explain these demographic factors through a mediation analysis. Educational materials reduced users' tendency to enter information into phishing webpages by 40% percent; however, some of the educational materials we tested also slightly decreased participants' tendency to click on legitimate links. <s> BIB006 </s> A Survey on Security Metrics <s> What should we measure? <s> In this paper we attempt to determine the effectiveness of using entropy, as defined in NIST SP800-63, as a measurement of the security provided by various password creation policies. This is accomplished by modeling the success rate of current password cracking techniques against real user passwords. These data sets were collected from several different websites, the largest one containing over 32 million passwords. This focus on actual attack methodologies and real user passwords quite possibly makes this one of the largest studies on password security to date. In addition we examine what these results mean for standard password creation policies, such as minimum password length, and character set requirements. <s> BIB007 </s> A Survey on Security Metrics <s> What should we measure? 
<s> Measurement of software security is a long-standing challenge to the research community. At the same time, practical security metrics and measurements are essential for secure software development. Hence, the need for metrics is more pressing now due to a growing demand for secure software. In this paper, we propose using a software system's attack surface measurement as an indicator of the system's security. We formalize the notion of a system's attack surface and introduce an attack surface metric to measure the attack surface in a systematic manner. Our measurement method is agnostic to a software system's implementation language and is applicable to systems of all sizes; we demonstrate our method by measuring the attack surfaces of small desktop applications and large enterprise systems implemented in C and Java. We conducted three exploratory empirical studies to validate our method. Software developers can mitigate their software's security risk by measuring and reducing their software's attack surfaces. Our attack surface reduction approach complements the software industry's traditional code quality improvement approach for security risk mitigation and is useful in multiple phases of the software development lifecycle. Our collaboration with SAP demonstrates the use of our metric in the software development process. <s> BIB008 </s> A Survey on Security Metrics <s> What should we measure? <s> We report on the largest corpus of user-chosen passwords ever studied, consisting of anonymized password histograms representing almost 70 million Yahoo! users, mitigating privacy concerns while enabling analysis of dozens of subpopulations based on demographic factors and site usage characteristics. This large data set motivates a thorough statistical treatment of estimating guessing difficulty by sampling from a secret distribution. In place of previously used metrics such as Shannon entropy and guessing entropy, which cannot be estimated with any realistically sized sample, we develop partial guessing metrics including a new variant of guesswork parameterized by an attacker's desired success rate. Our new metric is comparatively easy to approximate and directly relevant for security engineering. By comparing password distributions with a uniform distribution which would provide equivalent security against different forms of guessing attack, we estimate that passwords provide fewer than 10 bits of security against an online, trawling attack, and only about 20 bits of security against an optimal offline dictionary attack. We find surprisingly little variation in guessing difficulty; every identifiable group of users generated a comparably weak password distribution. Security motivations such as the registration of a payment card have no greater impact than demographic factors such as age and nationality. Even proactive efforts to nudge users towards better password choices with graphical feedback make little difference. More surprisingly, even seemingly distant language communities choose the same weak passwords and an attacker never gains more than a factor of 2 efficiency gain by switching from the globally optimal dictionary to a population-specific lists. <s> BIB009 </s> A Survey on Security Metrics <s> What should we measure? <s> Text-based passwords remain the dominant authentication method in computer systems, despite significant advancement in attackers' capabilities to perform password cracking. 
In response to this threat, password composition policies have grown increasingly complex. However, there is insufficient research defining metrics to characterize password strength and using them to evaluate password-composition policies. In this paper, we analyze 12,000 passwords collected under seven composition policies via an online study. We develop an efficient distributed method for calculating how effectively several heuristic password-guessing algorithms guess passwords. Leveraging this method, we investigate (a) the resistance of passwords created under different conditions to guessing, (b) the performance of guessing algorithms under different training sets, (c) the relationship between passwords explicitly created under a given composition policy and other passwords that happen to meet the same requirements, and (d) the relationship between guessability, as measured with password-cracking algorithms, and entropy estimates. Our findings advance understanding of both password-composition policies and metrics for quantifying password security. <s> BIB010 </s> A Survey on Security Metrics <s> What should we measure? <s> We propose several possible metrics for measuring the strength of an individual password or any other secret drawn from a known, skewed distribution. In contrast to previous ad hoc approaches which rely on textual properties of passwords, we consider the problem without any knowledge of password structure. This enables rating the strength of a password given a large sample distribution without assuming anything about password semantics. We compare the results of our generic metrics against those of the NIST metrics and other previous "entropy-based" metrics for a large password dataset, which suggest over-fitting in previous metrics. <s> BIB011 </s> A Survey on Security Metrics <s> What should we measure? <s> The problem of insider threat is receiving increasing attention both within the computer science community as well as government and industry. This paper starts by presenting a broad, multidisciplinary survey of insider threat capturing contributions from computer scientists, psychologists, criminologists, and security practitioners. Subsequently, we present the behavioral analysis of insider threat (BAIT) framework, in which we conduct a detailed experiment involving 795 subjects on Amazon Mechanical Turk (AMT) in order to gauge the behaviors that real human subjects follow when attempting to exfiltrate data from within an organization. In the real world, the number of actual insiders found is very small, so supervised machine-learning methods encounter a challenge. Unlike past works, we develop bootstrapping algorithms that learn from highly imbalanced data, mostly unlabeled, and almost no history of user behavior from an insider threat perspective. We develop and evaluate seven algorithms using BAIT and show that they can produce a realistic (and acceptable) balance of precision and recall. <s> BIB012 </s> A Survey on Security Metrics <s> What should we measure? <s> Infrastructure as a Service (IaaS) cloud has been attracting more and more customers as it provides the highest level of flexibility by offering configurable virtual machines (VMs) and computing infrastructures. Public VM images are usually available for customers to customize and launch. However, the 1 to N mapping between VM images and running instances in IaaS makes vulnerabilities propagate rapidly across the entire public cloud.
Besides, IaaS cloud naturally comes with a larger and more stable attack surface and more concentrated target resources than traditional surroundings. In this paper, we first identify the threat of exploiting prevalent vulnerabilities over public IaaS cloud with an empirical study in Amazon EC2. We find that attackers can compromise a considerable number of VMs with trivial cost. We then do a qualitative cost-effectiveness analysis of this threat. Our main result is a two-fold observation: in IaaS cloud, exploiting prevalent vulnerabilities is much more cost-effective than traditional in-house computing environment, therefore attackers have stronger incentive; Fortunately, on the other hand, cloud defenders (cloud providers and customers) also have much lower cost-loss ratio than in traditional environment, therefore they can be more effective for defending attacks. We then build a game-theoretic model and conduct a risk-gain analysis to compare exploiting and patching strategies under cloud and traditional computing environments. Our modeling indicates that under cloud environment, both attack and defense become less cost-effective as time goes by, and the earlier actioner can be more rewarding. We propose countermeasures against such threat in order to bridge the gap between current security situation and defending mechanisms. To our best knowledge, we are the first to analyze and model the threat with prevalent known-vulnerabilities in public cloud. <s> BIB013 </s> A Survey on Security Metrics <s> What should we measure? <s> The Heartbleed vulnerability took the Internet by surprise in April 2014. The vulnerability, one of the most consequential since the advent of the commercial Internet, allowed attackers to remotely read protected memory from an estimated 24-55% of popular HTTPS sites. In this work, we perform a comprehensive, measurement-based analysis of the vulnerability's impact, including (1) tracking the vulnerable population, (2) monitoring patching behavior over time, (3) assessing the impact on the HTTPS certificate ecosystem, and (4) exposing real attacks that attempted to exploit the bug. Furthermore, we conduct a large-scale vulnerability notification experiment involving 150,000 hosts and observe a nearly 50% increase in patching by notified hosts. Drawing upon these analyses, we discuss what went well and what went poorly, in an effort to understand how the technical community can respond more effectively to such events in the future. <s> BIB014 </s> A Survey on Security Metrics <s> What should we measure? <s> Targeted attacks on civil society and nongovernmental organizations have gone underreported despite the fact that these organizations have been shown to be frequent targets of these attacks. In this paper, we shed light on targeted malware attacks faced by these organizations by studying malicious e-mails received by 10 civil society organizations (the majority of which are from groups related to China and Tibet issues) over a period of 4 years. Our study highlights important properties of malware threats faced by these organizations with implications on how these organizations defend themselves and how we quantify these threats. We find that the technical sophistication of malware we observe is fairly low, with more effort placed on socially engineering the e-mail content.
Based on this observation, we develop the Targeted Threat Index (TTI), a metric which incorporates both social engineering and technical sophistication when assessing the risk of malware threats. We demonstrate that this metric is more effective than simple technical sophistication for identifying malware threats with the highest potential to successfully compromise victims. We also discuss how education efforts focused on changing user behaviour can help prevent compromise. For two of the three Tibetan groups in our study simple steps such as avoiding the use of email attachments could cut document-based malware threats delivered through e-mail that we observed by up to 95%. <s> BIB015 </s> A Survey on Security Metrics <s> What should we measure? <s> Discussion of challenges and ways of improving Cyber Situational Awareness dominated our previous chapters. However, we have not yet touched on how to quantify any improvement we might achieve. Indeed, to get an accurate assessment of network security and provide sufficient Cyber Situational Awareness (CSA), simple but meaningful metrics—the focus of the Metrics of Security chapter—are necessary. The adage, "what can't be measured can't be effectively managed," applies here. Without good metrics and the corresponding evaluation methods, security analysts and network operators cannot accurately evaluate and measure the security status of their networks and the success of their operations. In particular, this chapter explores two distinct issues: (i) how to define and use metrics as quantitative characteristics to represent the security state of a network, and (ii) how to define and use metrics to measure CSA from a defender's point of view. <s> BIB016 </s> A Survey on Security Metrics <s> What should we measure? <s> Parameterized password guessability--how many guesses a particular cracking algorithm with particular training data would take to guess a password--has become a common metric of password security. Unlike statistical metrics, it aims to model real-world attackers and to provide per-password strength estimates. We investigate how cracking approaches often used by researchers compare to real-world cracking by professionals, as well as how the choice of approach biases research conclusions. We find that semi-automated cracking by professionals outperforms popular fully automated approaches, but can be approximated by combining multiple such approaches. These approaches are only effective, however, with careful configuration and tuning; in commonly used default configurations, they underestimate the real-world guessability of passwords. We find that analyses of large password sets are often robust to the algorithm used for guessing as long as it is configured effectively. However, cracking algorithms differ systematically in their effectiveness guessing passwords with certain common features (e.g., character substitutions). This has important implications for analyzing the security of specific password characteristics or of individual passwords (e.g., in a password meter or security audit). Our results highlight the danger of relying only on a single cracking algorithm as a measure of password strength and constitute the first scientific evidence that automated guessing can often approximate guessing by professionals. <s> BIB017 </s> A Survey on Security Metrics <s> What should we measure? <s> In recent years, the number of software vulnerabilities discovered has grown significantly.
This creates a need for prioritizing the response to new disclosures by assessing which vulnerabilities are likely to be exploited and by quickly ruling out the vulnerabilities that are not actually exploited in the real world. We conduct a quantitative and qualitative exploration of the vulnerability-related information disseminated on Twitter. We then describe the design of a Twitter-based exploit detector, and we introduce a threat model specific to our problem. In addition to response prioritization, our detection techniques have applications in risk modeling for cyber-insurance and they highlight the value of information provided by the victims of attacks. <s> BIB018 </s> A Survey on Security Metrics <s> What should we measure? <s> Vulnerability exploits remain an important mechanism for malware delivery, despite efforts to speed up the creation of patches and improvements in software updating mechanisms. Vulnerabilities in client applications (e.g., Browsers, multimedia players, document readers and editors) are often exploited in spear phishing attacks and are difficult to characterize using network vulnerability scanners. Analyzing their lifecycle requires observing the deployment of patches on hosts around the world. Using data collected over 5 years on 8.4 million hosts, available through Symantec's WINE platform, we present the first systematic study of patch deployment in client-side vulnerabilities. We analyze the patch deployment process of 1,593 vulnerabilities from 10 popular client applications, and we identify several new threats presented by multiple installations of the same program and by shared libraries distributed with several applications. For the 80 vulnerabilities in our dataset that affect code shared by two applications, the time between patch releases in the different applications is up to 118 days (with a median of 11 days). Furthermore, as the patching rates differ considerably among applications, many hosts patch the vulnerability in one application but not in the other one. We demonstrate two novel attacks that enable exploitation by invoking old versions of applications that are used infrequently, but remain installed. We also find that the median fraction of vulnerable hosts patched when exploits are released is at most 14%. Finally, we show that the patching rate is affected by user-specific and application-specific factors, for example, hosts belonging to security analysts and applications with an automated updating mechanism have significantly lower median times to patch. <s> BIB019 </s> A Survey on Security Metrics <s> What should we measure? <s> Rigorously characterizing the statistical properties of cyber attacks is an important problem. In this paper, we propose the first statistical framework for rigorously analyzing honeypot-captured cyber attack data. The framework is built on the novel concept of stochastic cyber attack process, a new kind of mathematical objects for describing cyber attacks. To demonstrate use of the framework, we apply it to analyze a low-interaction honeypot dataset, while noting that the framework can be equally applied to analyze high-interaction honeypot data that contains richer information about the attacks. The case study finds, for the first time, that long-range dependence (LRD) is exhibited by honeypot-captured cyber attacks. The case study confirms that by exploiting the statistical properties (LRD in this case), it is feasible to predict cyber attacks (at least in terms of attack rate) with good accuracy. 
This kind of prediction capability would provide sufficient early-warning time for defenders to adjust their defense configurations or resource allocations. The idea of “gray-box” (rather than “black-box”) prediction is central to the utility of the statistical framework, and represents a significant step towards ultimately understanding (the degree of) the predictability of cyber attacks. <s> BIB020 </s> A Survey on Security Metrics <s> What should we measure? <s> It is important to understand to what extent, and in what perspectives, cyber attacks can be predicted. Despite its evident importance, this problem was not investigated until very recently, when we proposed using the innovative methodology of gray-box prediction. This methodology advocates the use of gray-box models, which accommodate the statistical properties/phenomena exhibited by the data. Specifically, we showed that gray-box models that accommodate the long-range dependence phenomenon can predict the attack rate (i.e., the number of attacks per unit time) 1-h ahead-of-time with an accuracy of 70.2%–82.1%. To the best of our knowledge, this is the first result showing the feasibility of prediction in this domain. We observe that the prediction errors are partly caused by the models’ incapability in predicting the large attack rates, which are called extreme values in statistics. This motivates us to analyze the extreme-value phenomenon, using two complementary approaches: 1) the extreme value theory (EVT) and 2) the time series theory (TST). In this paper, we show that EVT can offer long-term predictions (e.g., 24-h ahead-of-time), while gray-box TST models can predict attack rates 1-h ahead-of-time with an accuracy of 86%–87.9%. We explore connections between the two approaches, and point out future research directions. Although our prediction study is based on specific cyber attack data, our methodology can be equally applied to analyze any cyber attack data of its kind. <s> BIB021
First, Table I shows that there are big gaps between the state-of-the-art metrics (i.e., the second column) and the desirable metrics (i.e., the third column). For example, we used the thickness of the blue bars and red arrows in Figures 1 and 2 to reflect the defense power and the attack power, respectively. However, the existing security metrics do not measure this intuitive thickness metric. This resonates with Pfleeger's observation [Pfleeger 2009] that metrics in the literature often correspond to "what can be easily measured" rather than "what needs to be measured," a fundamental problem that is largely open.

Table I. Summary of the taxonomy, the representative examples of security metrics systemized in the present paper, and the desirable metrics that are discussed in the text († indicates that the desirable metric is little understood; "avoidable via prudential engineering" means the attacks in question can be avoided by prudential engineering). Each entry lists the aspect measured, the representative metrics systemized in the paper, and the corresponding desirable metrics.

Measuring system vulnerabilities:
• Users' vulnerabilities. Representatives: user's susceptibility to phishing attacks BIB006, user's susceptibility to malware infection [Lalonde Levesque et al.]. Desirable: user's susceptibility to class(es) of attacks (e.g., social engineering) †.
• Password vulnerabilities. Representatives: parameterized/statistical password guessability BIB007 BIB009 BIB010 BIB017 BIB011, password entropy. Desirable: worst-case and average-case parameterized password guessability.
• System interface. Representatives: attack surface BIB008, exercised attack surface [Nayak et al.]. Desirable: interface-induced susceptibility †.
• Software vulnerabilities (spatial characteristics). Representatives: unpatched vulnerabilities [Center for Internet Security 2010], exploited vulnerabilities [Nayak et al.; Allodi], vulnerability prevalence BIB013. Desirable: vulnerability situation awareness †.
• Software vulnerabilities (temporal characteristics). Representatives: historical (exploited) vulnerability, future (exploited) vulnerability, tendency-to-be-exploited BIB018, vulnerability lifetime [Frei and Kristensen 2010; BIB019 BIB005 BIB014 BIB013].

Measuring threats:
• Threat landscape. Representatives: control-plane reputation [Konte et al.], early-detection time [Konte et al.], cybersecurity posture, sweep-time, attack rate BIB020 BIB021. Desirable: comprehensive cyber threat posture †.
• Zero-day attacks. Representatives: number of zero-day attacks [Corporation 2012], lifetime of zero-day attacks [Bilge and Dumitras], number of zero-day attack victims [Bilge and Dumitras]. Desirable: susceptibility of a computer to zero-day attacks †.
• Attack power, targeted attacks. Representatives: targeted threat index BIB015. Desirable: susceptibility to targeted attacks †.
• Attack power, botnet. Representatives: botnet size, botnet efficiency BIB003, botnet robustness BIB003. Desirable: botnet attack power †, botnet resilience with counter-countermeasures †.
• Attack power, malware spreading. Representatives: infection rate [Chen and Ji]. Desirable: attack power †, wasted scans †.
• Attack power, multi-stage attacks. Representatives: necessary defense, weakest adversary [Pamula et al.], attack paths BIB002 BIB016, k-zero-day-safety [Wang et al.], effort-to-security-failure BIB001.

In what follows we discuss the four categories systemized above and highlight what kinds of research are needed.

For measuring system vulnerabilities, we considered metrics for measuring users' vulnerabilities, password vulnerabilities, system interface-induced vulnerabilities, software vulnerabilities, and cryptographic key vulnerabilities. These classes of metrics appear to be complete. For example, the problem of insider threats could be treated by some users' vulnerability metrics, such as a user's susceptibility to insider threats. (The survey did not include security metrics for measuring insider threats, simply because there are no well-defined metrics of this kind despite the efforts BIB012 BIB004.) However, it is not clear what kind of formalism would be sufficient for reasoning about the completeness of metrics. A related open problem is: How can we define a metric that may be called the overall vulnerability of an enterprise or computer system, which reflects the system's overall susceptibility to attacks?

For measuring defense power, we considered metrics for measuring the effectiveness of blacklisting, the attack detection power, the effectiveness of ASLR, the effectiveness of assuring CFI, and the overall defense power. It is ideal that the overall defense power metric can accommodate the other kinds of metrics. An important open problem is: Can the study of the overall defense power metric help determine the completeness of the other classes of defense power metrics? For example, is there a formalism by which we can rigorously show that the overall defense power metric can or cannot be derived from the other kinds of metrics?

For measuring threats, we considered metrics for measuring the threat landscape, zero-day attacks, attack power, and obfuscation sophistication. The question is: How can we rigorously show that these metrics collectively reflect the intuitive metric that may be called the overall attack power?

For measuring situations, we considered metrics for measuring the global security state S(t) = (s_1(t), ..., s_n(t)), security incidents, and security investments. These metrics appear to be complete because the security state S(t) itself does not reflect the damage of security incidents, which may or may not be positively correlated with security investments. Nevertheless, it may be helpful to integrate these metrics and the metrics for measuring system vulnerabilities and defense power into a single category, which may more comprehensively reflect the situations. This is because, for example, a user's susceptibility to attacks may vary with time t. An important open problem is: How can the defense power metrics and the attack power metrics be unified into a single framework such that, for example, a single algorithm would allow us to compute the outcome (or consequence) of the interaction between an attacker of a certain attack power and a defender of a certain defense power? This would formalize the intuitive representation of attack power and defense power as highlighted in Table I (third column), namely the thickness of the blue bars and red arrows in Figures 1 and 2. Resolving this problem would immediately lead to a formal treatment of the arms race between the attacker and the defender, as exemplified by the discussions on the effectiveness of ASLR (Section 4.3) and of CFI.

Second, what would be the complete set of security metrics from which any useful security metric can be derived? The concept of completeness applies not only to the categories of security metrics, but also to the security metrics within each category. In order to shed light on this fundamental problem, let us look at the example of healthcare. In order to determine a person's health state, various kinds of blood tests are conducted to measure things such as glucose. The tests are subsequent to the medical research that discovered (for example) that glucose is reflective of a certain aspect of the human body's health state, which answers the "what to measure" question.
This example highlights that more research is needed to understand which security metric is reflective of which security attribute or property; otherwise, our understanding of security metrics will remain heuristic. Security metrics are difficult to measure in practice. For example, vulnerabilities are dynamically discovered, and the attacker may identify some zero-day vulnerabilities that are not known to the defender; the defender does not know for certain which exploits the attacker possesses; and there may be some attack incidents that are never detected by the defender. These indicate that uncertainty is inherent to the threat model the defender is confronted with. As a consequence, security metrics should often be treated as random variables rather than as numbers. This means that we should strive to characterize the distributions of the random variables representing security metrics, rather than only their means. Another source of uncertainty is measurement error, such as S(t) ≠ O(t) as illustrated in Figure 1. This further highlights the importance of treating security metrics as random variables.
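To make the random-variable view concrete, the following sketch (our own illustration, not taken from the survey) draws synthetic samples of a hypothetical "attack rate" metric from a heavy-tailed distribution and contrasts its mean with tail quantiles; the distribution and its parameters are made up for illustration.

```python
# A minimal sketch (not from the survey) of why a security metric is better
# treated as a random variable than as a single number. We model a
# hypothetical "attack rate" metric (attacks observed per hour) with a
# heavy-tailed distribution and compare the mean against tail quantiles.
import random
import statistics

random.seed(7)

# Hypothetical hourly attack counts: mostly calm hours, occasional bursts.
samples = [random.paretovariate(alpha=1.8) * 10 for _ in range(10_000)]

mean = statistics.fmean(samples)
median = statistics.median(samples)
q95 = statistics.quantiles(samples, n=100)[94]   # 95th percentile
q99 = statistics.quantiles(samples, n=100)[98]   # 99th percentile

print(f"mean attack rate     : {mean:7.1f} attacks/hour")
print(f"median attack rate   : {median:7.1f} attacks/hour")
print(f"95th percentile rate : {q95:7.1f} attacks/hour")
print(f"99th percentile rate : {q99:7.1f} attacks/hour")
# The mean alone hides the bursts that dominate defensive provisioning,
# which is why characterizing the full distribution matters.
```

With heavy-tailed attack data of this kind, the 99th-percentile rate sits far above the mean, which is exactly the information a single point estimate hides from the defender.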
A Survey on Security Metrics <s> What are the desirable properties of security metrics? <s> The Definitive Guide to Quantifying, Classifying, and Measuring Enterprise IT Security Operations. Security Metrics is the first comprehensive best-practice guide to defining, creating, and utilizing security metrics in the enterprise. Using sample charts, graphics, case studies, and war stories, Yankee Group Security Expert Andrew Jaquith demonstrates exactly how to establish effective metrics based on your organization's unique requirements. You'll discover how to quantify hard-to-measure security activities, compile and analyze all relevant data, identify strengths and weaknesses, set cost-effective priorities for improvement, and craft compelling messages for senior management. Security Metrics successfully bridges management's quantitative viewpoint with the nuts-and-bolts approach typically taken by security professionals. It brings together expert solutions drawn from Jaquith's extensive consulting work in the software, aerospace, and financial services industries, including new metrics presented nowhere else. You'll learn how to: · Replace nonstop crisis response with a systematic approach to security improvement · Understand the differences between "good" and "bad" metrics · Measure coverage and control, vulnerability management, password quality, patch latency, benchmark scoring, and business-adjusted risk · Quantify the effectiveness of security acquisition, implementation, and other program activities · Organize, aggregate, and analyze your data to bring out key insights · Use visualization to understand and communicate security issues more clearly · Capture valuable data from firewalls and antivirus logs, third-party auditor reports, and other resources · Implement balanced scorecards that present compact, holistic views of organizational security effectiveness. Whether you're an engineer or consultant responsible for security and reporting to management, or an executive who needs better information for decision-making, Security Metrics is the resource you have been searching for. Andrew Jaquith, program manager for Yankee Group's Security Solutions and Services Decision Service, advises enterprise clients on prioritizing and managing security resources. He also helps security vendors develop product, service, and go-to-market strategies for reaching enterprise customers. He co-founded @stake, Inc., a security consulting pioneer acquired by Symantec Corporation in 2004. His application security and metrics research has been featured in CIO, CSO, InformationWeek, IEEE Security and Privacy, and The Economist. Contents: Foreword; Preface; Acknowledgments; About the Author; Chapter 1: Introduction: Escaping the Hamster Wheel of Pain; Chapter 2: Defining Security Metrics; Chapter 3: Diagnosing Problems and Measuring Technical Security; Chapter 4: Measuring Program Effectiveness; Chapter 5: Analysis Techniques; Chapter 6: Visualization; Chapter 7: Automating Metrics Calculations; Chapter 8: Designing Security Scorecards; Index.
<s> BIB001 </s> A Survey on Security Metrics <s> What are the desirable properties of security metrics? <s> Abstract: The goal of this work is to introduce meaningful security metrics that motivate effective improvements in network security. We present a methodology for directly deriving security metrics from realistic mathematical models of adversarial behaviors and systems and also a maturity model to guide the adoption and use of these metrics. Four security metrics are described that assess the risk from prevalent network threats. These can be computed automatically and continuously on a network to assess the effectiveness of controls. Each new metric directly assesses the effect of controls that mitigate vulnerabilities, continuously estimates the risk from one adversary, and provides direct insight into what changes must be made to improve security. Details of an explicit maturity model are provided for each metric that guide security practitioners through three stages where they (1) develop foundational understanding, tools and procedures, (2) make accurate and timely measurements that cover all relevant network components and specify security conditions to test, and (3) perform continuous risk assessments and network improvements. Metrics are designed to address specific threats, maintain practicality and simplicity, and motivate risk reduction. These initial four metrics and additional ones we are developing should be added incrementally to a network to gradually improve overall security as scores drop to acceptable levels and the risks from associated cyber threats are mitigated. <s> BIB002 </s> A Survey on Security Metrics <s> What are the desirable properties of security metrics? <s> We argue that emergent behavior is inherent to cybersecurity. <s> BIB003
There have been proposals for characterizing "good" metrics, such as the following. From a conceptual perspective, a good metric should be easy to understand, not only for researchers but also for defense operators BIB002. From a measurement perspective, a good metric should be relatively easy to measure, consistently and repeatably BIB001. From a utility perspective, a good metric should allow both horizontal comparison between enterprise systems and temporal comparison (e.g., an enterprise system in the present year vs. the same enterprise system in the last year) [Department of State 2010; BIB002]. However, we also need to understand the mathematical properties security metrics should possess. These properties can help us differentiate the good metrics from the bad ones because, for example, we can conduct transformations between metrics. They may also ease the measurement process. To see this, let us look at the particular property of additivity, which can be understood from the following example. When we talk about the measurement of mass, we are actually seeking a mapping mass from A = (Objects, heavier-than, ∘) to B = (R⁺ ∪ {0}, >, +), where ∘ can be the "putting together" operation and R⁺ is the set of positive reals. Then, mass should satisfy: (i) for two objects a and b, if a heavier-than b, then mass(a) > mass(b); and (ii) for any objects a and b, mass(a ∘ b) = mass(a) + mass(b). Although condition (i) is relatively easy to achieve when measuring security, condition (ii), namely the additivity property, rarely holds in this domain. However, additivity is useful because it substantially eases the measurement operations. A partial explanation for the lack of additivity is that security exhibits emergent behavior BIB003. This suggests investigating whether there are some additivity-like properties that can help ease the measurement of security.
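As a small illustration of why property (ii) is special, the sketch below (ours, not from the survey) contrasts an additive metric with a typical non-additive one. It assumes two disjoint subsystems that are "put together" and, for the probability case, assumes independence and that compromising either subsystem compromises the whole.

```python
# A minimal sketch (our own illustration, not from the survey) contrasting an
# additive metric with a typical non-additive security metric. "Putting
# together" two disjoint subsystems adds their vulnerability counts, but the
# probability that the combined system is compromised is not the sum of the
# parts, illustrating why property (ii) rarely holds for security.

def combined_vuln_count(count_a: int, count_b: int) -> int:
    # Additive: count(a o b) = count(a) + count(b) for disjoint subsystems.
    return count_a + count_b

def combined_compromise_prob(p_a: float, p_b: float) -> float:
    # Assuming independent subsystems and that compromising either one
    # compromises the whole: P(a o b) = 1 - (1 - p_a)(1 - p_b).
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

print(combined_vuln_count(12, 7))            # 19 == 12 + 7   -> additive
print(combined_compromise_prob(0.3, 0.4))    # 0.58 != 0.3 + 0.4 -> not additive
```

The vulnerability count composes by simple addition, whereas the compromise probability composes as 1 - (1 - p_a)(1 - p_b), so any measurement procedure that relies on additivity would misestimate the latter.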
A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> In this paper we describe the current state of the DARPA packet radio network. Fully automated algorithms and protocols to organize, control, maintain, and move traffic through the packet radio network have been designed, implemented, and tested. By means of protocols, networks of about 50 packet radios with some degree of nodal mobility can be organized and maintained under a fully distributed mode of control. We have described the algorithms and illustrated how the PRNET provides highly reliable network transport and datagram service, by dynamically determining optimal routes, effectively controlling congestion, and fairly allocating the channel in the face of changing link conditions, mobility, and varying traffic loads. <s> BIB001 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> We present a loop-free, distributed routing protocol for mobile packet radio networks. The protocol is intended for use in networks where the rate of topological change is not so fast as to make “flooding” the only possible routing method, but not so slow as to make one of the existing protocols for a nearly-static topology applicable. The routing algorithm adapts asynchronously in a distributed fashion to arbitrary changes in topology in the absence of global topological knowledge. The protocol's uniqueness stems from its ability to maintain source-initiated, loop-free multipath routing only to desired destinations with minimal overhead in a randomly varying topology. The protocol's performance, measured in terms of end-to-end packet delay and throughput, is compared with that of pure flooding and an alternative algorithm which is well-suited to the high-rate topological change environment envisioned here. For each protocol, emphasis is placed on examining how these performance measures vary as a function of the rate of topological changes, network topology, and message traffic level. The results indicate the new protocol generally outperforms the alternative protocol at all rates of change for heavy traffic conditions, whereas the opposite is true for light traffic. Both protocols significantly outperform flooding for all rates of change except at ultra-high rates where all algorithms collapse. The network topology, whether dense or sparsely connected, is not seen to be a major factor in the relative performance of the algorithms. <s> BIB002 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> This paper presents a new, simple and bandwidth-efficient distributed routing protocol for ad-hoc mobile networks. Unlike the conventional distributed routing algorithms, our protocol does not attempt to consistently maintain routing information in every node. In an ad-hoc mobile network where mobile hosts are acting as routers and where routes are made inconsistent by mobile host movements, we employ a new associativity-based routing scheme where a route is selected based on nodes having associativity states that imply periods of stability. In this manner, the routes selected are likely to be long-lived and hence there is no need to restart frequently, resulting in higher attainable throughput. 
The association property also allows the integration of ad-hoc routing into a BS-oriented wireless LAN environment, providing fault tolerance in times of base station (BS) failures. The protocol is free from loops, deadlock and packet duplicates and has scalable memory requirements. Simulation results obtained reveal that shorter and better routes can be discovered during route re-constructions. <s> BIB003 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> We present a new distributed routing protocol for mobile, multihop, wireless networks. The protocol is one of a family of protocols which we term "link reversal" algorithms. The protocol's reaction is structured as a temporally-ordered sequence of diffusing computations; each computation consisting of a sequence of directed link reversals. The protocol is highly adaptive, efficient and scalable; being best-suited for use in large, dense, mobile networks. In these networks, the protocol's reaction to link failures typically involves only a localized "single pass" of the distributed algorithm. This capability is unique among protocols which are stable in the face of network partitions, and results in the protocol's high degree of adaptivity. This desirable behavior is achieved through the novel use of a "physical or logical clock" to establish the "temporal order" of topological change events which is used to structure (or order) the algorithm's reaction to topological changes. We refer to the protocol as the temporally-ordered routing algorithm (TORA). <s> BIB004 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> Ad hoc networks have no spatial hierarchy and suffer from frequent link failures which prevent mobile hosts from using traditional routing schemes. Under these conditions, mobile hosts must find routes to destinations without the use of designated routers and also must dynamically adapt the routes to the current link conditions. This article proposes a distributed adaptive routing protocol for finding and maintaining stable routes based on signal strength and location stability in an ad hoc network and presents an architecture for its implementation. Interoperability with mobile IP (Internet protocol) is discussed. <s> BIB005 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> We present a new hierarchical routing algorithm that combines the loop-free path-finding algorithm (LPA) with the area-based hierarchical routing scheme first proposed by McQuillan (1974) for distance-vector algorithms. The new algorithm, which we call the hierarchical information path-based routing (HIPR) algorithm, accommodates an arbitrary number of aggregation levels and can be viewed as a distributed version of Dijkstra's algorithm running over a hierarchical graph. The HIPR is verified to be loop-free and correct. Simulations are used to show that the HIPR is much more efficient than the OSPF in terms of speed, communication and processing overhead required to converge to correct routing tables. The HIPR constitutes the basis for future Internet routing protocols that are as simple as RIPv2, but with no looping and better performance than protocols based on link-states.
<s> BIB006 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> Shared tree multicast is a well established concept used in several multicast protocols for wireline networks (e.g. core base tree, PIM sparse mode etc). In this paper, we extend the shared tree concept to wireless, mobile, multihop networks for applications ranging from ad hoc networking to disaster recovery and battlefield. The main challenge in wireless, mobile networks is the rapidly changing environment. We address this issue in our design by: (a) using "soft state"; (b) assigning different roles to nodes depending on their mobility (two level mobility model); (c) proposing an adaptive scheme which combines shared tree and source tree benefits. A detailed wireless simulation model is used to evaluate the proposed schemes and compare them with source based tree (as opposed to shared tree) multicast. The results show that shared tree protocols have low overhead and are very robust to mobility. <s> BIB007 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing <s> An ad-hoc network is the cooperative engagement of a collection of mobile nodes without the required intervention of any centralized access point or existing infrastructure. We present Ad-hoc On Demand Distance Vector Routing (AODV), a novel algorithm for the operation of such ad-hoc networks. Each mobile host operates as a specialized router, and routes are obtained as needed (i.e., on-demand) with little or no reliance on periodic advertisements. Our new routing algorithm is quite suitable for a dynamic self starting network, as required by users wishing to utilize ad-hoc networks. AODV provides loop-free routes even while repairing broken links. Because the protocol does not require global periodic routing advertisements, the demand on the overall bandwidth available to the mobile nodes is substantially less than in those protocols that do necessitate such advertisements. Nevertheless we can still maintain most of the advantages of basic distance vector routing mechanisms. We show that our algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks. We also include an evaluation methodology and simulation results to verify the operation of our algorithm. <s> BIB008
A different approach from table-driven routing is source-initiated on-demand routing. This type of routing creates routes only when desired by the source node. When a node requires a route to a destination, it initiates a route discovery process within the network. This process is completed once a route is found or all possible route permutations have been examined. Once a route has been established, it is maintained by a route maintenance procedure until either the destination becomes inaccessible along every path from the source or until the route is no longer desired. Ad Hoc On-Demand Distance Vector Routing -The Ad Hoc On-Demand Distance Vector (AODV) routing protocol described in BIB008 builds on the DSDV algorithm previously described. AODV is an improvement on DSDV because it typically minimizes the number of required broadcasts by creating routes on a demand basis, as opposed to maintaining a complete list of routes as in the DSDV algorithm. The authors of AODV classify it as a pure on-demand route acquisition system, since nodes that are not on a selected path do not maintain routing information or participate in routing table exchanges BIB008 . When a source node desires to send a message to some destination node and does not already have a valid route to that destination, it initiates a path discovery process to locate the other node. It broadcasts a route request (RREQ) packet to its neighbors, which then forward the request to their neighbors, and so on, until either the destination or an intermediate node with a "fresh enough" route to the destination is located. Figure 3a illustrates the propagation of the broadcast RREQs across the network. AODV utilizes destination sequence numbers to ensure all routes are loop-free and contain the most recent route information. Each node maintains its own sequence number, as well as a broadcast ID. The broadcast ID is incremented for every RREQ the node initiates, and together with the node's IP address, uniquely identifies an RREQ. Along with its own sequence number and the broadcast ID, the source node includes in the RREQ the most recent sequence number it has for the destination. Intermediate nodes can reply to the RREQ only if they have a route to the destination whose corresponding destination sequence number is greater than or equal to that contained in the RREQ. During the process of forwarding the RREQ, intermediate nodes record in their route tables the address of the neighbor from which the first copy of the broadcast packet is received, thereby establishing a reverse path. If additional copies of the same RREQ are later received, these packets are discarded. Once the RREQ reaches the destination or an intermediate node with a fresh enough route, the destination/intermediate node responds by unicasting a route reply (RREP) packet back to the neighbor from which it first received the RREQ (Fig. 3b) . As the RREP is routed back along the reverse path, nodes along this path set up forward route entries in their route tables which point to the node from which the RREP came. These forward route entries indicate the active forward route. Associated with each route entry is a route timer which will cause the deletion of the entry if it is not used within the specified lifetime. Because the RREP is forwarded along the path established by the RREQ, AODV only supports the use of symmetric links. Routes are maintained as follows. 
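Before the route maintenance discussion that follows, here is a minimal sketch (ours, not the AODV specification) of the RREQ/RREP cycle described above: the RREQ floods breadth-first over a made-up five-node topology, each node records the neighbor from which it first heard the RREQ (the reverse path), and the RREP walks that path back to install the forward route. Destination sequence numbers, broadcast IDs, route timers, and intermediate-node replies are omitted.

```python
# A simplified, illustrative sketch (not the AODV specification) of the
# RREQ/RREP cycle: an RREQ floods hop by hop, each node remembers the
# neighbor it first heard the RREQ from (the reverse path), and the RREP
# is unicast back along that path, installing the forward route.
from collections import deque

# Hypothetical topology: node -> neighbors (symmetric links assumed).
TOPOLOGY = {
    "S": ["A", "B"],
    "A": ["S", "C"],
    "B": ["S", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def aodv_route_discovery(source: str, dest: str):
    reverse_next_hop = {}          # node -> neighbor toward the source
    visited = {source}
    queue = deque([source])
    while queue:                   # breadth-first flood of the RREQ
        node = queue.popleft()
        if node == dest:
            break
        for neigh in TOPOLOGY[node]:
            if neigh not in visited:       # duplicate RREQs are discarded
                visited.add(neigh)
                reverse_next_hop[neigh] = node
                queue.append(neigh)
    if dest not in reverse_next_hop:
        return None
    # The RREP travels the reverse path; each hop installs a forward route.
    forward_route, node = [dest], dest
    while node != source:
        node = reverse_next_hop[node]
        forward_route.append(node)
    return list(reversed(forward_route))

print(aodv_route_discovery("S", "D"))   # e.g. ['S', 'B', 'D']
```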
If a source node moves, it is able to reinitiate the route discovery protocol to find a new route to the destination. If a node along the route moves, its upstream neighbor notices the move and propagates a link failure notification message (an RREP with infinite metric) to each of its active upstream neighbors to inform them of the erasure of that part of the route BIB008. These nodes in turn propagate the link failure notification to their upstream neighbors, and so on until the source node is reached. The source node may then choose to reinitiate route discovery for that destination if a route is still desired. An additional aspect of the protocol is the use of hello messages, periodic local broadcasts by a node to inform each mobile node of other nodes in its neighborhood. Hello messages can be used to maintain the local connectivity of a node. However, the use of hello messages is not required. Nodes listen for retransmission of data packets to ensure that the next hop is still within reach. If such a retransmission is not heard, the node may use any one of a number of techniques, including the reception of hello messages, to determine whether the next hop is within communication range. The hello messages may list the other nodes from which a mobile has heard, thereby yielding greater knowledge of network connectivity.

Dynamic Source Routing -The Dynamic Source Routing (DSR) protocol is an on-demand routing protocol that is based on the concept of source routing. Mobile nodes are required to maintain route caches that contain the source routes of which the mobile is aware. Entries in the route cache are continually updated as new routes are learned. The protocol consists of two major phases: route discovery and route maintenance. When a mobile node has a packet to send to some destination, it first consults its route cache to determine whether it already has a route to the destination. If it has an unexpired route to the destination, it will use this route to send the packet. On the other hand, if the node does not have such a route, it initiates route discovery by broadcasting a route request packet. This route request contains the address of the destination, along with the source node's address and a unique identification number. Each node receiving the packet checks whether it knows of a route to the destination. If it does not, it adds its own address to the route record of the packet and then forwards the packet along its outgoing links. To limit the number of route requests propagated on the outgoing links of a node, a mobile only forwards the route request if the request has not yet been seen by the mobile and if the mobile's address does not already appear in the route record. A route reply is generated when the route request reaches either the destination itself, or an intermediate node which contains in its route cache an unexpired route to the destination. By the time the packet reaches either the destination or such an intermediate node, it contains a route record yielding the sequence of hops taken. Figure 4a illustrates the formation of the route record as the route request propagates through the network. If the node generating the route reply is the destination, it places the route record contained in the route request into the route reply. If the responding node is an intermediate node, it will append its cached route to the route record and then generate the route reply.
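The following toy sketch (ours, not the DSR specification) shows how the route record grows hop by hop until it reaches the destination, which then has a complete source route to return. It uses a depth-first walk of a made-up topology as a stand-in for the broadcast flood and ignores route caches, request identifiers, and route expiration.

```python
# A toy sketch (ours, not the DSR specification) of how the route record in a
# DSR route request grows hop by hop and is returned in the route reply. A
# depth-first walk stands in for the broadcast flood; the first route found
# is returned.
TOPOLOGY = {
    "S": ["A", "B"],
    "A": ["S", "C"],
    "B": ["S", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def dsr_route_discovery(node, dest, route_record):
    route_record = route_record + [node]     # append own address, then forward
    if node == dest:
        return route_record                  # reply carries the full record
    for neigh in TOPOLOGY[node]:
        if neigh in route_record:            # don't forward if already listed
            continue
        found = dsr_route_discovery(neigh, dest, route_record)
        if found:
            return found
    return None

print(dsr_route_discovery("S", "D", []))     # e.g. ['S', 'A', 'C', 'D']
```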
To return the route reply, the responding node must have a route to the initiator. If it has a route to the initiator in its route cache, it may use that route. Otherwise, if symmetric links are supported, the node may reverse the route in the route record. If symmetric links are not supported, the node may initiate its own route discovery and piggyback the route reply on the new route request. Figure 4b shows the transmission of the route reply with its associated route record back to the source node. Route maintenance is accomplished through the use of route error packets and acknowledgments. Route error packets are generated at a node when the data link layer encounters a fatal transmission problem. When a route error packet is received, the hop in error is removed from the node's route cache and all routes containing the hop are truncated at that point. In addition to route error messages, acknowledgments are used to verify the correct operation of the route links. Such acknowledgments include passive acknowledgments, where a mobile is able to hear the next hop forwarding the packet along the route.

Temporally Ordered Routing Algorithm -The Temporally Ordered Routing Algorithm (TORA) is a highly adaptive loop-free distributed routing algorithm based on the concept of link reversal BIB004. TORA is proposed to operate in a highly dynamic mobile networking environment. It is source-initiated and provides multiple routes for any desired source/destination pair. The key design concept of TORA is the localization of control messages to a very small set of nodes near the occurrence of a topological change. To accomplish this, nodes need to maintain routing information about adjacent (one-hop) nodes. The protocol performs three basic functions:
• Route creation
• Route maintenance
• Route erasure
During the route creation and maintenance phases, nodes use a "height" metric to establish a directed acyclic graph (DAG) rooted at the destination. Thereafter, links are assigned a direction (upstream or downstream) based on the relative height metric of neighboring nodes, as shown in Fig. 5a. This process of establishing a DAG is similar to the query/reply process proposed in Lightweight Mobile Routing (LMR) BIB002. In times of node mobility the DAG route is broken, and route maintenance is necessary to reestablish a DAG rooted at the same destination. As shown in Fig. 5b, upon failure of the last downstream link, a node generates a new reference level which results in the propagation of that reference level by neighboring nodes, effectively coordinating a structured reaction to the failure. Links are reversed to reflect the change in adapting to the new reference level. This has the same effect as reversing the direction of one or more links when a node has no downstream links. Timing is an important factor for TORA because the "height" metric is dependent on the logical time of a link failure; TORA assumes that all nodes have synchronized clocks (accomplished via an external time source such as the Global Positioning System). TORA's metric is a quintuple comprising five elements, namely:
• Logical time of a link failure
• The unique ID of the node that defined the new reference level
• A reflection indicator bit
• A propagation ordering parameter
• The unique ID of the node
The first three elements collectively represent the reference level. A new reference level is defined each time a node loses its last downstream link due to a link failure.
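The sketch below (ours, a simplification of TORA's full quintuple height) illustrates how relative heights orient every link from the higher node to the lower node, yielding a DAG rooted at the destination; the height values are made up and assumed distinct across each link.

```python
# An illustrative sketch (not the full TORA quintuple) of how relative
# "heights" induce a destination-rooted DAG: every link points from the
# higher node to the lower node, and the destination has the lowest height.
HEIGHTS = {"DEST": 0, "A": 1, "B": 2, "C": 2, "D": 3}   # hypothetical values
LINKS = [("A", "DEST"), ("B", "A"), ("C", "A"), ("D", "B"), ("D", "C")]

def directed_links(links, heights):
    # Orient each link downstream, i.e., from higher height to lower height.
    return [(u, v) if heights[u] > heights[v] else (v, u) for u, v in links]

for upstream, downstream in directed_links(LINKS, HEIGHTS):
    print(f"{upstream} -> {downstream}")
# If a node loses its last downstream link, TORA assigns it a new (higher)
# reference level, which reverses the orientation of its remaining links.
```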
TORA's route erasure phase essentially involves flooding a broadcast clear packet (CLR) throughout the network to erase invalid routes. In TORA there is a potential for oscillations to occur, especially when multiple sets of coordinating nodes are concurrently detecting partitions, erasing routes, and building new routes based on each other. Because TORA uses internodal coordination, its instability problem is similar to the "count-to-infinity" problem in distance-vector routing protocols, except that such oscillations are temporary and route convergence will ultimately occur.

Associativity-Based Routing -A totally different approach in mobile routing is proposed in BIB003. The Associativity-Based Routing (ABR) protocol is free from loops, deadlock, and packet duplicates, and defines a new routing metric for ad hoc mobile networks. This metric is known as the degree of association stability. In ABR, a route is selected based on the degree of association stability of mobile nodes. Each node periodically generates a beacon to signify its existence. When received by neighboring nodes, this beacon causes their associativity tables to be updated. For each beacon received, the associativity tick of the current node with respect to the beaconing node is incremented. Association stability is defined by connection stability of one node with respect to another node over time and space. A high degree of association stability may indicate a low state of node mobility, while a low degree may indicate a high state of node mobility. Associativity ticks are reset when the neighbors of a node or the node itself move out of proximity. A fundamental objective of ABR is to derive longer-lived routes for ad hoc mobile networks. The three phases of ABR are:
• Route discovery
• Route reconstruction (RRC)
• Route deletion
The route discovery phase is accomplished by a broadcast query and await-reply (BQ-REPLY) cycle. A node desiring a route broadcasts a BQ message in search of mobiles that have a route to the destination. All nodes receiving the query (that are not the destination) append their addresses and their associativity ticks with their neighbors along with QoS information to the query packet. A successor node erases its upstream node neighbors' associativity tick entries and retains only the entry concerned with itself and its upstream node. In this way, each resultant packet arriving at the destination will contain the associativity ticks of the nodes along the route to the destination. The destination is then able to select the best route by examining the associativity ticks along each of the paths. When multiple paths have the same overall degree of association stability, the route with the minimum number of hops is selected. The destination then sends a REPLY packet back to the source along this path. Nodes propagating the REPLY mark their routes as valid. All other routes remain inactive, and the possibility of duplicate packets arriving at the destination is avoided. RRC may consist of partial route discovery, invalid route erasure, valid route updates, and new route discovery, depending on which node(s) along the route move. Movement by the source results in a new BQ-REPLY process, as shown in Fig. 6a. The RN[1] message is a route notification used to erase the route entries associated with downstream nodes. When the destination moves, the immediate upstream node erases its route and determines if the node is still reachable by a localized query (LQ[H]) process, where H refers to the hop count from the upstream node to the destination (Fig. 6b).
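To illustrate the route selection by association stability described above, here is a simplified sketch (ours, not the ABR specification): each candidate path carries the associativity ticks collected along its links, and the destination prefers the path with the most stable links, breaking ties by hop count. The stability threshold and tick values are made up for illustration.

```python
# A simplified sketch (ours) of ABR's route selection idea: each candidate
# route carries the associativity ticks collected along its hops, and the
# destination prefers the route with the greatest overall association
# stability, falling back to hop count to break ties. The threshold and tick
# values below are made up for illustration.
STABILITY_THRESHOLD = 5   # ticks above this suggest a relatively stable link

def route_stability(ticks):
    # Count how many links on the route appear stable.
    return sum(1 for t in ticks if t >= STABILITY_THRESHOLD)

def select_route(candidates):
    # candidates: list of (path, per-link associativity ticks)
    return max(candidates, key=lambda c: (route_stability(c[1]), -len(c[0])))

candidates = [
    (["S", "A", "B", "DEST"], [9, 8, 7]),    # longer but stable
    (["S", "C", "DEST"], [2, 3]),            # shorter but freshly formed links
]
best_path, best_ticks = select_route(candidates)
print(best_path)    # ['S', 'A', 'B', 'DEST']
```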
When a discovered route is no longer desired, the source node initiates a route delete (RD) broadcast so that all nodes along the route update their routing tables. The RD message is propagated by a full broadcast, as opposed to a directed broadcast, because the source node may not be aware of any route node changes that occurred during RRCs.

Signal Stability Routing -Another on-demand protocol is the Signal Stability-Based Adaptive Routing protocol (SSR) presented in BIB005. Unlike the algorithms described so far, SSR selects routes based on the signal strength between nodes and a node's location stability. This route selection criterion has the effect of choosing routes that have "stronger" connectivity. SSR can be divided into two cooperative protocols: the Dynamic Routing Protocol (DRP) and the Static Routing Protocol (SRP). The DRP is responsible for the maintenance of the Signal Stability Table (SST) and Routing Table (RT). The SST records the signal strength of neighboring nodes, which is obtained by periodic beacons from the link layer of each neighboring node. Signal strength may be recorded as either a strong or weak channel. All transmissions are received by, and processed in, the DRP. After updating all appropriate table entries, the DRP passes a received packet to the SRP. The SRP processes packets by passing the packet up the stack if it is the intended receiver or looking up the destination in the RT and then forwarding the packet if it is not. If no entry is found in the RT for the destination, a route-search process is initiated to find a route. Route requests are propagated throughout the network, but are only forwarded to the next hop if they are received over strong channels and have not been previously processed (to prevent looping). The destination chooses the first arriving route-search packet to send back because it is most probable that the packet arrived over the shortest and/or least congested path. The DRP then reverses the selected route and sends a route-reply message back to the initiator. The DRP of the nodes along the path update their RTs accordingly. Route-search packets arriving at the destination have necessarily chosen the path of strongest signal stability, since the packets are dropped at a node if they have arrived over a weak channel. If there is no route-reply message received at the source within a specific timeout period, the source changes the PREF field in the header to indicate that weak channels are acceptable, since these may be the only links over which the packet can be propagated. When a failed link is detected within the network, the intermediate nodes send an error message to the source indicating which channel has failed. The source then initiates another route-search process to find a new path to the destination. The source also sends an erase message to notify all nodes of the broken link.

Abbreviations used in Tables 1 and 2: N = number of nodes in the network; d = network diameter; h = height of the routing tree; x = number of nodes affected by a topological change.
* While WRP uses flat addressing, it can be used hierarchically BIB006.
** The protocol itself currently does not support multicast; however, there is a separate protocol described in BIB007, which runs on top of CGSR and provides multicast capability.
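Returning to SSR, the toy sketch below (ours, not the SSR specification) captures its preference for strong channels: a route-search packet is forwarded only over links recorded as strong in the SST, so the first packet to reach the destination has traversed strong links only. The link labels are made up, and the fallback to weak channels is only hinted at.

```python
# A toy sketch (ours, not the SSR specification) of SSR's preference for
# routes made of "strong" channels: a route-search packet is only forwarded
# over links whose received signal strength is classified as strong, so any
# route that reaches the destination consists of strong links only.
from collections import deque

# Hypothetical links with a signal-strength label kept in the SST.
LINKS = {
    ("S", "A"): "strong", ("A", "D"): "strong",
    ("S", "B"): "strong", ("B", "D"): "weak",
}

def neighbors(node):
    for (u, v), strength in LINKS.items():
        if u == node:
            yield v, strength
        elif v == node:
            yield u, strength

def ssr_search(source, dest, accept="strong"):
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path                      # first arrival wins
        for neigh, strength in neighbors(path[-1]):
            if strength == accept and neigh not in path:
                queue.append(path + [neigh])
    return None                              # would be retried accepting weak links

print(ssr_search("S", "D"))                  # ['S', 'A', 'D']
```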
A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Comparisons <s> We present a loop-free, distributed routing protocol for mobile packet radio networks. The protocol is intended for use in networks where the rate of topological change is not so fast as to make “flooding” the only possible routing method, but not so slow as to make one of the existing protocols for a nearly-static topology applicable. The routing algorithm adapts asynchronously in a distributed fashion to arbitrary changes in topology in the absence of global topological knowledge. The protocol's uniqueness stems from its ability to maintain source-initiated, loop-free multipath routing only to desired destinations with minimal overhead in a randomly varying topology. The protocol's performance, measured in terms of end-to-end packet delay and throughput, is compared with that of pure flooding and an alternative algorithm which is well-suited to the high-rate topological change environment envisioned here. For each protocol, emphasis is placed on examining how these performance measures vary as a function of the rate of topological changes, network topology, and message traffic level. The results indicate the new protocol generally outperforms the alternative protocol at all rates of change for heavy traffic conditions, whereas the opposite is true for light traffic. Both protocols significantly outperform flooding for all rates of change except at ultra-high rates where all algorithms collapse. The network topology, whether dense or sparsely connected, is not seen to be a major factor in the relative performance of the algorithms. <s> BIB001 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Comparisons <s> This paper presents a new, simple and bandwidth-efficient distributed routing protocol to support mobile computing in a conference size ad-hoc mobile network environment. Unlike the conventional approaches such as link-state and distance-vector distributed routing algorithms, our protocol does not attempt to consistently maintain routing information in every node. In an ad-hoc mobile network where mobile hosts (MHs) are acting as routers and where routes are made inconsistent by MHs‘ movement, we employ an associativity-based routing scheme where a route is selected based on nodes having associativity states that imply periods of stability. In this manner, the routes selected are likely to be long-lived and hence there is no need to restart frequently, resulting in higher attainable throughput. Route requests are broadcast on a per need basis. The association property also allows the integration of ad-hoc routing into a BS-oriented Wireless LAN (WLAN) environment, providing the fault tolerance in times of base stations (BSs) failures. To discover shorter routes and to shorten the route recovery time when the association property is violated, the localised-query and quick-abort mechanisms are respectively incorporated into the protocol. To further increase cell capacity and lower transmission power requirements, a dynamic cell size adjustment scheme is introduced. The protocol is free from loops, deadlock and packet duplicates and has scalable memory requirements. Simulation results obtained reveal that shorter and better routes can be discovered during route re-constructions. <s> BIB002
The following sections provide comparisons of the previously described routing algorithms. The next section compares table-driven protocols, and another section compares on-demand protocols. A later section presents a discussion of the two classes of algorithms. In Tables 1 and 2, time complexity is defined as the number of steps needed to perform a protocol operation, and communication complexity is the number of messages needed to perform a protocol operation BIB001 BIB002. Also, the values for these metrics represent worst-case behavior.
A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing Protocols <s> We present a new distributed routing protocol for mobile, multihop, wireless networks. The protocol is one of a family of protocols which we term "link reversal" algorithms. The protocol's reaction is structured as a temporally-ordered sequence of diffusing computations; each computation consisting of a sequence of directed link reversals. The protocol is highly adaptive, efficient and scalable; being best-suited for use in large, dense, mobile networks. In these networks, the protocol's reaction to link failures typically involves only a localized "single pass" of the distributed algorithm. This capability is unique among protocols which are stable in the face of network partitions, and results in the protocol's high degree of adaptivity. This desirable behavior is achieved through the novel use of a "physical or logical clock" to establish the "temporal order" of topological change events which is used to structure (or order) the algorithm's reaction to topological changes. We refer to the protocol as the temporally-ordered routing algorithm (TORA). <s> BIB001 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Source-Initiated On-Demand Routing Protocols <s> In this paper, we present a multicast protocol which is built upon the temporally-ordered routing algorithm (TORA). The protocol-termed the lightweight adaptive multicast (LAM) routing algorithm-is designed for use in a Mobile Ad hoc NETwork (MANET) and, conceptually, can be thought of as an integration of the CORE based tree (CBT) multicast routing protocol and TORA. The direct coupling of LAM and TORA increases reaction efficiency (lowering protocol control overhead) as the new protocol can benefit from TORA's mechanisms while reacting to topological changes. Also during periods of stable topology and constant group membership, the LAM protocol does not introduce any additional overhead because it does not require timer-based messaging during its execution. <s> BIB002
AODV employs a route discovery procedure similar to DSR; however, there are a couple of important distinctions. The most notable of these is that the overhead of DSR is potentially larger than that of AODV, since each DSR packet must carry full routing information, whereas in AODV packets need only contain the destination address. Similarly, the route replies in DSR are larger because they contain the address of every node along the route, whereas in AODV route replies need only carry the destination IP address and sequence number. Also, the memory overhead may be slightly greater in DSR because of the need to remember full routes, as opposed to only next-hop information in AODV. A further advantage of AODV is its support for multicast. None of the other algorithms considered in this article currently incorporate multicast communication. On the downside, AODV requires symmetric links between nodes, and hence cannot utilize routes with asymmetric links. In this aspect DSR is superior, since it does not require the use of such links and can utilize asymmetric links when symmetric links are not available.

The DSR algorithm is intended for networks in which the mobiles move at moderate speed with respect to packet transmission latency. Assumptions the algorithm makes for operation are that the network diameter is relatively small and that the mobile nodes can enable a promiscuous receive mode, whereby every received packet is delivered to the network driver software without filtering by destination address. An advantage of DSR over some of the other on-demand protocols is that DSR does not make use of periodic routing advertisements, thereby saving bandwidth and reducing power consumption. Hence, the protocol does not incur any overhead when there are no changes in network topology. Additionally, DSR allows nodes to keep multiple routes to a destination in their cache. Hence, when a link on a route is broken, the source node can check its cache for another valid route. If such a route is found, route reconstruction does not need to be reinvoked. In this case, route recovery is faster than in many of the other on-demand protocols. However, if there are no additional routes to the destination in the source node's cache, route discovery must be reinitiated, as in AODV, if the route is still required. On the other hand, because of the small-diameter assumption and the source routing requirement, DSR is not scalable to large networks. Furthermore, as previously stated, the need to place the entire route in both route replies and data packets causes greater control overhead than in AODV.

TORA is a "link reversal" algorithm that is best suited for networks with large, dense populations of nodes BIB001. Part of the novelty of TORA stems from its creation of DAGs to aid route establishment. One of the advantages of TORA is its support for multiple routes. TORA and DSR are the only on-demand protocols considered here which retain multiple route possibilities for a single source/destination pair. Route reconstruction is not necessary until all known routes to a destination are deemed invalid, and hence bandwidth can potentially be conserved because of the need for fewer route rebuildings. Another advantage of TORA is its support for multicast. Although, unlike AODV, TORA does not incorporate multicast into its basic operation, it functions as the underlying protocol for the Lightweight Adaptive Multicast Algorithm (LAM), and together the two protocols provide multicast capability BIB002.
TORA's reliance on synchronized clocks, while a novel idea, inherently limits its applicability. If a node does not have GPS or some other external time source, it cannot use the algorithm. Additionally, if the external time source fails, the algorithm will cease to operate. Furthermore, route rebuilding in TORA may not occur as quickly as in the other algorithms due to the potential for oscillations during this period. This can lead to potentially lengthy delays while waiting for the new routes to be determined.

ABR is a compromise between broadcast and point-to-point routing, and uses the connection-oriented packet forwarding approach. Route selection is primarily based on the aggregated associativity ticks of nodes along the path. Hence, although the resulting path does not necessarily have the smallest possible number of hops, it tends to be longer-lived than other routes. A long-lived route requires fewer route reconstructions and therefore yields higher throughput. Another benefit of ABR is that, like the other protocols, it is guaranteed to be free of packet duplicates. The reason is that only the best route is marked valid, while all other possible routes remain passive. ABR, however, relies on the fact that each node beacons periodically. The beaconing interval must be short enough to accurately reflect the spatial, temporal, and connectivity state of the mobile hosts. This beaconing requirement may result in additional power consumption. However, experimental results reveal that the inclusion of periodic beaconing has a minute influence on the overall battery power consumption. Unlike DSR, ABR does not utilize route caches.

The SSR algorithm is a logical descendant of ABR. It utilizes a new technique of selecting routes based on the signal strength and location stability of nodes along the path. As in ABR, while the paths selected by this algorithm are not necessarily shortest in hop count, they do tend to be more stable and longer-lived, resulting in fewer route reconstructions. One of the major drawbacks of the SSR protocol is that, unlike in AODV and DSR, intermediate nodes cannot reply to route requests sent toward a destination; this results in potentially long delays before a route can be discovered. Additionally, when a link failure occurs along a path, the route discovery algorithm must be reinvoked from the source to find a new path to the destination. No attempt is made to use partial route recovery (unlike ABR), that is, to allow intermediate nodes to attempt to rebuild the route themselves. AODV and DSR also do not specify intermediate node rebuilding. While this may lead to longer route reconstruction times, since link failures cannot be resolved locally without the intervention of the source node, the attempt and failure of an intermediate node to rebuild a route will cause a longer delay than if the source node had attempted the rebuilding as soon as the broken link was noticed. Thus, it remains to be seen whether intermediate node route rebuilding is preferable to source node route rebuilding.
A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Applications and Challenges <s> In this paper we present a case for using new power-aware metrics for determining routes in wireless ad hoc networks. We present five different metrics based on battery power consumption at nodes. We show that using these metrics in a shortest-cost routing algorithm reduces the cost/packet of routing packets by 5-30% over shortest-hop routing (this cost reduction is on top of a 40-70% reduction in energy consumption obtained by using PAMAS, our MAC layer protocol). Furthermore, using these new metrics ensures that the mean time to node failure is increased significantly. An interesting property of using shortest-cost routing is that packet delays do not increase. Finally, we note that our new metrics can be used in most traditional routing protocols for ad hoc networks. <s> BIB001 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Applications and Challenges <s> Tree multicast is a well established concept in wired networks. Two versions, per-source tree multicast (e.g., DVMRP) and shared tree multicast (e.g., Core Based Tree), account for the majority of the wireline implementations. In this paper, we extend the tree multicast concept to wireless, mobile, multihop networks for applications ranging from ad hoc networking to disaster recovery and battlefield. The main challenge in wireless, mobile networks is the rapidly changing environment. We address this issue in our design by: (a) using "soft state"; (b) assigning different roles to nodes depending on their mobility (2-level mobility model); (c) proposing an adaptive scheme which combines shared tree and per-source tree benefits, and (d) dynamically relocating the shared tree Rendezvous Point (RP). A detailed wireless simulation model is used to evaluate various multicast schemes. The results show that per-source trees perform better in heavy loads because of the more efficient traffic distribution; while shared trees are more robust to mobility and are more scalable to large network sizes. The adaptive tree multicast scheme, a hybrid between shared tree and per-source tree, combines the advantages of both and performs consistently well across all load and mobility scenarios. The main contributions of this study are: the use of a 2-level mobility model to improve the stability of the shared tree, the development of a hybrid, adaptive per-source and shared tree scheme, and the dynamic relocation of the RP in the shared tree. <s> BIB002 </s> A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks <s> Applications and Challenges <s> A mobile ad hoc network consists of wireless hosts that may move often. Movement of hosts results in a change in routes, requiring some mechanism for determining new routes. Several routing protocols have already been proposed for ad hoc networks. This paper suggests an approach to utilize location information (for instance, obtained using the global positioning system) to improve performance of routing protocols for ad hoc networks. By using location information, the proposed Location-Aided Routing (LAR) protocols limit the search for a new route to a smaller "request zone" of the ad hoc network. This results in a significant reduction in the number of routing messages. We present two algorithms to determine the request zone, and also suggest potential optimizations to our algorithms. <s> BIB003
Akin to packet radio networks, ad hoc wireless networks have an important role to play in military applications. Soldiers equipped with multimode mobile communicators can now communicate in an ad hoc manner without the need for fixed wireless base stations. In addition, small vehicular devices equipped with audio sensors and cameras can be deployed at targeted regions to collect important location and environmental information, which will be communicated back to a processing node via ad hoc mobile communications. Ship-to-ship ad hoc mobile communication is also desirable since it provides alternate communication paths without reliance on ground- or space-based communication infrastructures.

Commercial scenarios for ad hoc wireless networks include:
• Conferences/meetings/lectures
• Emergency services
• Law enforcement

People today attend meetings and conferences with their laptops, palmtops, and notebooks. It is therefore attractive to have instant network formation, in addition to file and information sharing, without the presence of fixed base stations and systems administrators. A presenter can multicast slides and audio to intended recipients. Attendees can ask questions and interact on a commonly shared whiteboard. Ad hoc mobile communication is particularly useful in relaying information (status, situation awareness, etc.) via data, video, and/or voice from one rescue team member to another over a small handheld or wearable wireless device. Again, this applies to law enforcement personnel as well.

Current challenges for ad hoc wireless networks include:
• Multicast BIB002
• QoS support
• Power-aware routing BIB001
• Location-aided routing BIB003

As mentioned above, multicast is desirable to support multiparty wireless communications. Since the multicast tree is no longer static (i.e., its topology is subject to change over time), the multicast routing protocol must be able to cope with mobility, including multicast membership dynamics (e.g., leave and join). In terms of QoS, it is inadequate to consider QoS merely at the network level without considering the underlying media access control layer. Again, given the problems associated with the dynamics of nodes, hidden terminals, and fluctuating link characteristics, supporting end-to-end QoS is a nontrivial issue that requires in-depth investigation. Currently, there is a trend toward an adaptive QoS approach instead of the "plain" resource reservation method with hard QoS guarantees. Another important factor is the limited power supply in handheld devices, which can seriously prohibit packet forwarding in an ad hoc mobile environment. Hence, routing traffic based on nodes' power metrics is one way to distinguish routes that are more long-lived than others. Finally, instead of using beaconing or broadcast search, location-aided routing uses positioning information to define associated regions so that the routing is spatially oriented and limited. This is analogous to associativity-oriented and restricted broadcast in ABR.

Current ad hoc routing approaches have introduced several new paradigms, such as exploiting user demand, and the use of location, power, and association parameters. Adaptivity and self-configuration are key features of these approaches. However, flexibility is also important. A flexible ad hoc routing protocol could responsively invoke table-driven and/or on-demand approaches based on situations and communication requirements.
The "toggle" between these two approaches may not be trivial, since the nodes concerned must be "in sync" with the toggling. The two approaches may also coexist in spatially clustered ad hoc groups, with intra-cluster routing employing the table-driven approach and inter-cluster routing employing the demand-driven approach, or vice versa. Further work is necessary to investigate the feasibility and performance of hybrid ad hoc routing approaches. Lastly, in addition to the above, further research in the areas of media access control, security, service discovery, and Internet protocol operability is required before the potential of ad hoc mobile networking can be realized.
Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Importance Sampling <s> From the Publisher: ::: Provides the first simultaneous coverage of the statistical aspects of simulation and Monte Carlo methods, their commonalities and their differences for the solution of a wide spectrum of engineering and scientific problems. Contains standard material usually considered in Monte Carlo simulation as well as new material such as variance reduction techniques, regenerative simulation, and Monte Carlo optimization. <s> BIB001 </s> Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Importance Sampling <s> This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the alter chapters can be potential thesis topics for masters or Ph.D. students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for statisticians and given annually by five leading statistical associations to one individual under age 40. He was selected as a Terman Fellow by Stanford University in 1995, as a Medallion Lecturer by the Institute of Mathematical Statistics (IMS) in 2002, and as a Bernoulli Lecturer by the International Bernoulli Society in 2004. He was elected to the IMS Fellow in 2004 and Fellow of the American Statistical Association in 2005. He and co-workers have published more than 130 research articles and book chapters on Bayesian modeling and computation, bioinformatics, genetics, signal processing, stochastic dynamic systems, Monte Carlo methods, and theoretical statistics. "An excellent survey of current Monte Carlo methods. The applications amply demonstrate the relevance of this approach to modern computing. The book is highly recommended." (Mathematical Reviews) "This book provides comprehensive coverage of Monte Carlo methods, and in the process uncovers and discusses commonalities among seemingly disparate techniques that arose in various areas of application. The book is well organized; the flow of topics follows a logical development. The coverage is up-to-date and comprehensive, and so the book is a good resource for people conducting research on Monte Carlo methods. The book would be an excellent supplementary text for a course in scientific computing ." (SIAM Review) "The strength of this book is in bringing together advanced Monte Carlo (MC) methods developed in many disciplines. Throughout the book are examples of techniques invented, or reinvented, in different fields that may be applied elsewhere. Those interested in using MC to solve difficult problems will find many ideas, collected from a variety of disciplines, and references for further study." 
(Technometrics) <s> BIB002 </s> Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Importance Sampling <s> We provide a short overview of importance sampling—a popular sampling tool used for Monte Carlo computing. We discuss its mathematical foundation and properties that determine its accuracy in Monte Carlo approximations. We review the fundamental developments in designing efficient importance sampling (IS) for practical use. This includes parametric approximation with optimization-based adaptation, sequential sampling with dynamic adaptation through resampling and population-based approaches that make use of Markov chain sampling. Copyright © 2009 John Wiley & Sons, Inc. ::: ::: For further resources related to this article, please visit the WIREs website. <s> BIB003
The basic idea of importance sampling (IS) is to focus on the region(s) of "importance" so as to save computational resources BIB002. The importance region(s) in a reliability problem can be seen as the failure region. The concept underlying the IS method is to replace the original PDF $f_X(x)$ with an IS distribution $h_V(x)$ such that a large number of samples lies in the "important region" of the sample space, i.e., the failure region ($g(x) \le 0$) in the reliability problem BIB003. Accordingly, the failure probability of Equation (2) can be rewritten as

$$ p_f = \int I_F(v)\,\frac{f_X(v)}{h_V(v)}\,h_V(v)\,dv = E_{h_V}\!\left[ I_F(V)\,\frac{f_X(V)}{h_V(V)} \right], $$

where $h_V(v)$ is called the importance sampling probability density function (PDF) or instrumental probability density function. This transformation implies that the estimator of the failure probability in Equation (5) becomes

$$ \hat{p}_f = \frac{1}{N} \sum_{i=1}^{N} I_F(v_i)\,\frac{f_X(v_i)}{h_V(v_i)}, $$

where the samples $v_i$ are drawn from $h_V(v)$ and $I_F(\cdot)$ is the indicator function of the failure domain, as before. The variance of the importance sampling estimate $\hat{p}_f$ is given by:

$$ \operatorname{Var}(\hat{p}_f) = \frac{1}{N}\left( \int \frac{I_F(v)\,f_X^2(v)}{h_V(v)}\,dv - p_f^2 \right). $$

It can be seen that a good choice of $h_V(v)$ can produce a smaller variance of $\hat{p}_f$ than that of the crude Monte Carlo simulation method. Conversely, the variance can actually be increased when a very poor choice of $h_V(v)$ is used. By minimizing the above variance, the optimal importance sampling PDF can be given as BIB001:

$$ h_V^{\mathrm{opt}}(v) = \frac{I_F(v)\,f_X(v)}{p_f}. $$

However, this optimal importance sampling PDF is ineffective in practice, because it requires knowledge of $p_f$ a priori. Therefore, in practical applications of IS, the optimal importance sampling PDF is approximated. On this basis, the importance sampling methods that have been proposed and widely used in the reliability literature can be grouped into importance sampling at the design point and adaptive importance sampling.
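As a worked illustration of the estimator above, the sketch below computes a failure probability by importance sampling with a standard normal input vector and an IS density obtained by shifting the original PDF to an assumed design point; this is one common practical choice, not the only one. The function names and the example limit state are illustrative.

```python
# Minimal sketch of importance sampling for a failure probability estimate.
# Assumptions: standard normal inputs, IS density h_V = standard normal shifted
# to an assumed design point u*, and failure defined by g(u) <= 0.
import numpy as np
from scipy import stats

def importance_sampling_pf(g, u_star, n_dim, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.normal(loc=u_star, scale=1.0, size=(n_samples, n_dim))   # samples from h_V
    indicator = (np.apply_along_axis(g, 1, v) <= 0.0)

    # Likelihood ratio f_X(v) / h_V(v) for the two multivariate normal densities.
    log_f = stats.multivariate_normal(mean=np.zeros(n_dim)).logpdf(v)
    log_h = stats.multivariate_normal(mean=u_star).logpdf(v)
    weights = np.exp(log_f - log_h)

    samples = indicator * weights
    pf_hat = samples.mean()
    cov = samples.std(ddof=1) / (np.sqrt(n_samples) * pf_hat)  # coefficient of variation
    return pf_hat, cov

# Example: linear limit state g(u) = beta - u1, for which the exact pf is Phi(-beta).
beta = 3.0
print(importance_sampling_pf(lambda u: beta - u[0], u_star=np.array([beta, 0.0]), n_dim=2))
```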
Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Latin Hypercube Sampling (LHS) <s> From the Publisher: ::: Provides the first simultaneous coverage of the statistical aspects of simulation and Monte Carlo methods, their commonalities and their differences for the solution of a wide spectrum of engineering and scientific problems. Contains standard material usually considered in Monte Carlo simulation as well as new material such as variance reduction techniques, regenerative simulation, and Monte Carlo optimization. <s> BIB001 </s> Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Latin Hypercube Sampling (LHS) <s> Basic Concept of Reliability. Mathematics of Probability. Modeling of Uncertainty. Commonly Used Probability Distributions. Determination of Distributions and Parameters from Observed Data. Randomness in Response Variables. Fundamentals of Reliability Analysis. Advanced Topics on Reliability Analysis. Simulation Techniques. Appendices. Conversion Factors. References. Index. <s> BIB002
Generally, in simulation methods, the random sampling method is adopted to generate samples. In order to improve the efficiency of basic MCS, another sampling method called Latin Hypercube Sampling (LHS) was developed. Latin hypercube sampling is a widely-used method to generate controlled random samples, and its basic idea is to make the sampling point distribution close to the probability density function (PDF). LHS uses a stratified sampling scheme to improve the coverage of the input space: the range of each random variable $X_j$ is divided into $N$ non-overlapping, equally probable intervals, and one value is sampled from each interval. In this way, $N$ samples of the one-dimensional random variable $X_j$ are generated by LHS (seen in Figure 3). In order to generate $N$ samples of a $k$-dimensional random vector $X = [X_1, X_2, \ldots, X_k]^T$, $N$ samples of each component of $X$ are first generated by LHS as previously described; then the $N$ values of $X_1$ are paired in a random manner with the values of $X_2$, these pairs are then paired similarly with the values of $X_3$, and so on, until $N$ values of the $k$ components of $X$ are formed. The $N$ samples of the random vector $X$ form a $k \times N$ matrix whose $j$-th row contains the LHS for the component $X_j$. A random process is used to ensure the random ordering (pairing) of the values within each row of this matrix. This mixing process serves to emulate the pairing of observations in a simple Monte Carlo process. LHS ensures that the entire range of each input variable is completely covered. It has been shown that LHS is more efficient than simple random sampling over a large range of conditions. LHS has been widely used in design of experiments and in sampling methods for reliability problems owing to its efficiency. Latin Hypercube Sampling (LHS) is also performed to solve this example. As with MCS, LHS is performed with different sample sizes, up to $1 \times 10^7$, and 50 independent simulations are carried out for each sample size. The average probability from the 50 simulations at each sample size is reported in Table 3.
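The following sketch illustrates the LHS construction described above (stratification of each marginal into N equally probable intervals, followed by random pairing across components) using the inverse-CDF transform; the function name and the choice of marginals in the example are assumptions made for illustration.

```python
# Minimal sketch of Latin Hypercube Sampling for k independent variables,
# using the inverse-CDF transform.  The marginals passed in are assumed to be
# scipy.stats frozen distributions.
import numpy as np

def latin_hypercube(marginals, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    k = len(marginals)
    samples = np.empty((k, n_samples))            # k x N matrix; row j holds X_j
    for j, dist in enumerate(marginals):
        # One uniform value per equally probable interval [i/N, (i+1)/N) ...
        u = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        # ... randomly permuted so the pairing across components is random,
        # then mapped through the inverse CDF of X_j.
        samples[j] = dist.ppf(rng.permutation(u))
    return samples

# Example: a two-variable sample with standard normal and lognormal marginals.
from scipy import stats
x = latin_hypercube([stats.norm(), stats.lognorm(s=0.5)], n_samples=100)
```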
Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Directional Simulation <s> Probability computation methods for structural and mechanical reliability analysis are presented. Methods for random variable, random process and random field reliability models are included, with special emphasis on random variable models. Recent developments are described, assuming the reader to be familiar with earlier methods. The presently available state-of-the-art computation methods are evaluated, and their merits are discussed and compared. <s> BIB001 </s> Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Directional Simulation <s> Abstract The theory for the estimation of the reliability of a structural system which is subject to one or more load processes requires knowledge of the surface defining the mechanical response of the system. For many systems this surface may be known only implicitly and defined through a method of structural analysis. In the formulation presented herein, the surface is obtained using an established probabilistic structural analysis technique. The reliability estimation is formulated in the load process space. In this space, directional simulation is employed to estimate the outcrossing rate and the initial (zero time) probability of failure and hence to estimate the probability of failure at any subsequent time. Two examples using discrete rectangular pulse (Poisson) processes are described. <s> BIB002 </s> Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Directional Simulation <s> Directional importance sampling is considered in the original space of loads when the loads are highly correlated and/or highly out of proportion. Under these conditions, the joint probability density function contours are elongated and the application of unbiased directional sampling may result in poor and unstable results. It is shown that a major improvement can be achieved by applying importance sampling to the directional sampling technique. In this way better and quite stable results can be obtained. Also, it is shown that uncertainties in structural strength may affect directional sampling in load space. Some illustrative examples are given. <s> BIB003 </s> Overview of Structural Reliability Analysis Methods — Part II: Sampling Methods <s> Directional Simulation <s> Abstract Directional simulation reduces the dimension of the limit state probability integral by identifying a set of directions for integration, integrating either in closed-form or by approximation in those directions, and estimating the probability as a weighted average of the directional integrals. Most existing methods identify these directions by a set of points distributed on the unit hypersphere. The accuracy of the directional simulation depends on how the points are identified. When the limit state is highly nonlinear, or the inherent failure probability is small, a very large number of points may be required, and the method can become inefficient. This paper introduces several new approaches for identifying directions for evaluating the probability integral — Spherical t-design, Spiral Points, and Fekete Points — and compares the failure probabilities with those determined in a number of examples in previously published work. 
Once these points have been identified for a probability integral of given dimension, they can be used repeatedly for other probability integrals of the same dimension in a fashion analogous to Gauss Quadrature. <s> BIB004
Directional Simulation (DS) reduces the dimension of the probability integral by identifying a set of directions for integration and estimating the probability as a weighted average of the directional integrals BIB004. It is based on the concept of conditional probability and it also exploits the rotational symmetry of the standard normal space U BIB001. The key idea of directional simulation is first to seek a set of directions in U-space and then to perform the reliability analysis as a sequence of one-dimensional integrations along each direction. The n-dimensional standard normal vector U can be expressed as

$$ U = R\,A, $$

where $R \ge 0$ is the radial distance from the origin and $A$ is a random unit vector uniformly distributed on the $n$-dimensional unit sphere $\Omega_n$. The failure probability $p_f$ can be expressed as BIB003

$$ p_f = \int_{\Omega_n} P\big[\, g(R\,a) \le 0 \mid A = a \,\big]\, f_A(a)\, da, $$

where $f_A(a)$ is the uniform probability density function of $A$ on the unit sphere $\Omega_n$. Practically, a sequence of $N$ random direction vectors $a_i$ ($i = 1, \ldots, N$) is sampled uniformly on the unit sphere, and the failure probability is estimated as

$$ \hat{p}_f = \frac{1}{N} \sum_{i=1}^{N} \Big( 1 - \chi_n^2\big( r_i^2 \big) \Big), $$

where $r_i$ is the distance from the origin to the limit state surface along direction $a_i$ and $\chi_n^2[\cdot]$ is the chi-square CDF with $n$ degrees of freedom. Directional simulation eliminates the limitations encountered in situations of nonlinearity of the limit state function or multiple design points. Furthermore, importance directional simulation BIB002 BIB003, which uses the importance sampling technique to concentrate the direction vectors in the regions of interest, has also been proposed for reliability analysis.
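A minimal sketch of the directional simulation estimator above is given below, assuming the origin lies in the safe domain and that each sampled direction crosses the limit state surface at most once within a finite search radius; the root search by bracketing and all names are illustrative choices.

```python
# Minimal sketch of directional simulation in standard normal space.
# For each random unit direction a_i, the root r_i of g(r * a_i) = 0 is found and
# the directional probability 1 - chi2_cdf(r_i^2, n) is averaged over directions.
# Assumes the origin is safe and at most one crossing per direction up to r_max.
import numpy as np
from scipy import stats, optimize

def directional_simulation_pf(g, n_dim, n_directions=1_000, r_max=10.0, seed=0):
    rng = np.random.default_rng(seed)
    probs = np.zeros(n_directions)
    for i in range(n_directions):
        a = rng.normal(size=n_dim)
        a /= np.linalg.norm(a)                 # uniform direction on the unit sphere
        if g(r_max * a) > 0.0:
            continue                           # no crossing found up to r_max: contribution ~ 0
        r_i = optimize.brentq(lambda r: g(r * a), 0.0, r_max)
        probs[i] = 1.0 - stats.chi2.cdf(r_i ** 2, df=n_dim)
    return probs.mean()

# Example: g(u) = beta - u1, for which the exact failure probability is Phi(-beta).
beta = 3.0
print(directional_simulation_pf(lambda u: beta - u[0], n_dim=2))
```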
Embodied Evolution in Collective Robotics: A Review <s> Introduction <s> The pitfalls of naive robot simulations have been recognised for areas such as evolutionary robotics. It has been suggested that carefully validated simulations with a proper treatment of noise may overcome these problems. This paper reports the results of experiments intended to test some of these claims. A simulation was constructed of a two-wheeled Khepera robot with IR and ambient light sensors. This included detailed mathematical models of the robot-environment interaction dynamics with empirically determined parameters. Artificial evolution was used to develop recurrent dynamical network controllers for the simulated robot, for obstacle-avoidance and light-seeking tasks, using different levels of noise in the simulation. The evolved controllers were down-loaded onto the real robot and the correspondence between behaviour in simulation and in reality was tested. The level of correspondence varied according to how much noise was used in the simulation, with very good results achieved when realistic quantities were applied. It has been demonstrated that it is possible to develop successful robot controllers in simulation that generate almost identical behaviours in reality, at least for a particular class of robot-environment interaction dynamics. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Introduction <s> In this paper we propose an agent-based model of evolutionary algorithms (EAs) which extends seamlessly from concurrent single-host to distributed multi-host installations. Since the model is based on locally executable selection, we focus on the comparison of two selection mechanisms which accomplish with such a restriction: the classical tournament method and a new one called autonomous selection. Using the latter method the population size changes during runtime, hence it is not only interesting as a new selection mechanism, but also from the perspective of scalable networks. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Introduction <s> In this paper we present an evolutionary method that can deal with the specific problem requirements of adaptivity, scalability and robustness. These requirements are increasingly observed in the areas of pervasive and autonomic computing, and the area of collective robotics. For the purpose of this paper, we concentrate on the problem domain of collective robotics, and more specifically on a surveillance task for such a collective. We present the Situated Evolution Method as a viable alternative for classical evolutionary methods specifically for problem domains with the aforementioned requirements. By means of simulation experiments for a surveillance task, we show that our new method does not lose performance in comparison with a classical evolutionary method, and it has the important design and deployment advantage of being adaptive, scalable and robust. <s> BIB003 </s> Embodied Evolution in Collective Robotics: A Review <s> Introduction <s> Several engineering optimization problems like routing, freight transportation, exploration, or layout design in their more complex and realistic versions present a series of characteristics that make them very difficult to solve. Among these we find the absence of centralized updated information about all the variables, due to the spread out nature of the problems or lack of appropriate communications, or the dynamism of real-time operation. 
In fact, most optimization approaches assume that the problems they address are static, meaning that there is an optimal solution that does not change in time, but this is not always the case and there are problems that require following an optimum that changes in time. Distributed population-based techniques, such as swarms, have provided promising results in this context. They obtain a solution through the concurrent behavior of several adequately constructed processing elements. However, constructing these swarms is not straightforward, and most approaches have just mimicked swarm behaviors found in nature, adapting them to particular problems. The objective of this work is to study the application of a novel evolutionary paradigm, distributed Embodied Evolution (dEE), to obtain heterogeneous swarms that solve a set of realistic problems. In particular, we address here non-separable dynamic fitness landscapes, where interdependences between individuals imply that the contribution provided by one of them to the whole depends on the behavior of the others. This study is carried out applying a canonical version of dEE, which has been developed to generalize the main features of this type of evolutionary paradigm. We analyze the canonical dEE response in a series of scenarios of increasing complexity related to two highly representative dynamic engineering problems: a Dynamic Fleet Size and Mix Vehicle Routing Problem with Time Windows (DFSMVRPTW) and a collective surveillance task with realistic location degradation. <s> BIB004
This paper provides an overview of evolutionary robotics research where evolution takes place in a population of robots in a continuous manner. The term embodied evolution was coined for evolutionary processes that are distributed over the robots in the population to allow them to adapt autonomously and continuously. Embodied evolution offers a unique opportunity for autonomous on-line adaptivity in robot collectives. The vision behind embodied evolution is one of collectives of truly autonomous robots that can adapt their behaviour to suit varying tasks and circumstances. Autonomy occurs at two levels: not only do the robots perform their tasks without external control, they also assess and adapt - through evolution - their behaviour without referral to external oversight, and so learn autonomously. This adaptive capability allows robots to be deployed in situations that cannot be accurately modelled a priori. This may be because the environment or user requirements are not fully known, or it may be due to the complexity of the interactions among the robots as well as with their environment, effectively rendering the scenario unpredictable. Also, on-board adaptivity intrinsically avoids the reality gap BIB001 that results from inaccurate modelling of robots or their environment when developing controllers before deployment, since controllers develop after deployment.

Embodied evolution affords continuous adaptation of controllers: evolution persistently adapts the controllers of the robots that make up the population. Embodied evolution's on-line nature contrasts with 'traditional' evolutionary robotics research, where evolution is employed in the classical sequential centralised optimisation paradigm: the 'robotics' part consists of a series of robotic trials (simulated or not) in an evolution-based search for optimal robot controllers (Bongard, 2013). In terms of task performance, embodied evolution has been shown to outperform alternative evolutionary robotic techniques in some setups such as surveillance and self-localisation with flying UAVs BIB003 BIB004, especially regarding convergence speed.

To provide a basis for a clear discussion, we define embodied evolution as a paradigm where evolution is implemented in multi-robotic systems that are:
• Decentralised: There is no central authority that selects parents to produce offspring or individuals to be replaced. Instead, robots assess their performance, exchange and select genetic material autonomously on the basis of locally available information.
• On-line: Robot controllers change on the fly, as the robots go about their proper actions: evolution occurs during the operational lifetime of the robots and in the robots' task environment. The process continues after the robots have been deployed.
• Parallel: Whether they collaborate in their tasks or not, the population consists of multiple robots that perform their actions and evolve in the same environment, during the same period, and that frequently interact with each other to exchange genetic material.

The decentralised nature of communicating genetic material implies that selection is executed locally, usually involving only a part of the whole population BIB002, and that it must be performed by the robots themselves. This adds a third opportunity for selection in addition to parent and survivor selection as defined for classical evolutionary computing.
Thus, embodied evolution extends the collection of operators that define an evolutionary algorithm (i.e., evaluation, selection, variation and replacement) with mating as a key evolutionary operator:
• Mating: An action where two (or more) robots decide to send and/or receive genetic material, whether this material will or will not be used for generating new offspring. When and how this happens depends on pre-defined heuristics, but also on evolved behaviour, the latter determining to a large extent whether robots ever meet and so have the opportunity to exchange genetic material.

In the last 15 years, on-line evolution in general, and embodied evolution in particular, have matured as research fields. This is evidenced by the growing number of relevant publications in respected evolutionary computing venues such as conferences (e.g. ACM GECCO, ALIFE, ECAL and EvoApplications), journals (e.g. Evolutionary Intelligence's special issue on Evolutionary Robotics), workshops (PPSN 2014 ER workshop, GECCO 2015 Evolving collective behaviours in robotics workshop) and tutorials (ALIFE 2014, GECCO 2015, ECAL 2015, PPSN 2016, ICLD-EPIROB 2016). To date, however, a clear definition of what embodied evolution is (and what it is not) and an overview of the state of the art in this area are not available. This paper provides a definition of the embodied evolution paradigm and relates it to other evolutionary and swarm robotics research (sections 2 and 3). We identify and review relevant research, highlighting many design choices and issues that are particular to the embodied evolution paradigm (sections 4 and 5). Together this provides a thorough overview of the relevant state of the art and a starting point for researchers interested in evolutionary methods for collective autonomous adaptation. Section 6 identifies some open issues as well as research that may provide solutions, suggests directions for future work and discusses potential applications.
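To make the above definition and the mating operator concrete, the sketch below outlines one possible per-robot loop in which evaluation, mating (local genome exchange on encounter) and replacement are all executed on board, with no central authority. It is a schematic illustration of the paradigm, not an algorithm taken from the literature; all names, parameters and the greedy local selection heuristic are assumptions.

```python
# Illustrative per-robot embodied evolution loop (decentralised, on-line, parallel).
# Every robot runs this concurrently; there is no central selection authority.
import random

class Robot:
    def __init__(self, genome_size=16):
        self.genome = [random.gauss(0.0, 1.0) for _ in range(genome_size)]
        self.fitness = 0.0
        self.inbox = []          # (genome, fitness) pairs received from nearby robots

    def act_and_evaluate(self):
        # Run the controller encoded by self.genome for one evaluation period and
        # measure task performance on board; a random placeholder is used here.
        self.fitness = random.random()

    def mate(self, neighbours):
        # Mating: broadcast own genetic material to robots currently in range
        # and let them collect it in their inboxes.
        for other in neighbours:
            other.inbox.append((list(self.genome), self.fitness))

    def maybe_replace(self, mutation_rate=0.05):
        # Local survivor selection: adopt the best received genome if it beats
        # this robot's own current fitness, applying mutation on adoption.
        if not self.inbox:
            return
        best_genome, best_fit = max(self.inbox, key=lambda gf: gf[1])
        if best_fit > self.fitness:
            self.genome = [g + random.gauss(0.0, mutation_rate) for g in best_genome]
        self.inbox.clear()
```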
Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> The term swarm has been applied to many systems (in biology, engineering, computation, etc.) as they have some of the qualities that the English-language term swarm denotes. With the growth of the various area of swarm research, the swarm terminology has become somewhat confusing. In this paper, we reflect on this terminology to help clarify its association with various robotic concepts. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> RoboCup simulated soccer presents many challenges to reinforcement learning methods, including a large state space, hidden and uncertain state, multiple independent agents learning simultaneously, and long and variable delays in the effects of actions. We describe our application of episodic SMDP Sarsa(λ) with linear tile-coding function approximation and variable λ to learning higher-level decisions in a keepaway subtask of RoboCup soccer. In keepaway, one team, “the keepers,” tries to keep control of the ball for as long as possible despite the efforts of “the takers.” The keepers learn individually when to hold the ball and when to pass to a teammate. Our agents learned policies that significantly outperform a range of benchmark policies. We demonstrate the generality of our approach by applying it to a number of task variations including different field sizes and different numbers of players on each team. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> Swarm robotics draws inspiration from decentralized self-organizing biological systems in general and from the collective behavior of social insects in particular. In social insect colonies, many tasks are performed by higher order group or team entities, whose task-solving capacities transcend those of the individual participants. In this paper, we investigate the emergence of such higher order entities. We report on an experimental study in which a team of physical robots performs a foraging task. The robots are "identical" in hardware and control. They make little use of memory and take actions purely on the basis of local information. Our study advances the current state of the art in swarm robotics with respect to the number of real-world robots engaging in teamwork (up to 12 robots in the most challenging experiment). To the best of our knowledge, in this paper we present the first self-organized system of robots that displays a dynamical hierarchy of teamwork (with cooperation also occurring among higher order entities). Our study shows that teamwork requires neither individual recognition nor differences between individuals. This result might also contribute to the ongoing debate on the role of these characteristics in the division of labor in social insects. <s> BIB003 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> In most swarm systems, agents are either aware of the position of their direct neighbors or they possess a substrate on which they can deposit information (stigmergy). However, such resources are not always obtainable in real-world applications because of hardware and environmental constraints. In this paper we study in 2D simulation the design of a swarm system which does not make use of positioning information or stigmergy. 
::: ::: This endeavor is motivated by an application whereby a large number of Swarming Micro Air Vehicles (SMAVs), of fixed-wing configuration, must organize autonomously to establish a wireless communication network (SMAVNET) between users located on ground. Rather than relative or absolute positioning, agents must rely only on their own heading measurements and local communication with neighbors. ::: ::: Designing local interactions responsible for the emergence of the SMAVNET deployment and maintenance is a challenging task. For this reason, artificial evolution is used to automatically develop neuronal controllers for the swarm of homogenous agents. This approach has the advantage of yielding original and efficient swarming strategies. A detailed behavioral analysis is then performed on the fittest swarm to gain insight as to the behavior of the individual agents. <s> BIB004 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions. <s> BIB005 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> This paper investigates a non-traditional sensing trade-off in swarm robotics: one in which each robot has a relatively long sensing range, but processes a minimal amount of information. Aggregation is used as a case study, where randomly-placed robots are required to meet at a common location without using environmental cues. The binary sensor used only lets a robot know whether or not there is another robot in its direct line of sight. Simulation results with both a memoryless controller (reactive) and a controller with memory (recurrent) prove that this sensor is enough to achieve error-free aggregation, as long as a sufficient sensing range is provided. The recurrent controller gave better results in simulation, and a post-evaluation with it shows that it is able to aggregate at least 1000 robots into a single cluster consistently. Simulation results also show that, with the recurrent controller, false negative noise on the sensor can speed up the aggregation process. The system has been implemented on 20 physical e-puck robots, and systematic experiments have been performed with both controllers: on average, 86-89% of the robots aggregated into a single cluster within 10 minutes. 
<s> BIB006 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in co-operative decentralized settings, but are difficult to solve optimally (NEXP-Complete). As a new way of solving these problems, we recently introduced a method for transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This new Dec-POMDP formulation, which we call an occupancy MDP, allows powerful POMDP and continuous-state MDP methods to be used for the first time. However, scalability remains limited when the number of agents or problem variables becomes large. In this paper, we show that, under certain separability conditions of the optimal value function, the scalability of this approach can increase considerably. This separability is present when there is locality of interaction between agents, which can be exploited to improve performance. Unlike most previous methods, the novel continuous-state MDP algorithm retains optimality and convergence guarantees. Results show that the extension using separability can scale to a large number of agents and domain variables while maintaining optimality. <s> BIB007 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> We describe a probabilistic framework for synthesizing control policies for general multi-robot systems, given environment and sensor models and a cost function. Decentralized, partially observable Markov decision processes (Dec-POMDPs) are a general model of decision processes where a team of agents must cooperate to optimize some objective (specified by a shared reward or cost function) in the presence of uncertainty, but where communication limitations mean that the agents cannot share their state, so execution must proceed in a decentralized fashion. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problem that can be practically solved as a Dec-POMDP. We describe this general model, and show how, in contrast to most existing methods that are specialized to a particular problem class, it can synthesize control policies that use whatever opportunities for coordination are present in the problem, while balancing off uncertainty in outcomes, sensor information, and information about other agents. We use three variations on a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate. <s> BIB008 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> Within the context of multiple mobile, and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. 
While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading. <s> BIB009 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> Swarm intelligence principles have been widely studied and applied to a number of different tasks where a group of autonomous robots is used to solve a problem with a distributed approach, i.e. without central coordination. A survey of such tasks is presented, illustrating various algorithms that have been used to tackle the challenges imposed by each task. Aggregation, flocking, foraging, object clustering and sorting, navigation, path formation, deployment, collaborative manipulation and task allocation problems are described in detail, and a high-level overview is provided for other swarm robotics tasks. For each of the main tasks, (1) swarm design methods are identified, (2) past works are divided in task-specific categories, and (3) mathematical models and performance metrics are described. Consistently with the swarm intelligence paradigm, the main focus is on studies characterized by distributed control, simplicity of individual robots and locality of sensing and communication. Distributed algorithms are shown to bring cooperation between agents, obtained in various forms and often without explicitly programming a cooperative behavior in the single robot controllers. Offline and online learning approaches are described, and some examples of past works utilizing these approaches are reviewed. <s> BIB010 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. 
Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots. <s> BIB011 </s> Embodied Evolution in Collective Robotics: A Review <s> Off-line Design of Behaviours in Collective Robotics <s> Robots are being deployed in an increasing variety of environments for longer periods of time. As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC-Model - which builds models of previous teammates' behaviors and plans behaviors online using these models and 2) PLASTIC-Policy - which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates' behaviors, no steps are taken in the experimental setup to make sure this assumption holds. The teammates were created by a variety of independent developers and were not designed to share any similarities. Nonetheless, the results show that PLASTIC was able to identify and exploit similarities between its current and past teammates' behaviors, allowing it to quickly adapt to new teammates. <s> BIB012
Decentralised decision-making is a central theme in collective robotics research: when the robot collective cannot be centrally controlled, the individual robots' behaviour must be carefully designed so that global coordination occurs through local interactions. Seminal works from the 1990s such as Mataric's Nerd Herd (1994) addressed this problem by hand-crafting behaviour-based control architectures. Manually designing robot behaviours has since been extended with elaborate methodologies and architectures for multi-robot control (see BIB009 for a review) and with a plethora of bio-inspired control rules for swarm-like collective robotics (see BIB003 for recent examples involving real robots, and BIB001 ; BIB005 ; BIB010 for discussions and recent reviews). Automated design methods have been explored with the hope of tackling problems of greater complexity. Early examples of this approach were applied to the RoboCup challenge for learning coordination strategies in a well-defined setting. See for an early review and BIB002 and BIB012 for more recent work in this vein. However, demonstrated that solving even the simplest multi-agent learning problem is NEXP-complete, so obtaining an optimal solution in reasonable time is infeasible. Recent work in reinforcement learning has developed theoretical tools to break down this complexity by moving from reasoning about many agents jointly to a collection of single agents, each of which is optimised separately BIB007 , leading to theoretically well-founded contributions, but with limited practical validation involving very few robots and simple tasks BIB008 . Lacking a theoretical foundation, but instead relying on experimental validation, swarm robotic controllers have been developed with black-box optimisation methods, ranging from brute-force optimisation over a simplified (hence tractable) representation of the problem to evolutionary robotics BIB004 BIB006 BIB011 . The methods vary, but all the approaches described here (including 'standard' evolutionary robotics) share a common goal: to design or optimise a set of control rules for autonomous robots that are part of a collective before the actual deployment of the robots. The particular challenge in this kind of work is to design individual behaviours that lead to some required global ('emergent') behaviour without the need for central oversight.
Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> This paper is concerned with adaptation capabilities of evolved neural controllers. We propose to evolve mechanisms for parameter self-organization instead of evolving the parameters themselves. The method consists of encoding a set of local adaptation rules that synapses follow while the robot freely moves in the environment. In the experiments presented here, the performance of the robot is measured in environments that are different in significant ways from those used during evolution. The results show that evolutionary adaptive controllers solve the task much faster and better than evolutionary standard fixed-weight controllers, that the method scales up well to large architectures, and that evolutionary adaptive controllers can adapt to environmental changes that involve new sensory characteristics (including transfer from simulation to reality and across different robotic platforms) and new spatial relationships. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet, if game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. This paper introduces the real-time Neuroevolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. To demonstrate this concept, the Neuroevolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. This paper describes results from this novel application of machine learning, and demonstrates that rtNEAT makes possible video games like NERO where agents evolve and adapt in real time. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> Abstract Evolutionary robotics (ER) is a field of research that applies artificial evolution toward the automatic design and synthesis of intelligent robot controllers. The preceding decade saw numerous advances in evolutionary robotics hardware and software systems. However, the sophistication of resulting robot controllers has remained nearly static over this period of time. Here, we make the case that current methods of controller fitness evaluation are primary factors limiting the further development of ER. To address this, we define a form of fitness evaluation that relies on intra-population competition. In this research, complex neural networks were trained to control robots playing a competitive team game. To limit the amount of human bias or know-how injected into the evolving controllers, selection was based on whether controllers won or lost games. The robots relied on video sensing of their environment, and the neural networks required on the order of 150 inputs. 
This represents an order of magnitude increase in sensor complexity compared to other research in this field. Evolved controllers were tested extensively in real fully-autonomous robots and in simulation. Results and experiments are presented to characterize the training process and the acquisition of controller competency under different evolutionary conditions. <s> BIB003 </s> Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> This paper reports on a feasibility study into the evolution of robot controllers during the actual operation of robots (on-line), using only the computational resources within the robots themselves (on-board). We identify the main challenges that these restrictions imply and propose mechanisms to handle them. The resulting algorithm is evaluated in a hybrid system, using the actual robots' processors interfaced with a simulator that represents the environment. The results show that the proposed algorithm is indeed feasible and the particular problems we encountered during this study give hints for further research. <s> BIB004 </s> Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> In this paper we study distributed online learning of locomotion gaits for modular robots. The learning is based on a stochastic approximation method, SPSA, which optimizes the parameters of coupled oscillators used to generate periodic actuation patterns. The strategy is implemented in a distributed fashion, based on a globally shared reward signal, but otherwise utilizing local communication only. In a physics-based simulation of modular Roombots robots we experiment with online learning of gaits and study the effects of: module failures, different robot morphologies, and rough terrains. The experiments demonstrate fast online learning, typically 5-30 min. for convergence to high performing gaits (≅ 30 cm/sec), despite high numbers of open parameters (45-54). We conclude that the proposed approach is efficient, effective and a promising candidate for online learning on many other robotic platforms. <s> BIB005 </s> Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> We propose and evaluate a novel approach called Online Distributed NeuroEvolution of Augmenting Topologies (odNEAT). odNEAT is a completely distributed evolutionary algorithm for online learning in groups of embodied agents such as robots. While previous approaches to online distributed evolution of neural controllers have been limited to the optimisation of weights, odNEAT evolves both weights and network topology. We demonstrate odNEAT through a series of simulation-based experiments in which a group of e-puck-like robots must perform an aggregation task. Our results show that robots are capable of evolving effective aggregation strategies and that sustainable behaviours evolve quickly. We show that odNEAT approximates the performance of rtNEAT, a similar but centralised method. We also analyse the contribution of each algorithmic component on the performance through a series of ablation studies. <s> BIB006 </s> Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. 
It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, networks that are the more regular are statistically those that have the best learning abilities. <s> BIB007 </s> Embodied Evolution in Collective Robotics: A Review <s> Lifelong Learning in Evolutionary Robotics <s> This chapter introduces a hierarchy of concepts to classify the goals and the methods used in articles that mix neuro-evolution and synaptic plasticity. We propose definitions of “behavioral robustness” and oppose it to “reward-based behavioral changes”; we then distinguish the switch between behaviors and the acquisition of new behaviors. Last, we formalize the concept of “synaptic General Learning Abilities” (sGLA) and that of “synaptic Transitive learning Abilities (sTLA)”. For each concept, we review the literature to identify the main experimental setups and the typical studies. <s> BIB008
It has long been argued that robots deployed in the real world may benefit from continuing to acquire new capabilities after initial deployment BIB003 , especially if the environment is not known beforehand. Therefore, the question we are concerned with in this paper is how to endow a collective robotics system with the capability to perform lifelong learning. Evolutionary robotics research into this question typically focuses on individual autonomous robots. Early works in evolutionary robotics that considered lifelong learning explored learning mechanisms to cope with minor environmental changes (see the classic book from , as well as BIB001 and BIB007 for examples, and BIB008 for a nomenclature). More recently, and addressed resilience by introducing fast on-line re-optimisation to recover from hardware damage. BIB004 , BIB005 and BIB006 are examples of on-line versions of evolutionary robotics algorithms that target the fully autonomous acquisition of behaviour to achieve some pre-defined task in individual robots. Targeting agents in a video game rather than robots, BIB002 tackled the on-line evolution of controllers in a multi-agent system. Because the agents were virtual, the researchers could control some aspects of the evaluation conditions (e.g., restarting the evaluation of agents from the same initial position); this kind of control is typically not feasible in autonomously deployed robotic systems. Embodied evolution builds on evolutionary robotics to implement lifelong learning in robot collectives. Its clear link with traditional evolutionary robotics is exemplified by work like that by , where a traditional evolutionary algorithm is encapsulated on each robot: individual controllers are evaluated sequentially in a standard time-sharing set-up, and the robots implement a communication scheme that resembles an island model to exchange genomes from one robot to another. It is this communication between robots that makes it an instance of embodied evolution.
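To illustrate this island-like, encapsulated-plus-migration scheme, a small Python sketch is given below. It is a toy reconstruction rather than the cited authors' implementation: the objective, the genome encoding and all parameter values (population size, migration rate, etc.) are hypothetical choices made for the example only.

    import random

    POP_SIZE = 6        # size of each robot's local (encapsulated) population
    N_GENES = 5
    MIGRATION_RATE = 0.2

    def evaluate(genome):
        # Stand-in for a time-shared on-board evaluation of one controller.
        return -sum(g * g for g in genome)   # toy objective: maximise -||g||^2

    def new_genome():
        return [random.uniform(-1, 1) for _ in range(N_GENES)]

    def mutate(genome, std=0.05):
        return [g + random.gauss(0, std) for g in genome]

    class IslandRobot:
        def __init__(self):
            self.population = [new_genome() for _ in range(POP_SIZE)]

        def one_generation(self):
            # Each controller is evaluated in turn (time sharing), then the local
            # EA replaces the worst half with mutated copies of the best half.
            scored = sorted(self.population, key=evaluate, reverse=True)
            survivors = scored[:POP_SIZE // 2]
            self.population = survivors + [mutate(random.choice(survivors))
                                           for _ in range(POP_SIZE - len(survivors))]

        def maybe_migrate(self, other):
            # Island-model-style exchange: send the current best genome to a peer,
            # which inserts it in place of its worst individual.
            if random.random() < MIGRATION_RATE:
                best = max(self.population, key=evaluate)
                worst = min(other.population, key=evaluate)
                other.population[other.population.index(worst)] = list(best)

    robots = [IslandRobot() for _ in range(4)]
    for generation in range(50):
        for r in robots:
            r.one_generation()
        for r in robots:
            r.maybe_migrate(random.choice([x for x in robots if x is not r]))

In the taxonomy discussed below, such a scheme would be classed as hybrid: evolution is encapsulated on each robot, but the migration of genomes couples it to the rest of the population.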
Embodied Evolution in Collective Robotics: A Review <s> Replacement <s> We introduce Embodied Evolution (EE) as a new methodology for evolutionary robotics (ER). EE uses a population of physical robots that autonomously reproduce with one another while situated in their task environment. This constitutes a fully distributed evolutionary algorithm embodied in physical robots. Several issues identified by researchers in the evolutionary robotics community as problematic for the development of ER are alleviated by the use of a large number of robots being evaluated in parallel. Particularly, EE avoids the pitfalls of the simulate-and-transfer method and allows the speed-up of evaluation time by utilizing parallelism. The more novel features of EE are that the evolutionary algorithm is entirely decentralized, which makes it inherently scalable to large numbers of robots, and that it uses many robots in a shared task environment, which makes it an interesting platform for future work in collective robotics and Artificial Life. We have built a population of eight robots and successfully implemented the first example of Embodied Evolution by designing a fully decentralized, asynchronous evolutionary algorithm. Controllers evolved by EE outperform a hand-designed controller in a simple application. We introduce our approach and its motivations, detail our implementation and initial results, and discuss the advantages and limitations of EE. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Replacement <s> Artificial evolution plays an important role in several robotics projects. Most commonly, an evolutionary algorithm (EA) is used as a heuristic optimiser to solve some engineering problem, for instance an EA is used to find good robot controller. In these applications the human designers/experimenters orchestrate and manage the whole evolutionary problem solving process and incorporate the end result –that is, the (near-)optimal solution evolved by the EA– into the system as part of the deployment. During the operational period of the system the EA does not play any further role. In other words, the use of evolution is restricted to the pre-deployment stage. Another, more challenging type of application of evolution is where it serves as the engine behind adaptation during (rather than before) the operational period, without human intervention. In this section we elaborate on possible evolutionary approaches to this kind of applications, position these on a general feature map and test some of these set-ups experimentally to assess their feasibility. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Replacement <s> Imagine autonomous, self-sufficient robot collectives that can adapt their controllers autonomously and self-sufficiently to learn to cope with situations unforeseen by their designers. As one step towards the realisation of this vision, we investigate on-board evolutionary algorithms that allow robot controllers to adapt without any outside supervision and while the robots perform their proper tasks. We propose an evag-based on-board evolutionary algorithm, where controllers are exchanged among robots that evolve simultaneously. We compare it with the (μ+1) on-line algorithm, which implements evolutionary adaptation inside a single robot. We perform simulation experiments to investigate algorithm performance and use parameter tuning to evaluate the algorithms at their best possible parameter settings. 
We find that distributed on-line on-board evolutionary algorithms that share genomes among robots such as our evag implementation effectively harness the pooled learning capabilities, with an increasing benefit over encapsulated approaches as the number of participating robots grows. <s> BIB003 </s> Embodied Evolution in Collective Robotics: A Review <s> Replacement <s> Embodied evolutionary robotics is a particular flavour of evolutionary robotics, where the evolutionary optimization of behaviours is achieved in an on-line and distributed fashion (Watson et al., 2002). The question asked in this paper is: does population size play a role in the evolution of particular behaviours? We experimentaly demonstrate that varying the number of robots and the size of the environment can lead to very different outcomes in terms of evolved behaviours. Figure 1: Experimental setup: a population of robots with 8 infra-red (IR) sensors (shown in blue) is deployed in an environment where 10 (yellow) landmarks are randomly placed. The robots are modelled after the famous e-puck robot, and communication between robots is achieved through the IR devices. The red tail is visible to the user only (used for identifying directions). <s> BIB004 </s> Embodied Evolution in Collective Robotics: A Review <s> Replacement <s> It is well known that in open-ended evolution, the nature of the environment plays in key role in directing evolution. However, in Evolutionary Robotics, it is often unclear exactly how parameterisation of a given environment might influence the emergence of particular behaviours. We consider environments in which the total amount of energy is parameterised by availability and value, and use surface plots to explore the relationship between those environment parameters and emergent behaviour using a variant of a well-known distributed evolutionary algorithm (mEDEA). Analysis of the resulting landscape show that it is crucial for a researcher to select appropriate parameterisations in order that the environment provides the right balance between facilitating survival and exerting sufficient pressure for new behaviours to emerge. To the best of our knowledge, this is the first time such an analysis has been undertaken. <s> BIB005
The currently active genome is replaced by a new individual (the offspring), implying the removal of the current genome. This event can be triggered by a robot's internal conditions (e.g. running out of time or virtual energy, reaching a given performance level) or through interactions with other robots (e.g., receiving promising genetic material BIB001 ). Parent selection This is the process that selects, from the genetic information received through mating events, which genomes will be used to create new offspring. When an objective is defined, the performance of the received genome is usually the basis for selection, just as in regular evolutionary computing. In other cases, the selection among received genomes can be random or depend on non-performance-related heuristics (e.g., genotypic proximity). Variation A new genome is created by applying the variation operators (mutation and crossover) to the selected parent genome(s). It is subsequently activated to replace the current controller. Figure 1: The overlapping robot-centric and genome-centric cycles in embodied evolution. The robot-centric cycle uses a single active genome that determines the current robot behaviour (sense-act loop); the genome-centric cycle manages an internal reservoir of genomes received from other robots or built locally (parent selection / variation), out of which the next active genome is eventually selected (replacement). From a conceptual perspective, embodied evolution can be analysed at two levels, represented by two intertwined cycles as depicted in Fig. 1: The robot-centric cycle (right-hand side of Fig. 1) represents the physical interactions between the robot and its environment, including interactions with other robots. It extends the sense-act loop commonly used to describe real-time control systems by accommodating the exchange and activation of genetic material; at these two points, the robot-centric and genome-centric cycles overlap. The cycle operates, grosso modo, as follows: each robot is associated with an active genome; the genome is interpreted into a set of features and a control architecture (the phenotype), which produces a behaviour that includes transmitting the genome to other robots. Each robot eventually switches to another active genome, depending on a specific event (e.g. a minimum energy threshold) or duration (e.g. a fixed lifetime), which in turn is likely to change its behaviour. The genome-centric cycle deals with the events that directly affect the genomes present in the robot population, and therefore with the evolution per se; again, mating and renewal are the events that overlap with the robot-centric cycle. From the genome-centric perspective, the operation is as follows: each robot starts with an initial genome, either initialised randomly or defined a priori. While this genome is active it determines the phenotype of the robot, hence its behaviour. When renewal is triggered, genomes are selected from the reservoir of previously received genomes according to the parent selection criteria and combined using the variation operators. The new genome then becomes part of the population. In fixed-size population algorithms, renewal automatically triggers the removal of the old genome, which is usually considered a replacement event (renewal + removal).
In other cases, however, a specific criterion triggers the removal event, producing populations whose size changes over the course of evolution. The two cycles connect at several points. The first is the 'exchange genomes' (or mating) process, which implies the transmission of genetic material, possibly together with additional information (fitness if available, general performance, genetic affinity, etc.) to modulate future selection. Generally, the received information is stored to be used (in full or in part) in the later parent selection process that replaces the active genome. The event is thus triggered and modulated by the robot-centric cycle, but it impacts the genome-centric cycle. Moreover, the decentralised nature of the paradigm requires that these transmissions occur locally, either one-to-one or to any robot within a limited range. Mate selection can be implemented in several ways: for instance, individuals may send and receive genomic information indiscriminately within a certain range, or the frequency of transmission may depend on task performance. The second overlap between the two cycles is the activation of new genomic information (renewal). Activating a genome in the genome-centric cycle means that the new genome takes control of the robot and therefore changes the robot's response in its environment (in evolutionary computation terms, this event marks the start of a new individual evaluation). This is what gives the algorithm its on-line character, which, together with the locality constraints, implies that the process is also asynchronous. BIB002 proposes a taxonomy for on-line evolution that differentiates between encapsulated, distributed and hybrid schemes. In these terms, most embodied evolution implementations are distributed, but hybrid implementations also fit within this category; in those cases, the robot locally maintains a population that is augmented through mating (rather like an island model in parallel evolutionary algorithms). Encapsulated implementations are not considered in this overview because, there, evolution is isolated within individual robots and does not rely on mating interaction between multiple robots that together form a population. Table 1 provides an overview of published research on embodied evolution with robot collectives. Each entry describes a contribution, which may cover several papers. The entries are described in terms of their implementation details, robot behaviour, experimental settings, mating conditions, and selection and replacement schemes. The glossary (Table 2) explains these features in more detail. Genomes can have a fixed lifetime, a variable lifetime or a limited lifetime (similar to a variable lifetime, but with an upper bound). Event-based replacement schemes do not depend on time but on events such as the reception of genetic material (e.g. in the microbial GA used by BIB001 ). Several clusters and trends can be distinguished on the basis of Table 1. The first distinction we identify is between research that considers embodied evolution as a parallel search method for optimising individual behaviours and research where embodied evolution is employed to craft collective behaviour in robot populations. Research into embodied evolution of collective behaviour has emerged relatively recently and has since gained importance (17 papers since 2009), which seems to indicate a growing trend.
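To make the two intertwined cycles described above concrete, the following is a minimal, self-contained Python sketch of the per-robot loop of a distributed embodied evolution algorithm. It is an illustrative abstraction, not a reference implementation of any of the systems listed in Table 1: sensing, actuation and radio communication are stubbed out, and the genome encoding, lifetime, mutation strength and neighbourhood model are all hypothetical choices.

    import random

    GENOME_LENGTH = 8          # number of controller parameters (hypothetical)
    LIFETIME = 100             # control steps before renewal is triggered
    MUTATION_STD = 0.1         # std. dev. of Gaussian mutation

    def random_genome():
        return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

    def mutate(genome):
        # Gaussian mutation applied to every gene (variation operator).
        return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

    class Robot:
        def __init__(self):
            self.active_genome = random_genome()   # determines current behaviour
            self.fitness = 0.0                     # locally estimated performance
            self.reservoir = []                    # (genome, fitness) pairs received via mating
            self.age = 0

        def act(self):
            # Placeholder for the sense-act loop; fitness is just accumulated noise here.
            self.fitness += random.random()
            self.age += 1

        def broadcast(self, neighbours):
            # Mating event: transmit the active genome (and its fitness estimate)
            # to all robots currently within communication range.
            for other in neighbours:
                other.reservoir.append((list(self.active_genome), self.fitness))

        def maybe_renew(self):
            # Renewal / replacement: triggered here by a fixed lifetime.
            if self.age < LIFETIME:
                return
            if self.reservoir:
                # Parent selection: fitness-proportionate choice over received genomes.
                total = sum(max(f, 1e-9) for _, f in self.reservoir)
                weights = [max(f, 1e-9) / total for _, f in self.reservoir]
                parent, _ = random.choices(self.reservoir, weights=weights)[0]
            else:
                parent = self.active_genome       # no mating occurred: keep own genome
            self.active_genome = mutate(parent)   # variation, then activation
            self.reservoir.clear()
            self.fitness, self.age = 0.0, 0

    if __name__ == "__main__":
        robots = [Robot() for _ in range(10)]
        for step in range(1000):
            for r in robots:
                r.act()
            for r in robots:
                # Hypothetical neighbourhood: a random subset stands in for radio range.
                r.broadcast(random.sample([x for x in robots if x is not r], k=2))
            for r in robots:
                r.maybe_renew()

The mating event (broadcast) and the renewal event (maybe_renew) are exactly the two points where the robot-centric and genome-centric cycles overlap; everything else belongs to the ordinary sense-act loop.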
We also review the homogeneity of the evolving population; borrowing definitions from biology, we use the term monomorphic (resp. polymorphic) for a population containing one (resp. more than one) class of genotype, the latter for instance to achieve specialisation. A monomorphic population implies that individuals behave in a similar manner (apart from small variations due to minor genetic differences). In contrast, polymorphic populations host multiple groups of individuals, each group with its particular genotypic signature and possibly displaying a specific behaviour. Research to date shows that cooperation in monomorphic populations can be achieved relatively easily. A notable number of contributions employ real robots. Since the first experiments in this field, the intrinsically on-line nature of embodied evolution has made such validation comparatively straightforward BIB001 . 'Traditional' evolutionary robotics is more concerned with robustness at the level of the evolved behaviour (implied by the reality gap problem) than is embodied evolution, which emphasises the design of robust algorithms, so that transfer between simulation and the real world may be less problematic. In the contributions presented here, simulation is used for extensive analysis that could hardly take place with real robots due to time or economic constraints. Still, it is important to note that many researchers who use simulation have also published work with real robots, thus including real-world validation in their research methodology. Since 2010, there have been a number of experiments that employ large (>= 100) numbers of (simulated) robots, shifting towards more swarm-like robotics where the evolutionary dynamics can be quite different BIB003 BIB004 . Recent works in this vein focus on the nature of selection pressure, emphasising the unique aspect of embodied evolution that selection pressure results from both the environment (which impacts mating) and the task. It has been shown that environmental pressure alone can drive evolution towards self-sustaining behaviours BIB004 (Montanier, 2010, 2012), and that the trade-off between these aspects can, to some extent, be modulated BIB005 .
Embodied Evolution in Collective Robotics: A Review <s> Local Selection <s> In this paper we present an evolutionary method that can deal with the specific problem requirements of adaptivity, scalability and robustness. These requirements are increasingly observed in the areas of pervasive and autonomic computing, and the area of collective robotics. For the purpose of this paper, we concentrate on the problem domain of collective robotics, and more specifically on a surveillance task for such a collective. We present the Situated Evolution Method as a viable alternative for classical evolutionary methods specifically for problem domains with the aforementioned requirements. By means of simulation experiments for a surveillance task, we show that our new method does not lose performance in comparison with a classical evolutionary method, and it has the important design and deployment advantage of being adaptive, scalable and robust. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Local Selection <s> We investigate on-line on-board evolution of robot controllers based on the so-called hybrid approach (island-based). Inherently to this approach each robot hosts a population (island) of evolving controllers and exchanges controllers with other robots at certain times. We compare different exchange (migration) policies in order to optimize this evolutionary system and compare the best hybrid setup with the encapsulated and distributed alternatives. We conclude that adding a difference-based migrant selection scheme increases the performance. <s> BIB002
In embodied evolution, the evolutionary process is generally implemented through local interactions between the robots, i.e., the mating operation introduced above. This implies the concept of a neighbourhood from which mates are selected. One common way to define the neighbourhood is to consider robots within communication range, but it can also be defined in terms of other distance measures, such as genotypic or phenotypic distance. Mates are selected by sampling from this neighbourhood, and a new individual is created by applying variation operators to the sampled genome(s). This local interaction has its origin in constraints that derive from communication limitations in some distributed robotic scenarios; BIB001 also showed it to be beneficial in simulated set-ups as an exploration/exploitation balancing mechanism. Embodied evolution, with chance encounters providing the sampling mechanism, has some similarities with other flavours of evolutionary computation. Cellular evolutionary algorithms consider continuous random rewiring of a network topology (in a grid of CPUs or computers) where all elements are evaluated in parallel. In this context, locally selecting candidates for reproduction is a recurring theme that is shared with embodied evolution (e.g. BIB002 ).
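The sketch below illustrates one possible way to implement such local parent selection (Python; the swarm representation, communication radius and tournament size are hypothetical). The neighbourhood is defined here by physical communication range, but the same structure applies if the distance measure is genotypic or phenotypic.

    import math
    import random

    COMM_RANGE = 1.5     # hypothetical communication radius

    def neighbours(robot, swarm, radius=COMM_RANGE):
        # Neighbourhood defined by physical communication range.
        return [other for other in swarm
                if other is not robot
                and math.dist(robot["pos"], other["pos"]) <= radius]

    def local_parent(robot, swarm, k=2):
        # Tournament selection restricted to the local neighbourhood; if the robot
        # is isolated, it falls back on its own genome.
        pool = neighbours(robot, swarm)
        if not pool:
            return robot["genome"]
        contestants = random.sample(pool, k=min(k, len(pool)))
        return max(contestants, key=lambda r: r["fitness"])["genome"]

    # Minimal usage example with a toy swarm of dictionaries.
    swarm = [{"pos": (random.uniform(0, 5), random.uniform(0, 5)),
              "genome": [random.uniform(-1, 1) for _ in range(4)],
              "fitness": random.random()}
             for _ in range(10)]
    parent = local_parent(swarm[0], swarm)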
Embodied Evolution in Collective Robotics: A Review <s> Objective Functions vs Selection Pressure <s> Embodied evolutionary robotics is a sub-field of evolutionary robotics that employs evolutionary algorithms on the robotic hardware itself, during the operational period, i.e., in an on-line fashion. This enables robotic systems that continuously adapt, and are therefore capable of (re-)adjusting themselves to previously unknown or dynamically changing conditions autonomously, without human oversight. This paper addresses one of the major challenges that such systems face, viz. that the robots must satisfy two sets of requirements. Firstly, they must continue to operate reliably in their environment (viability), and secondly they must competently perform user-specified tasks (usefulness). The solution we propose exploits the fact that evolutionary methods have two basic selection mechanisms–survivor selection and parent selection. This allows evolution to tackle the two sets of requirements separately: survivor selection is driven by the environment and parent selection is based on task-performance. This idea is elaborated in the Multi-Objective aNd open-Ended Evolution (monee) framework, which we experimentally validate. Experiments with robotic swarms of 100 simulated e-pucks show that monee does indeed promote task-driven behaviour without compromising environmental adaptation. We also investigate an extension of the parent selection process with a ‘market mechanism’ that can ensure equitable distribution of effort over multiple tasks, a particularly pressing issue if the environment promotes specialisation in single tasks. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Objective Functions vs Selection Pressure <s> The MONEE framework endows collective adaptive robotic systems with the ability to combine environment- and task-driven selection pressures: it enables distributed online algorithms for learning behaviours that ensure both survival and accomplishment of user-defined tasks. This paper explores the trade-off between these two requirements that evolution must establish when the task is detrimental to survival. To this end, we investigate experiments with populations of 100 simulated robots in a foraging task scenario where successfully collecting resources negatively impacts an individual's remaining lifetime. We find that the population remains effective at the task of collecting pucks even when the negative impact of collecting a puck is as bad as halving the remaining lifetime. A quantitative analysis of the selection pressures reveals that the task-based selection exerts a higher pressure than the environment. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Objective Functions vs Selection Pressure <s> It is well known that in open-ended evolution, the nature of the environment plays in key role in directing evolution. However, in Evolutionary Robotics, it is often unclear exactly how parameterisation of a given environment might influence the emergence of particular behaviours. We consider environments in which the total amount of energy is parameterised by availability and value, and use surface plots to explore the relationship between those environment parameters and emergent behaviour using a variant of a well-known distributed evolutionary algorithm (mEDEA). 
Analysis of the resulting landscape show that it is crucial for a researcher to select appropriate parameterisations in order that the environment provides the right balance between facilitating survival and exerting sufficient pressure for new behaviours to emerge. To the best of our knowledge, this is the first time such an analysis has been undertaken. <s> BIB003
In traditional evolutionary algorithms, the optimisation process is guided by a (set of) objective function(s). Evaluation of the candidate solutions, i.e., of the genomes in the population, allows for (typically numerical) comparison of their performance. Beyond its relevance for performance assessment, the evaluation process per se generally has no influence on the manner in which the selection, variation and replacement operators are applied. This is different in embodied evolution, where the behaviour of an individual can directly impact the likelihood of encounters with others and so influence selection and reproductive success. Evolution can improve task performance, but it can also develop mating strategies, for example by maximising the number of encounters between robots if that improves the likelihood of transmitting genetic material. It is therefore important to realise that the selection pressure on the robot population does not derive only from the specified objective function(s), as it traditionally does in evolutionary computation. In embodied evolution, the environment, including the mechanisms that allow mating, also exerts selection pressure. Consequently, evolution experiences selection pressure from the aggregate of the objective function(s) and environmental particularities. BIB003 researched how aspects of the robots' environment influence the emergence of particular behaviours and the balance between pressure towards survival and pressure towards the task. The objective may even pose requirements that are opposed to those imposed by the environment BIB001 . This can be the case when a task implies risky behaviours, or when a task requires resources that are also needed for survival and mating. In such situations, the evolutionary process must establish a trade-off between objective-driven optimisation and the maintenance of a viable environment in which evolution occurs, which is a challenge in itself BIB002 .
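The following toy model (Python; purely illustrative, with made-up numbers) shows how the two sources of selection pressure combine: the environment determines how many mating encounters a robot obtains, while the objective function determines the probability of winning parent selection at each encounter. A mediocre performer that meets many peers can out-reproduce a strong performer that rarely mates.

    import random

    def reproductive_success(encounters, task_fitness, rival_fitness_samples):
        # Expected number of times this genome wins parent selection:
        # the environment sets how many mating opportunities occur (encounters),
        # the task objective sets the probability of winning each one.
        wins = 0
        for _ in range(encounters):
            rival = random.choice(rival_fitness_samples)
            total = task_fitness + rival
            p_win = task_fitness / total if total > 0 else 0.5
            wins += random.random() < p_win
        return wins

    # A cautious, high-performing robot vs. a robot that roams and meets many peers.
    rivals = [random.uniform(0.0, 1.0) for _ in range(100)]
    careful = reproductive_success(encounters=3, task_fitness=0.9, rival_fitness_samples=rivals)
    roamer = reproductive_success(encounters=12, task_fitness=0.5, rival_fitness_samples=rivals)
    print("careful:", careful, "roamer:", roamer)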
Embodied Evolution in Collective Robotics: A Review <s> Autonomous Performance Evaluation <s> We present a novel evolutionary approach to robotic control of a real robot based on genetic programming (GP). Our approach uses GP techniques that manipulate machine code to evolve control programs for robots. This variant of GP has several advantages over a conventional GP system, such as higher speed, lower memory requirements, and better real-time properties. Previous attempts to apply GP in robotics use simulations to evaluate control programs and have difficulties with learning tasks involving a real robot. We present an on-line control method that is evaluated in two different physical environments and applied to two tasks—obstacle avoidance and object following—using the Khepera robot platform. The results show fast learning and good generalization. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Autonomous Performance Evaluation <s> This paper surveys the emerging science of how to design a “COllective INtelligence” (COIN). A COIN is a large multi-agent system where: i) There is little to no centralized communication or control. ii) There is a provided world utility function that rates the possible histories of the full system. In particular, we are interested in COINs in which each agent runs a reinforcement learning (RL) algorithm. The conventional approach to designing large distributed systems to optimize a world utility does not use agents running RL algorithms. Rather, that approach begins with explicit modeling of the dynamics of the overall system, followed by detailed hand-tuning of the interactions between the components to ensure that they “cooperate” as far as the world utility is concerned. This approach is labor-intensive, often results in highly nonrobust systems, and usually results in design techniques that have limited applicability. In contrast, we wish to solve the COIN design problem implicitly, via the “adaptive” character of the RL algorithms of each of the agents. This approach introduces an entirely new, profound design problem: Assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, Braess’s paradox, or the liquidity trap? Although still very young, research specifically concentrating on the COIN design problem has already resulted in successes in artificial domains, in particular in packet-routing, the leader-follower problem, and in variants of Arthur’s El Farol bar problem. It is expected that as it matures and draws upon other disciplines related to COINs, this research will greatly expand the range of tasks addressable by human engineers. Moreover, in addition to drawing on them, such a fully developed science of COIN design may provide much insight into other already established scientific fields, such as economics, game theory, and population biology. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Autonomous Performance Evaluation <s> A central aim of robotics research is to design robots that can perform in the real world; a real world that is often highly changeable in nature. An important challenge for researchers is therefore to produce robots that can improve their performance when the environment is stable, and adapt when the environment changes. 
This paper reports on experiments which show how evolutionary methods can provide lifelong adaptation for robots, and how this evolutionary process was embodied on the robot itself. A unique combination of training and lifelong adaptation are used, and this paper highlights the importance of training to this approach. <s> BIB003 </s> Embodied Evolution in Collective Robotics: A Review <s> Autonomous Performance Evaluation <s> This paper reports on a feasibility study into the evolution of robot controllers during the actual operation of robots (on-line), using only the computational resources within the robots themselves (on-board). We identify the main challenges that these restrictions imply and propose mechanisms to handle them. The resulting algorithm is evaluated in a hybrid system, using the actual robots' processors interfaced with a simulator that represents the environment. The results show that the proposed algorithm is indeed feasible and the particular problems we encountered during this study give hints for further research. <s> BIB004
The decentralised nature of the evolutionary process implies that there is no omniscient presence that knows (let alone determines) the fitness values of all individuals. Consequently, when an objective function is defined, it is the robots themselves that must gauge their own performance and share it with other robots when mating: each robot must have an evaluation function that can be computed on-board and autonomously. The requirement of autonomous assessment does not fundamentally change the way one defines fitness functions, but it does impact their usage, as shown by BIB001 , BIB003 , BIB004 and BIB002 .
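As an illustration, a robot's on-board evaluation can be as simple as a running estimate computed from locally observable quantities and attached to the genome whenever it is transmitted. The Python sketch below assumes a hypothetical foraging task in which pucks collected and energy spent are measurable on board; the weighting is an arbitrary choice for the example.

    class OnboardFitness:
        # Running estimate of task performance computed from locally available
        # information only (pucks collected and energy spent are hypothetical
        # quantities a foraging robot could measure itself).
        def __init__(self, energy_cost_weight=0.1):
            self.pucks = 0
            self.energy_spent = 0.0
            self.energy_cost_weight = energy_cost_weight

        def update(self, pucks_collected_this_step, energy_used_this_step):
            self.pucks += pucks_collected_this_step
            self.energy_spent += energy_used_this_step

        def value(self):
            # The estimate that is attached to the genome during mating events.
            return self.pucks - self.energy_cost_weight * self.energy_spent

    # Usage: updated every control step, read when a genome is broadcast.
    estimator = OnboardFitness()
    estimator.update(pucks_collected_this_step=1, energy_used_this_step=0.4)
    message = {"genome": [0.2, -0.5, 0.8], "fitness": estimator.value()}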
Embodied Evolution in Collective Robotics: A Review <s> Research Agenda <s> In the Hamadryas baboon, males are substantially larger than females. A troop of baboons is subdivided into a number of ‘one-male groups’, consisting of one adult male and one or more females with their young. The male prevents any of ‘his’ females from moving too far from him. Kummer (1971) performed the following experiment. Two males, A and B, previously unknown to each other, were placed in a large enclosure. Male A was free to move about the enclosure, but male B was shut in a small cage, from which he could observe A but not interfere. A female, unknown to both males, was then placed in the enclosure. Within 20 minutes male A had persuaded the female to accept his ownership. Male B was then released into the open enclosure. Instead of challenging male A , B avoided any contact, accepting A’s ownership. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Research Agenda <s> We present a general framework for modeling adaptive traitdynamics in which we integrate various concepts and techniques from modern ESS-theory. The concept of evolutionarily singular strategies is introduced as a generalization of the ESS-concept. We give a full classification of the singular strategies in terms of ESS-stability, convergence stability, the ability of the singular strategy to invade other populations if initially rare itself, and the possibility of protected dimorphisms occuring within the singular strategy's neighborhood. Of particular interest is a type of singular strategy that is an evolutionary attractor from a large distance, but once in its neighborhood a population becomes dimorphic and undergoes disruptive selection leading to evolutionary branching. Modelling the adaptive growth and branching of the evolutionary tree thus can be considered as a major application of the framework. A haploid version of Levene's 'soft selection model is developed as a specific example in order to demonstrate evolutionary dynamics and branching in monomorphic and polymorphic populations. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Research Agenda <s> We provide a first step towards the standardization of Embodied Evolution.We analyze the most representative implementations of the field.From this analysis we develop a canonical Embodied Evolution algorithm.We define a set of theoretical representative distributed optimization problems.A sensitivity analysis of the algorithm is performed. The Embodied Evolution (EE) paradigm arose in the early 2000s as a response to the automatic design of distributed control systems in real time for teams of autonomous robots. The interest for this type of evolutionary approach has been increasing steadily, not only in its native field of robotics, but also in other fields related to distributed optimization problems since previous works have shown its capability to outperform traditional evolutionary techniques when the scenario requires an on-line coordination of the team. Most of the activity in this research field has been eminently practical, meaning that authors have focused their efforts on developing EE algorithms and variations adapted to solve very specific practical cases. 
The problem that arises is that, on one hand, all these dissimilar variations of the basic EE structure produce an unclear state of the art and, on the other, that there is a high dependence between the performance obtained by the algorithms and the specific problems where they have been tested, which complicates extrapolating conclusions to different scenarios. As a consequence, this work has two main objectives, namely, designing and implementing a standard EE algorithm that captures the more general principles of this paradigm and that can be applied to any distributed optimization problem, and analyzing how its parameters influence the performance of the algorithms in a set of theoretical representative problems so that objective and reliable conclusions about the behavior of EE can be obtained. At the same time, this work presents an analysis of the evaluation criteria required for coordination tasks when using decentralized distributed approaches, which has influenced both, the definition of the algorithm and the selection of experimental set to test it. <s> BIB003
We identify a number of open issues that need to be addressed so that embodied evolution can develop into a relevant technique for enabling on-line adaptivity in robot collectives. Some of these issues have been researched in other fields (e.g., credit assignment is a well-known and frequently studied topic in reinforcement learning research). Lessons can and should be learned from there, inspiring embodied evolution research into the relevance and applicability of findings from those other fields. In particular, we identify the following challenges: Benchmarks The pseudo-code in Section 3 clarifies embodied evolution's concepts by describing the basic building blocks of the algorithm. This is only a first step towards a theoretical and practical framework for embodied evolution. Some authors have already taken steps in this direction. For instance, BIB003 propose an abstract algorithmic model in order to study both general and specific properties of embodied evolution implementations. described 'vanilla' versions of embodied evolution algorithms that can be used as practical benchmarks. Further exploration of abstract models for theoretical validation is needed. Also, standard benchmarks and test cases are required to provide a solid basis for empirical validation of individual contributions. Evolutionary Dynamics Embodied evolution requires new tools for analysing the evolutionary dynamics at work. Because the evolutionary operators apply in situ, the dynamics of the evolutionary process are not only important for understanding or improving an optimisation procedure, but also have a direct bearing on how the robots behave and change their behaviour when deployed. To some extent, this need for tools may be addressed by applying common analyses from population genetics, which provides techniques for estimating selection pressure relative to the genetic drift that can occur in finite-sized populations (see, for instance, and for a comprehensive introduction). Similarly, evolutionary game theory BIB001 and adaptive dynamics BIB002 can model frequency-dependent selection and may be used to investigate the dynamics of embodied evolution algorithms.
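As a modest example of the kind of analysis tools that could be borrowed from population genetics, the sketch below (Python; the statistics chosen are illustrative, not a prescribed methodology) computes two per-generation diagnostics: the mean pairwise genotypic distance, a simple proxy for the diversity that selection and drift leave in the population, and the frequency of a selectively neutral marker, whose fluctuations give a drift baseline against which task-driven selection pressure can be compared.

    import itertools
    import math

    def genotypic_diversity(genomes):
        # Mean pairwise Euclidean distance between genomes.
        pairs = list(itertools.combinations(genomes, 2))
        if not pairs:
            return 0.0
        return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

    def neutral_marker_frequency(markers):
        # Frequency of a selectively neutral binary marker carried alongside
        # each genome; tracking it over generations approximates genetic drift.
        return sum(markers) / len(markers)

    # Example: log both statistics once per generation.
    genomes = [[0.1, 0.2], [0.3, -0.1], [0.05, 0.25]]
    markers = [1, 0, 1]
    print(genotypic_diversity(genomes), neutral_marker_frequency(markers))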
Embodied Evolution in Collective Robotics: A Review <s> Evolution of Social Complexity <s> This article lists fourteen open problems in artificial life, each of which is a grand challenge requiring a major advance on a fundamental issue for its solution. Each problem is briefly explained, and, where deemed helpful, some promising paths to its solution are indicated. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolution of Social Complexity <s> From the Publisher: ::: "Broad in scope, thorough yet accessible, this book is a self-contained introduction to self-organization and complexity in biology - a field of study at the forefront of life sciences research."--BOOK JACKET. <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolution of Social Complexity <s> In recent years, the concept of self-organization has been used to understand collective behaviour of animals. The central tenet of self-organization is that simple repeated interactions between individuals can produce complex adaptive patterns at the level of the group. Inspiration comes from patterns seen in physical systems, such as spiralling chemical waves, which arise without complexity at the level of the individual units of which the system is composed. The suggestion is that biological structures such as termite mounds, ant trail networks and even human crowds can be explained in terms of repeated interactions between the animals and their environment, without invoking individual complexity. Here, I review cases in which the self-organization approach has been successful in explaining collective behaviour of animal groups and societies. Ant pheromone trail networks, aggregation of cockroaches, the applause of opera audiences and the migration of fish schools have all been accurately described in terms of individuals following simple sets of rules. Unlike the simple units composing physical systems, however, animals are themselves complex entities, and other examples of collective behaviour, such as honey bee foraging with its myriad of dance signals and behavioural cues, cannot be fully understood in terms of simple individuals alone. I argue that the key to understanding collective behaviour lies in identifying the principles of the behavioural algorithms followed by individual animals and of how information flows between the animals. These principles, such as positive feedback, response thresholds and individual integrity, are repeatedly observed in very different animal societies. The future of collective behaviour research lies in classifying these principles, establishing the properties they produce at a group level and asking why they have evolved in so many different and distinct natural systems. Ultimately, this research could inform not only our understanding of animal societies, but also the principles by which we organize our own society. <s> BIB003 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolution of Social Complexity <s> One of the conspicuous features of life is the persistent motion of creatures. Organisms move for many reasons; examples range from foraging through migration to escaping from a predator. Importantly in most of the cases, these organisms move together making use of the various advantages of staying <s> BIB004 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolution of Social Complexity <s> Evolution has provided a source of inspiration for algorithm designers since the birth of computers. 
The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems. <s> BIB005 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolution of Social Complexity <s> Artificial evolution of physical systems is a stochastic optimization method in which physical machines are iteratively adapted to a target function. The key for a meaningful design optimization is the capability to build variations of physical machines through the course of the evolutionary process. The optimization in turn no longer relies on complex physics models that are prone to the reality gap, a mismatch between simulated and real-world behavior. We report model-free development and evaluation of phenotypes in the artificial evolution of physical systems, in which a mother robot autonomously designs and assembles locomotion agents. The locomotion agents are automatically placed in the testing environment and their locomotion behavior is analyzed in the real world. This feedback is used for the design of the next iteration. Through experiments with a total of 500 autonomously built locomotion agents, this article shows diversification of morphology and behavior of physical robots for the improvement of functionality with limited resources. <s> BIB006
Nature abounds with examples of social complexity: from cooperation to division of labour, from signalling to social organisation BIB003 . As shown in Section 4, embodied evolution has so far demonstrated only a limited range of social organisation: simple cooperative and division-of-labour behaviours. In order to address more complex tasks, we must first gain a better understanding of the mechanisms required to achieve complex collective behaviours. This raises two questions. First, there is an ethological question: what are the behavioural mechanisms at work in complex collective behaviours? Some of them, such as the importance of positive and negative feedback between individuals, or of indirect communication through the environment (i.e., stigmergy), are well known from examples in both biology BIB002 and theoretical physics BIB004 . Second, there is an evolutionary dynamics question: what are the key elements that make it possible to evolve collective behaviours, and what are their limits? Again, evolutionary ecology provides relevant insights, such as the interplay between the level of cooperation and the relatedness between individuals . The literature on such phenomena in biological systems may provide a good basis for research into the evolution of social complexity in embodied evolution. Open-ended adaptation As stated in Section 2, embodied evolution aims to provide continuous adaptation. As a distant milestone, we could reformulate this as providing open-ended adaptation, i.e., the ability to keep exploring new behavioural patterns indefinitely, possibly constructing more and more complex solutions. BIB001 , and others identified open-ended adaptation in artificial evolutionary systems as one of the big questions of artificial life, but a clear definition is lacking in embodied evolution, in particular because both task-driven and environment-driven selection pressures must be considered. It may be useful to distinguish between the exploitation/exploration trade-off as applied to improving existing behaviours, which relates to task-driven optimisation, and pure innovation, which relates to investigating unknown regions of the behaviour space, whether or not this directly benefits solving pre-defined tasks. As an example, a collective may benefit in the long term from complexifying its social organisation, even though its ability to address a task remains unchanged in the short term. Additional open issues and future directions will arise from advances in other fields. A relevant recent development is the possibility of evolvable morpho-functional machines that are able to change both their software and hardware features BIB005 and replicate through 3D printing BIB006 . This would allow embodied evolution to adapt the robots' morphologies as well as their controllers holistically. Such developments could have profound consequences for the nature of embodied evolution: they would, for instance, enable dynamic population sizes, allowing for more risky behaviour since broken robots could be replaced or recycled.
Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> Our concepts of biology, evolution, and complexity are constrained by having observed only a single instance of life, life on earth. A truly comparative biology is needed to extend these concepts. Because we cannot observe life on other planets, we are left with the alternative of creating Artificial Life forms on earth. I will discuss the approach of inoculating evolution by natural selection into the medium of the digital computer. This is not a physical/chemical medium; it is a logical/informational medium. Thus, these new instances of evolution are not subject to the same physical laws as organic evolution (e.g., the laws of thermodynamics) and exist in what amounts to another universe, governed by the physical laws of the logic of the computer. This exercise gives us a broader perspective on what evolution is and what it does. An evolutionary approach to synthetic biology consists of inoculating the process of evolution by natural selection into an artificial medium. Evolution is then allowed to find the natural forms of living organisms in the artificial medium. These are not models of life, but independent instances of life. This essay is intended to communicate a way of thinking about synthetic biology that leads to a particular approach: to understand and respect the natural form of the artificial medium, to facilitate the process of evolution in generating forms that are adapted to the medium, and to let evolution find forms and processes that naturally exploit the possibilities inherent in the medium. Examples are cited of synthetic biology embedded in the computational medium, where in addition to being an exercise in experimental comparative evolutionary biology, it is also a possible means of harnessing the evolutionary process for the production of complex computer software. <s> BIB001 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> Keywords: Digital Organisms ; Swarm-Bot ; Cooperation ; Controllers ; Environments ; Locomotion ; Navigation ; Avoidance ; Sensors ; Walking ; Evolutionary Robotics <s> BIB002 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> This paper is concerned with a fixed-size population of autonomous agents facing unknown, possibly changing, environments. The motivation is to design an embodied evolutionary algorithm that can cope with the implicit fitness function hidden in the environment so as to provide adaptation in the long run at the level of the population. The proposed algorithm, termed mEDEA, is shown to be both efficient in unknown environment and robust with regards to abrupt, unpredicted, and possibly lethal changes in the environment. <s> BIB003 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> The evolution of altruism is a fundamental and enduring puzzle in biology. In a seminal paper Hamilton showed that altruism can be selected for when rb - c > 0, where c is the fitness cost to the altruist, b is the fitness benefit to the beneficiary, and r is their genetic relatedness. While many studies have provided qualitative support for Hamilton's rule, quantitative tests have not yet been possible due to the difficulty of quantifying the costs and benefits of helping acts.
Here we use a simulated system of foraging robots to experimentally manipulate the costs and benefits of helping and determine the conditions under which altruism evolves. By conducting experimental evolution over hundreds of generations of selection in populations with different c/b ratios, we show that Hamilton's rule always accurately predicts the minimum relatedness necessary for altruism to evolve. This high accuracy is remarkable given the presence of pleiotropic and epistatic effects as well as mutations with strong effects on behavior and fitness (effects not directly taken into account in Hamilton's original 1964 rule). In addition to providing the first quantitative test of Hamilton's rule in a system with a complex mapping between genotype and phenotype, these experiments demonstrate the wide applicability of kin selection theory. <s> BIB004 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> Evolutionary novelties have been important in the history of life, but their origins are usually difficult to examine in detail. We previously described the evolution of a novel trait, aerobic citrate utilization (Cit+), in an experimental population of Escherichia coli. Here we analyse genome sequences to investigate the history and genetic basis of this trait. At least three distinct clades coexisted for more than 10,000 generations before its emergence. The Cit+ trait originated in one clade by a tandem duplication that captured an aerobically expressed promoter for the expression of a previously silent citrate transporter. The clades varied in their propensity to evolve this novel trait, although genotypes able to do so existed in all three clades, implying that multiple potentiating mutations arose during the population’s history. Our findings illustrate the importance of promoter capture and altered gene regulation in mediating the exaptation events that often underlie evolutionary innovations. <s> BIB005 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> What happens when we let robots play the game of life? The challenge of studying evolution is that the history of life is buried in the past-we can't witness the dramatic events that shaped the adaptations we see today. But biorobotics expert John Long has found an ingenious way to overcome this problem: he creates robots that look and behave like extinct animals, subjects them to evolutionary pressures, lets them compete for mates and resources, and mutates their 'genes'. In short, he lets robots play the game of life. In Darwin's Devices, Long tells the story of these evolving biorobots-how they came to be, and what they can teach us about the biology of living and extinct species. Evolving biorobots can replicate creatures that disappeared from the earth long ago, showing us in real time what happens in the face of unexpected environmental challenges. Biomechanically correct models of backbones functioning as part of an autonomous robot, for example, can help us understand why the first vertebrates evolved them. But the most impressive feature of these robots, as Long shows, is their ability to illustrate the power of evolution to solve difficult technological challenges autonomously-without human input regarding what a workable solution might be. Even a simple robot can create complex behavior, often learning or evolving greater intelligence than humans could possibly program.
This remarkable idea could forever alter the face of engineering, design, and even warfare. An amazing tour through the workings of a fertile mind, Darwin's Devices will make you rethink everything you thought you knew about evolution, robot intelligence, and life itself. <s> BIB006 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> Experimental studies of evolution have increased greatly in number in recent years, stimulated by the growing power of genomic tools. However, organismal fitness remains the ultimate metric for interpreting these experiments, and the dynamics of fitness remain poorly understood over long time scales. Here, we examine fitness trajectories for 12 Escherichia coli populations during 50,000 generations. Mean fitness appears to increase without bound, consistent with a power law. We also derive this power-law relation theoretically by incorporating clonal interference and diminishing-returns epistasis into a dynamical model of changes in mean fitness over time. <s> BIB007 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> A major challenge in studying social behaviour stems from the need to disentangle the behaviour of each individual from the resulting collective. One way to overcome this problem is to construct a model of the behaviour of an individual, and observe whether combining many such individuals leads to the predicted outcome. This can be achieved by using robots. In this review we discuss the strengths and weaknesses of such an approach for studies of social behaviour. We find that robots—whether studied in groups of simulated or physical robots, or used to infiltrate and manipulate groups of living organisms—have important advantages over conventional individual-based models and have contributed greatly to the study of social behaviour. In particular, robots have increased our understanding of self-organization and the evolution of cooperative behaviour and communication. However, the resulting findings have not had the desired impact on the biological community. We suggest reasons for why this may be the case, and how the benefits of using robots can be maximized in future research on social behaviour. <s> BIB008 </s> Embodied Evolution in Collective Robotics: A Review <s> Evolutionary biology <s> Mutualistic cooperation often requires multiple individuals to behave in a coordinated fashion. Hence, while the evolutionary stability of mutualistic cooperation poses no particular theoretical difficulty, its evolutionary emergence faces a chicken and egg problem: an individual cannot benefit from cooperating unless other individuals already do so. Here, we use evolutionary robotic simulations to study the consequences of this problem for the evolution of cooperation. In contrast with standard game-theoretic results, we find that the transition from solitary to cooperative strategies is very unlikely, whether interacting individuals are genetically related (cooperation evolves in 20% of all simulations) or unrelated (only 3% of all simulations). We also observe that successful cooperation between individuals requires the evolution of a specific and rather complex behaviour. This behavioural complexity creates a large fitness valley between solitary and cooperative strategies, making the evolutionary transition difficult. These results reveal the need for research on biological mechanisms which may facilitate this transition. <s> BIB009
In the last 100 years, evolutionary biology has benefited from both experimental and theoretical advances. It is now possible, for instance, to study evolutionary mechanisms through methods such as gene sequencing BIB005 BIB007 . However, in vitro experimental evolution has its own limitations: with evolution in "real" substrates, the time-scales involved limit the applicability to relatively simple organisms such as E. coli bacteria. From a theoretical point of view, population genetics provides a set of mathematically grounded tools for understanding evolutionary dynamics, at the cost of many simplifying assumptions. Evolutionary robotics has recently gained relevance as an individual-based modelling and simulation method in evolutionary biology BIB002 BIB004 BIB006 BIB008 BIB009 , enabling the study of evolution in populations of robotic individuals in the physical world. Embodied evolution enables more accurate models of evolution because it is possible to embody not only the physical interactions, but also the evolutionary operators themselves BIB003 .

Synthetic approach

Embodied evolution can also be used to "understand by design" . As Maynard Smith nicely put it in 1992 (originally referring to Tierra BIB001 ): "so far, we have been able to study only one evolving system and we cannot wait for interstellar flight to provide us with a second. If we want to discover generalisations about evolving systems, we have to look at artificial ones." This synthetic approach stands somewhere between biology and engineering, using tools from the latter to understand mechanisms originally observed in nature, and aiming at identifying general principles not confined to any particular (biological) substrate. Beyond improving our understanding of adaptive mechanisms, these general principles can also be used to improve our ability to design complex systems.
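As an illustration of what it means to embody the evolutionary operators in the robots themselves, the following minimal sketch mimics a decentralised, environment-driven scheme in the spirit of mEDEA BIB003 : each robot locally broadcasts its genome during its lifetime and, at the end of its lifetime, replaces its own genome with a mutated copy of a randomly chosen received genome, becoming inactive if it received none. The Robot class, the one-dimensional world and all parameters are illustrative assumptions rather than the published algorithm.

```python
import random

class Robot:
    """Minimal agent for a decentralised, environment-driven evolution loop
    (in the spirit of mEDEA); all names and parameters are illustrative."""
    def __init__(self, genome_size=8):
        self.genome = [random.gauss(0.0, 1.0) for _ in range(genome_size)]
        self.inbox = []          # genomes received from nearby robots
        self.active = True       # an empty inbox at the end of life deactivates the robot
        self.position = random.uniform(0.0, 10.0)

    def act(self):
        # Placeholder controller: the genome would normally parameterise a
        # controller driving the motors; here we simply take a random walk.
        if self.active:
            self.position += random.uniform(-0.5, 0.5)

    def broadcast(self, others, radius=1.5):
        # Local genome transmission: only robots within `radius` receive a copy.
        if self.active:
            for other in others:
                if other is not self and abs(other.position - self.position) < radius:
                    other.inbox.append(list(self.genome))

    def end_of_lifetime(self, sigma=0.1):
        # Decentralised selection + variation: pick one received genome at
        # random and mutate it; with no genome received, the robot goes inactive.
        if self.inbox:
            parent = random.choice(self.inbox)
            self.genome = [g + random.gauss(0.0, sigma) for g in parent]
            self.active = True
        else:
            self.active = False
        self.inbox = []

def run(num_robots=20, lifetimes=50, steps_per_lifetime=100):
    robots = [Robot() for _ in range(num_robots)]
    for _ in range(lifetimes):
        for _ in range(steps_per_lifetime):
            for r in robots:
                r.act()
            for r in robots:
                r.broadcast(robots)
        for r in robots:
            r.end_of_lifetime()
    return sum(r.active for r in robots)

if __name__ == "__main__":
    print("active robots after run:", run())
```

Note that selection here is implicit: genomes that cause a robot to meet more conspecifics spread more copies, without any explicit fitness function, which is precisely the environment-driven pressure discussed above.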
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> The Letter derives and verifies the concept of antenna focusing by planar ‘microstrip reflectarray’ antenna. Experimental results from schematically distributed patch radiators with microstrip delay lines are consistent with the simulation results. The advantages and disadvantages of such an antenna are described. The measured overall antenna efficiency is 48% at scan angles up to 30°. These results demonstrate the feasibility of such an antenna. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> Microstrip reflectarrays typically use tuning stubs on each element to adjust the phase of the reflected field. The authors describe a new approach in which the need for tuning stubs is eliminated and phase control is achieved simply by adjusting the resonant length of the patch elements. The advantages of this approach are described, as are a full-wave analysis technique for computing the phase of the reflected field as a function of patch size and a design curve giving the change in patch size for a desired reflected field phase shift. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> This paper presents the study and prototype demonstration of the concepts of antenna focusing, cross-polarization reduction, and multiple-polarization capability of a planar microstrip reflectarray antenna. A square patch with two equal microstrip delay lines connected to its two orthogonal feeding points is used as the antenna element of the planar reflectarray. The length of the delay lines, which varies from patch to patch, is schematically designed to focus the plane wave to the feed point. The measured overall efficiency of the prototype antenna is above 50% in the normal operating band, and in some frequency ranges the efficiency has reached 70%. The experimental patterns show that the measured cross polarization due to special arrangement of the delay lines is quite low at the direction of the main beam. X-polarization and Y-polarization measured data show that the antenna is suitable for multiple-polarization applications (dual linear and dual circular). Surprisingly, the antenna has achieved approximately greater than 7% of gain bandwidth (-3 dB gain drop). These results demonstrate the feasibility of such an antenna for radar and communication system applications. > <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> This paper discusses the theoretical modeling and practical design of millimeter wave reflectarrays using microstrip patch elements of variable size. A full-wave treatment of plane wave reflection from a uniform infinite array of microstrip patches is described and used to generate the required patch-design data and to calculate the radiation patterns of the reflectarray. The critical parameters of millimeter wave reflectarray design, such as aperture efficiency, phase errors, losses, and bandwidth are also discussed. Several reflectarray feeding techniques are described, and measurements from four reflectarray design examples at 28 and 77 GHz are presented. <s> BIB004 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. 
INTRODUCTION <s> This paper demonstrates a novel means of achieving cophasal far-field radiation for a circularly polarized microstrip reflectarray with elements having variable rotation angles. Two Ka-band half-meter microstrip reflectarrays have been fabricated and tested. Both are believed to be the electrically largest reflectarrays ever developed using microstrip patches. One, a conventional design, has identical square patches with variable-length microstrip phase-delay lines attached. The other has identical square patches with identical microstrip phase-delay lines but different element rotation angles. Both antennas demonstrated excellent performance with more than 55% aperture efficiencies, but the one with variable rotation angles resulted in better overall performance. A brief mathematical analysis is presented to validate this "rotational element" approach. With this approach, a means of scanning the main beam of the reflectarray over a wide angular region without any RF beamformer by using miniature or micromachined motors is viable. <s> BIB005 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> A perturbation technique is described to investigate numerically mutual coupling in reflectarrays. We first define it and then demonstrate that the corresponding perturbation factor has the same meaning as mutual coupling within traditional phased array antennas. Finally the field perturbation approach is applied to a two-element dielectric resonator antenna (DRA) array using finite-difference time-domain (FDTD) modeling. Simulation results show that the proposed method can be used to efficiently analyze mutual coupling in real reflectarray configurations where ports do not exist and cells dimensions are not identical by nature. <s> BIB006 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> A novel structure using a variable microstrip ring with a slot ring on part of the ground plane is proposed as an auxiliary unit cell in a reflectarray. A progressive phase range larger than 682° has been achieved by changing the sizes of the microstrip rings and adding slot rings on part of the ground plane. A K-band reflectarray has been fabricated and shows very good performance. <s> BIB007 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> This paper presents a detailed investigation on the phase agility of reflectarray antennas designed in the X-band frequency range. A novel technique for the analysis of the required reflection phase from individual reflectarray elements to form a planar wavefront of the periodic aperture is presented. Various slot configurations embedded in the patch elements of reflectarrays are also proposed for the performance improvement of reflectarray antennas. The feasibility of using these slot configurations for frequency tunable reflectarrays and designing periodic structure of slotted patch elements are also demonstrated. The designed reflectarray antenna with attainable frequency tunability of 1700MHz, demonstrated that a maximum dynamic phase range of 320° and a volume reduction of up to 24.36% are achieved at 10 GHz. <s> BIB008 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. 
INTRODUCTION <s> Electronically tunable reflectarrays and amplifying reflectarrays have attracted considerable research in recent years, however, little work has been done to combine these two features simultaneously in a single design. This paper focuses on the design of such a reflectarray, which can be used as a high gain, reconfigurable transmitting antenna for communication links. The reflectarray element is an aperture-coupled microstrip patch that accepts a linearly polarized wave, phase shifts and amplifies the guided-waves in the transmission lines, and then re-radiates an orthogonally-polarized wave. First, the element design, modelling, stability analysis and experimental results are presented. Then a 48 element reflectarray prototype operating at 5.7 GHz is described and its two dimensional beam steering capability and amplifying nature are successfully demonstrated and verified. <s> BIB009 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> This paper presents an accurate analysis of different configurations of reflectarray resonant elements that can be used for the design of passive and tunable reflectarrays. Reflection loss and bandwidth performances of these reflectarray elements have been analyzed in the X-band frequency range with the Finite Integral Method technique, and the results have been verified by the waveguide scattering parameter measurements. The results demonstrate a reduction in the phase errors offering an increased static linear phase range of 225° which allows to improve the bandwidth performance of single layer reflectarray antenna. Moreover a maximum dynamic phase range of 320° and a volume reduction of 22.15% have been demonstrated for a 10 GHz reflectarray element based on the use of rectangular patch with an embedded circular slot. <s> BIB010 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. <s> BIB011 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter Wave, Massive-MIMO, smarter devices, and native support to machine-2-machine. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain. <s> BIB012 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. 
INTRODUCTION <s> Abstract This paper presents a thorough investigation of the relationship between reflection loss and dynamic phase distribution performance of three different reflectarray resonant elements. The tunability characteristics of rectangular, dipole and ring elements printed on grounded non-linear dielectric anisotropic substrates have been investigated at X-band frequency range using CST computer model. A detailed analysis of reflection loss and dynamic phase range, with respect to dielectric anisotropy is presented for different anisotropic liquid crystal substrate materials. Preliminary analysis results show that ring element offers the highest reflection loss and dynamic phase range of 56.54 dB and 248° respectively compared to rectangular element which offers 10.74 dB and 90° respectively. Furthermore the rectangular element attains a maximum frequency tunability of 796 MHz compared to ring element which attains 716 MHz. Moreover it has also been shown that an increase in dielectric anisotropy of non-linear materials affect dynamic phase ranges and frequency tunability of three resonant elements. <s> BIB013 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue. <s> BIB014 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> A graphene patch reflectarray is proposed for the first time to generate THz wave with orbital angular momentum (OAM). It is shown that by properly assigning the graphene patches with specific reflection coefficients, an OAM-carrying beam can be generated. The graphene reflectarray, which consists of 8 regions and each region contains about 170 patches with uniform chemical potential and size, has successfully produced the OAM-carrying beam of either 1 or −1 mode. <s> BIB015 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> I. INTRODUCTION <s> A reconfigurable graphene reflectarray is proposed for the generation of vortex radio waves at THz. First, a simple sectored circular reflective surface model with a plane wave at normal incidence is constructed to illustrate how vortex radio waves can be generated. Then, a graphene reflective cell is examined to demonstrate that the reflection coefficient can be controlled by changing the chemical potential and size of the graphene patch. 
Next, the sectored circular reflective surface is realized with the graphene reflective cells that are properly sized, arranged, and biased to satisfy the required reflection coefficients for various modes of vortex radio waves. Finally, the graphene reflectarray is excited with a horn antenna, showing from simulations that it can be dynamically reconfigured to generate the 0, $\pm 1$ , and $\pm 2$ modes of vortex radio waves at 1.6 THz. <s> BIB016
High data rates on the order of Gbps are required for current communication systems to evolve into future 5th Generation (5G) technology. These high data rates will mainly be supported by fast switching mechanisms, which are achievable at the short wavelengths of millimeter waves. Additionally, enhanced bandwidth and efficiency features of antenna systems are also required to meet the high data rate requirements BIB014 . The stated antenna features are also attainable at mm-wave frequencies. Considering the importance of mm-waves BIB011 , 5G frequency bands were recently allocated on a primary basis at the World Radiocommunication Conference (WRC-15) for possible future developments . Various frequency bands between 24.25 GHz and 86 GHz were proposed for 5G communication systems. However, the possible operation of 5G at lower frequencies was not completely ruled out. The main challenge associated with mm-wave propagation is its short communication distance with high path loss BIB012 . A suitably selected 5G antenna can overcome these propagation issues related to mm-waves. Two-dimensional array antennas with large electrical apertures and narrow beamwidths are good candidates for 5G operation BIB012 . Large electrical apertures only marginally affect the physical profile of the antenna due to the short wavelengths of mm-waves. As depicted in Figure 1 , the main architecture of a reflectarray antenna consists of an array of radiating elements on a flat surface that reflects the incident signals coming from a suitably distant feed . The planar and lightweight reflectarray antenna can reflect incident signals like a parabolic reflector, with the additional feature of beam scanning. Unlike phased arrays, beam scanning reflectarrays can work without the aid of any phase shifter or power divider . The bulky and curved design of the parabolic antenna is not a good candidate for high-frequency applications . On the other hand, a reflectarray antenna can readily be designed at frequencies ranging from microwave , BIB008 to terahertz BIB015 - BIB016 . The adaptability of the reflectarray to high frequencies makes it suitable for high gain and high bandwidth operation. The design procedures of the reflectarray play an important role in its performance improvement . Depending on its design architecture, a reflectarray antenna offers different trade-offs; for example, a narrow reflecting element provides a wide phase range, but at the cost of higher losses BIB009 , BIB013 . The basic architecture of a microstrip reflectarray antenna with square patches is shown in Figure 1 . The reflectarray antenna can be analyzed by a full wave technique BIB002 considering its resonant element as a unit cell, as depicted in Figure 1 . The mutual coupling effect of surrounding elements , BIB006 can be taken into account by applying proper boundary conditions. The main performance parameters of the unit cell element are its reflection loss, reflection phase and beamwidth . The bandwidth of a reflectarray depends on its reflection loss and reflection phase performance. In turn, its gain can be controlled by its size, which is governed by the beamwidth of the unit cell elements. As shown in Figure 1 , a wide beamwidth is required for a corner element of the reflectarray to properly collect the incident signals from a distant feed.
The reflection phase required on a reflectarray changes progressively as the distance between a selected element and the reference element increases, even under normal incidence. Therefore, a proper progressive phase distribution is required for each element on the reflectarray to achieve high gain in the required direction BIB007 . These reflection phase variations can be achieved by elements with variable size BIB004 , elements with variable rotation angle BIB005 , varying the length of the stub attached to the elements BIB003 , and by same-size elements with variable slots BIB010 . The distance of the feed (f) from the reflectarray defines the angle of incidence seen by each element. Proper illumination of the corner elements can be ensured with a large feed distance, but this also increases the antenna profile and produces spillover losses. The f/D ratio defines the feed distance, where D is the longest dimension of the reflectarray. The offset feed technique BIB001 can be selected to avoid the feed shadow created by a centred feed.
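To make the notion of a progressive phase distribution concrete, the sketch below evaluates the classical pencil-beam design condition: each element must compensate the spatial delay from the feed, minus the aperture-plane phase of the desired beam direction, modulo 360°. The array layout, frequency, feed position and beam direction used in the example are illustrative assumptions.

```python
import numpy as np

def required_phase(xy, feed_pos, theta_b, phi_b, freq_hz):
    """Progressive phase (radians, wrapped to [0, 2*pi)) that each element must
    add so the re-radiated field forms a pencil beam towards (theta_b, phi_b).
    xy: (N, 2) element coordinates in metres on the array plane (z = 0);
    feed_pos: (x, y, z) of the feed phase centre in metres."""
    xy = np.asarray(xy, dtype=float)
    c = 299792458.0
    k0 = 2 * np.pi * freq_hz / c                      # free-space wavenumber
    d_i = np.linalg.norm(np.c_[xy, np.zeros(len(xy))] - np.asarray(feed_pos), axis=1)
    # Path delay from the feed minus the aperture-plane phase of the desired beam.
    phase = k0 * (d_i - (xy[:, 0] * np.cos(phi_b) + xy[:, 1] * np.sin(phi_b)) * np.sin(theta_b))
    return np.mod(phase, 2 * np.pi)

# Example: 10x10 half-wavelength grid at 28 GHz, centre-fed with f/D = 0.8,
# beam steered 20 degrees off broadside in the xz-plane (illustrative values).
f = 28e9
lam = 299792458.0 / f
n = 10
coords = np.array([((i - (n - 1) / 2) * lam / 2, (j - (n - 1) / 2) * lam / 2)
                   for i in range(n) for j in range(n)])
D = n * lam / 2
phases = required_phase(coords, feed_pos=(0.0, 0.0, 0.8 * D),
                        theta_b=np.radians(20), phi_b=0.0, freq_hz=f)
print(np.degrees(phases[:5]).round(1))
```

In practice, the computed phase at each element is then mapped to a physical parameter of the unit cell, such as the patch size, rotation angle, stub length or slot dimension, using the unit cell design curve.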
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. REFLECTARRAY STRUCTURES <s> A class of antennas that utilizes arrays of elementary antennas as reflecting surfaces has been investigated. An antenna of this type is here called a Reflectarray. It has been found that the Reflectarray combines much of the simplicity of the reflector-type antenna with the performance versatility of the array type. The reflecting surfaces employed in these antennas are characterized by a surface impedance that can be synthesized to produce a variety of radiation patterns. The equations of the surface impedance as a function of the desired reflected phase front is derived for the lossless case and methods of realizing this surface impedance are presented. Experimental results of a waveguide array type Reflectarray are given including pencil beam, broad beam and scanning modes. Data on the effects of specific phase errors are presented. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. REFLECTARRAY STRUCTURES <s> This paper presents the design of a Ka-Band dielectric resonator antenna (DRA) reflectarray working at 30 GHz. The unit-cell is based on a DRA loaded by a metallic strip printed on the top surface. By varying the strip length, the desired phase variation is achieved. The DRA reflectarray consists of 24×24 unit-cells and is compatible with monolithic fabrication. A full-wave FDTD modelling is performed to simulate the whole antenna and validate a simplified calculation that can be used to predict the antenna performance. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. REFLECTARRAY STRUCTURES <s> The design and implementation of a reflectarray with a single-layer perforated dielectric substrate is presented. The perforated dielectric layer, used as the reflecting surface of the proposed antenna, is realized by drilling air holes with different diameters on the dielectric substrate. Thus, the effective permittivity of the dielectric substrate is altered by drilling holes with different diameters, and then the substrate is equivalent to an inhomogeneous dielectric layer that can be controlled by these holes to collimate the reflected waves in the special direction. The reflectarray is composed of 29 × 29 elements, and it covers an area of 24.65 × 24.65 cm2. Varying the hole's diameter can lead to 360 phase shift. The reflectarray is offset fed by a linearly polarized pyramidal horn antenna. Full-wave analysis software CST Microwave Studio, which is based on finite integration technique, is applied. The results are validated by measured results. The 3-dB beamwidth is 2.7°. A peak gain of 35.5 dB is predicted at 30 GHz. The reflectarray can cover Ka-band with sidelobe level below -20 dB for both E-plane and H-plane. <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. REFLECTARRAY STRUCTURES <s> A novel metal-only reflectarray antenna is proposed in this letter. By using a unique unified slot structure, the dielectric substrate commonly applied in conventional reflectarray antennas can be avoided. Various slot elements are investigated, and a prototype reflectarray antenna working at 12.5 GHz is studied for experimental verification. The simulation and measurement results show that good radiation characteristics are achieved by the proposed design. 
The measured gain is 32.5 dB with 1-dB gain bandwidth of 8.3%, which is comparable to reflectarrays consisting of conventional patch elements. The metal-only structure provides an innovative reflectarray configuration to better withstand the extreme outer space environment and effectively reduce the antenna cost. <s> BIB004 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. REFLECTARRAY STRUCTURES <s> A reflectarray element using a conductor cell with variable height is proposed. It is found that the reflection phase can be tuned by adjusting the height of the conductor cells arranged on the spatially discretised reflector plane. A complete linearised reflection phase thus can be achieved when the proposed conductor cell is used as a unit element. A millimetre-wave antenna has been designed using the proposed method and it shows 50% aperture efficiency with a half-wavelength cell height variation. <s> BIB005 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. REFLECTARRAY STRUCTURES <s> In this paper millimeter-wave reflectarray research, simulation and measurements results are presented. Waveguide elements were used as reflectarray unit cell. Exponential horn was used as illuminator. Low profile reflectarray transforms wavefront like a conventional parabolic reflector. The reflectarray aperture, supporting structure and illuminator were fabricated from aluminum alloy by CNC machine. Measured results show very good agreement with characteristics obtained from the simulations. Manufactured reflectarray provides 50% efficiency in 10% bandwidth. <s> BIB006
The nature and type of the elements of a reflectarray antenna also define its performance characteristics and form the basis of its classification. The response of the reflectarray antenna mainly depends on the type of material used to develop its resonant elements. Figure 2 classifies four different types of commonly used reflectarray antennas for performance improvement. A dielectric reflectarray is used to remove the conductor losses from its resonant behavior BIB003 . Its most common type is the Dielectric Resonator Antenna (DRA) reflectarray , BIB002 . The same tactic can be applied to create a fully conductor-based (metal-only) reflectarray BIB004 . It can improve the gain performance by eliminating the dielectric loss effects, especially at millimeter wave frequencies BIB005 . Another type of metallic reflectarray is the variable-depth waveguide reflectarray BIB006 . Its progressive phase distribution is associated with the lengths of its waveguide elements BIB001 . The most common type of reflectarray is the microstrip reflectarray . It provides the greatest design diversity by combining conducting and dielectric features. Electronic beamsteering is the main advantage this type holds over other types. The pros and cons of each type with respect to gain and efficiency improvement will be discussed in the coming sections.
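As a brief illustration of how the variable-depth waveguide type realises its progressive phase, an idealised short-circuited guide model can be used: the round trip to the short gives a reflection phase that varies linearly with depth, so sweeping the depth over half a guide wavelength covers a full 360° range. The sketch below assumes a lossless TE10-mode rectangular guide and ignores aperture and higher-order-mode effects; the guide width and frequency are illustrative.

```python
import numpy as np

def shorted_guide_phase(depth_m, a_m, freq_hz):
    """Idealised reflection phase (degrees) of a short-circuited rectangular
    waveguide element of depth `depth_m` operating in the TE10 mode.
    Assumes a lossless guide and ignores aperture/higher-order-mode effects."""
    c = 299792458.0
    k0 = 2 * np.pi * freq_hz / c
    kc = np.pi / a_m                          # TE10 cutoff wavenumber
    beta = np.sqrt(k0**2 - kc**2)             # guide propagation constant
    # Round trip to the short and back: reflection coefficient -exp(-j*2*beta*d)
    return np.angle(-np.exp(-1j * 2 * beta * depth_m), deg=True)

# Sweep the element depth over half a guide wavelength at 28 GHz with a
# WR-28-like width (a = 7.112 mm): the phase covers a full 360 degree range.
f, a = 28e9, 7.112e-3
lam_g = 2 * np.pi / np.sqrt((2 * np.pi * f / 299792458.0)**2 - (np.pi / a)**2)
for d in np.linspace(0, lam_g / 2, 5):
    print(f"depth = {d*1e3:5.2f} mm  ->  phase = {shorted_guide_phase(d, a, f):7.1f} deg")
```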
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> A technique for the measurement of scattering parameters ::: of an infinite reflectarray is presented which takes into account effects of mutual coupling. Infinite reflectarrays designed at 10 GHz with different substrate thicknesses have been analyzed in which maximum 10% and 20% bandwidth values from resonance are shown to be 157 MHZ and ::: 250 MHz, respectively, constructed on 0.508-mm thick substrate.Moreover, a figure of merit (FOM) has been defined which decreases from 0.320 to 0.26/MHz as the substrate thickness is increased from 0.127 to 0.508 mm, and the results demonstrate a significant decrease in the slope of reflection phase curve. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter Wave, Massive-MIMO, smarter devices, and native support to machine-2-machine. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue. <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> Multi-antenna technologies such as beamforming and Multiple-Input, Multiple-Output (MIMO) are anticipated to play a key role in “5G” systems, which are expected to be deployed in the year 2020 and beyond. With a class of 5G systems expected to be deployed in both cm-wave (3-30 GHz) and mm-wave (30-300 GHz) bands, the unique characteristics and challenges of those bands have prompted a revisiting of the design and performance tradeoffs associated with existing multi-antenna techniques in order to determine the preferred framework for deploying MIMO technology in 5G systems. In this paper, we discuss key implementation issues surrounding the deployment of transmit MIMO processing for 5G systems. 
We describe MIMO architectures where the transmit MIMO processing is implemented at baseband, RF, and a combination of RF and baseband (a hybrid approach). We focus on the performance and implementation issues surrounding several candidate techniques for multi-user-MIMO (MU-MIMO) transmission in the mm-wave bands. <s> BIB004 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> In this paper, a new dense dielectric (DD) patch array antenna prototype operating at 28 GHz for future fifth generation (5G) cellular networks is presented. This array antenna is proposed and designed with a standard printed circuit board process to be suitable for integration with radio frequency/microwave circuitry. The proposed structure employs four circular-shaped DD patch radiator antenna elements fed by a 1-to-4 Wilkinson power divider. To improve the array radiation characteristics, a ground structure based on a compact uniplanar electromagnetic bandgap unit cell has been used. The DD patch shows better radiation and total efficiencies compared with the metallic patch radiator. For further gain improvement, a dielectric layer of a superstrate is applied above the array antenna. The measured impedance bandwidth of the proposed array antenna ranges from 27 to beyond 32 GHz for a reflection coefficient (S11) of less than -10 dB. The proposed design exhibits stable radiation patterns over the whole frequency band of interest, with a total realized gain more than 16 dBi. Due to the remarkable performance of the proposed array, it can be considered as a good candidate for 5G communication applications. <s> BIB005 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> A circularly polarized patch antenna for future fifth-generation mobile phones is presented in this paper. Miniaturization and beamwidth enhancement of a patch antenna are the two main areas to be discussed. By folding the edge of the radiating patch with loading slots, the size of the patch antenna is 44.8% smaller than a conventional half wavelength patch, which allows it to be accommodated inside handsets easily. Wide beamwidth is obtained by surrounding the patch with a dielectric substrate and supporting the antenna by a metallic block. A measured half power beamwidth of 124° is achieved. The impedance bandwidth of the antenna is over 10%, and the 3-dB axial ratio bandwidth is 3.05%. The proposed antenna covers a wide elevation angle and complete azimuth range. A parametric study of the effect of the metallic block and the surrounding dielectric substrate on the gain at a low elevation angle and the axial ratio of the proposed antenna are presented. <s> BIB006 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. 
In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage. <s> BIB007 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> This paper presents an adaptive beam-steering antenna for non-contact vital sign radar system at 5.8 GHz. A 2X2 microstrip patch antenna and two phase shifters are manufactured on the same board. Phase shifters consisting of 4 single pole double throw (SPDT) switches are integrated into the matching network of the phase array antenna to save the area. Beam-steering function is controlled by the active phase shifter. The DC bias circuit of the phase shifter is connected with the back side of the antenna through headers. Both simulation and measurement show an antenna bandwidth of 200 MHz centered at 5.8 GHz, which is suitable for a 5.8 GHz non-contact vital sign radar system. The radiation beam of this antenna can steer from -22° to 22° in H-plane, which increases the antenna coverage to 85° without sacrificing antenna gain. Measurement shows the adaptive beam-steering antenna can successfully detect human vital signs while the fixed-beam 2×2 antenna fails due to the limited and fixed antenna beam coverage. The improved performance without increasing antenna area makes the adaptive beam-steering antenna more suitable for commercial application. <s> BIB008 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> Advances in reflectarrays and array lenses with electronic beam-forming capabilities are enabling a host of new possibilities for these high-performance, low-cost antenna architectures. This paper reviews enabling technologies and topologies of reconfigurable reflectarray and array lens designs, and surveys a range of experimental implementations and achievements that have been made in this area in recent years. The paper describes the fundamental design approaches employed in realizing reconfigurable designs, and explores advanced capabilities of these nascent architectures, such as multi-band operation, polarization manipulation, frequency agility, and amplification. Finally, the paper concludes by discussing future challenges and possibilities for these antennas. <s> BIB009 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> This article presents a simple method to realize polarization diversity in broadband reflectarrays. The wideband characteristic of the reflectarray is achieved using rectangular patch elements arranged in a subwavelength grid on a single layer of substrate; while the polarization diversity of the reflectarray is obtained by simply rotating the feed horn antenna. 
As the two orthogonal x- and y-component of the rectangular patch element demonstrate relatively negligible interaction, the circular polarized (CP) reflectarrays with linearly polarized feed that have been proposed previously are found to be capable of supporting quadruple polarizations by means of rotating the feed. Based on the rectangular patch elements, an offset-fed 405 × 405 mm2 reflectarray with 0.3λ grid and centered at 10 GHz is designed and developed for right-hand circular polarization (RHCP). In an effort to realize the polarization diversity, the feed horn antenna is subsequently rotated relative to the array with angles of 0°, 90°, and 135° for vertical polarization, horizontal polarization, and left-hand circular polarization (LHCP), respectively. The viability and effectiveness of the proposed simple method for polarization diversity is experimentally verified. The measured results show that the 1-dB gain bandwidth for all four polarizations can reach as large as 18%. Furthermore, the 3-dB axial ratio bandwidths for CP operations are remarkably wide, above 36% and 40% for RHCP and LHCP, respectively. © 2015 Wiley Periodicals, Inc. Microwave Opt Technol Lett 57:305–310, 2015 <s> BIB010 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> Wireless systems increasingly rely on the accurate knowledge at the transmitter side of the transmitter-to-receiver propagation channel, to optimize the transmission adaptively. Some candidate techniques for 5th generation networks need the channel knowledge for tens of antennas to perform adaptive beamforming from the base station towards the mobile terminal. These techniques reduce the radiated power and the energy consumption of the base station. Unfortunately, they fail to deliver the targeted quality of service to fast moving terminals such as connected vehicles. Indeed, due to the movement of the vehicle during the delay between channel estimation and data transmission, the channel estimate is outdated. In this paper, we propose three new schemes that exploit the ?Predictor Antenna? concept. This recent concept is based on the observation that the position occupied by one antenna at the front of the vehicle, will later on be occupied by another antenna at the back. Estimating the channel of the ?front? antenna can therefore later help beamforming towards the ?back? antenna. Simulations show that our proposed schemes make adaptive beamforming work for vehicles moving at speeds up to 300 km/h. <s> BIB011 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> A hybrid antenna is proposed for future 4G/5G multiple input multiple output (MIMO) applications. The proposed antenna is composed of two antenna modules, namely, 4G antenna module and 5G antenna module. The 4G antenna module is a two-antenna array capable of covering the GSM850/900/1800/1900, UMTS2100, and LTE2300/2500 operating bands, while the 5G antenna module is an eight-antenna array operating in the 3.5-GHz band capable of covering the $C$ -band (3400–3600 MHz), which could meet the demand of future 5G application. Compared with ideal uncorrelated antennas in an $8 \times 8$ MIMO system, the 5G antenna module has shown good ergodic channel capacity of $\sim 40$ b/s/Hz, which is only 6 b/s/Hz lower than ideal case. 
This multi-mode hybrid antenna is fabricated, and typically, experimental results such as S-parameter, antenna efficiency, radiation pattern, and envelope correlation coefficient are presented. <s> BIB012 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. ANTENNAS IN 5G COMMUNICATIONS <s> The advancement in the current communication technology makes it incumbent to analyze the conventional features of reflectarray antenna for future adaptability. This paper thoroughly reviews the design and experimental features of reflectarray antenna for its bandwidth improvement in microwave and millimeter wave frequency ranges. The paper surveys the fundamental and advanced topologies of reflectarray design implementations, which are needed particularly for its broadband features. The realization of its design approaches has been studied at unit cell and full reflectarray levels for its bandwidth enhancement. Various design configurations have also been critically analyzed for the compatibility with the high-frequency 5G systems. <s> BIB013
The main candidates for 5G communications are massive MIMO systems due to their possible integration with small cells BIB003 , BIB002 , BIB004 . However, their design complexity and lower adaptability to shorter wavelengths prevent them from reaching the level of array antennas. Some types of antennas have already been recommended in BIB005 - BIB012 for 5G operation. However, enhancing only the bandwidth of a proposed antenna does not solve all issues regarding 5G compatibility BIB006 . Some other important parameters like gain, efficiency, polarization diversity and adaptive beamsteering also require significant improvements BIB002 , BIB007 - BIB010 . Improved gain performance ensures strong transmission capabilities for the antenna BIB007 . In the case of 5G, where antenna systems are required to work at mm-waves, communication distances decrease significantly due to the short wavelengths. In this case, a high gain antenna can substantially compensate for the path loss BIB005 , BIB008 without increasing the power consumption. A high aperture efficiency ensures the best utilization of the available gain for the reduction of path loss BIB011 . On the other hand, the data rate can also be increased by enhancing the spectral efficiency of antenna systems BIB003 . The closest competitor of the reflectarray antenna for 5G is the phased array antenna. However, the main problem associated with phased arrays is their lack of efficiency at millimeter wave frequencies due to additional losses BIB009 . Moreover, their design complexity and power consumption are also major issues at millimeter wave frequencies. On the other hand, the antenna parameters discussed for possible 5G applications are readily achievable with the reflectarray antenna. Its bandwidth is a function of its unit cell design and substrate thickness BIB001 . Its gain can be improved by increasing its physical aperture to produce sharp beams . Its efficiency depends on its loss performance and the feeding mechanism used. Different design configurations of patch elements can be utilized for various polarization combinations. Adaptive beamsteering can be achieved by dynamically tuning its reflection phase response . A vast variety of works can be found in the literature for the improvement of each parameter needed for a 5G compatible reflectarray. A detailed review of the broadband features of the reflectarray antenna for 5G communications has already been presented in BIB013 . It has been mentioned in that article that the bandwidth of a reflectarray can be improved by creating extra resonances in the same structure. This can be done through multi-resonance or dual band designs. A fractal element is the best example of a multi-resonance design, while it can also be achieved by combining two or more elements on the same surface. Design complexity increases with frequency, while mutual coupling can also be an issue for combined elements. On the other hand, dual band designs can also be constructed on dual layers, representing their separate resonances. However, the attachment of two layers is a difficult task to perform at shorter wavelengths. A rather easy way to increase the bandwidth of a reflectarray is to increase its reflection phase range by attaching an extra phase tuning stub to its elements. However, if not handled properly, the extra tuning stubs can produce leakage currents that alter the polarization of the reflected signal.
Detailed information regarding each bandwidth enhancement technique, its possible issues, and the solutions to counter those issues can be found in BIB013 . In this work, the emphasis is placed especially on the design configurations needed for reflectarray gain and efficiency enhancement. Some selected works at microwave and millimeter wave frequencies have been taken into account for the detailed analysis. The analysis of design techniques has been categorized into unit cell and full reflectarray designs. Section II covers high gain approaches in reflectarrays, explaining the importance of different design mechanisms for performance improvement. Section III presents techniques for high efficiency reflectarrays.
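Before moving to the gain-oriented techniques, the path-loss argument made above for high gain mm-wave antennas can be quantified with a simple free-space budget based on the Friis formula. The link distance and the pair of frequencies below are illustrative assumptions, not figures taken from the cited works.

```python
import numpy as np

def fspl_db(freq_hz, dist_m):
    """Free-space path loss (dB) from the Friis transmission formula."""
    c = 299792458.0
    return 20 * np.log10(4 * np.pi * dist_m * freq_hz / c)

# Illustrative comparison: moving from 2.6 GHz to 28 GHz adds ~20.6 dB of
# free-space loss at the same distance, which can be recovered by higher
# aperture gains at the transmit and/or receive side.
d = 200.0                                   # link distance in metres (assumed)
for f in (2.6e9, 28e9):
    print(f"{f/1e9:5.1f} GHz: FSPL = {fspl_db(f, d):5.1f} dB")

extra_loss = fspl_db(28e9, d) - fspl_db(2.6e9, d)
print(f"extra loss at 28 GHz: {extra_loss:.1f} dB "
      f"-> compensated by ~{extra_loss/2:.1f} dB more gain at each link end")
```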
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> II. HIGH GAIN REFLECTARRAY DESIGN TECHNIQUES <s> This paper discusses the theoretical modeling and practical design of millimeter wave reflectarrays using microstrip patch elements of variable size. A full-wave treatment of plane wave reflection from a uniform infinite array of microstrip patches is described and used to generate the required patch-design data and to calculate the radiation patterns of the reflectarray. The critical parameters of millimeter wave reflectarray design, such as aperture efficiency, phase errors, losses, and bandwidth are also discussed. Several reflectarray feeding techniques are described, and measurements from four reflectarray design examples at 28 and 77 GHz are presented. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> II. HIGH GAIN REFLECTARRAY DESIGN TECHNIQUES <s> This paper revisits the loss phenomenon (particularly, the dielectric loss) for a microstrip patch in reflectarray mode, and discusses the reflection characteristics (magnitude and phase) for a reflectarray element with low- and high-loss substrates. First, the dielectric losses that occur in a lossy slab backed by a perfect electric conductor are both analytically and numerically investigated. Using similar numerical analysis, the reflectarray element (a patch on top of a slab backed by a conductor) is characterized, based on dielectric losses and reflection behavior. It is observed that for low-loss substrates, the dielectric loss decreases with increasing substrate thickness (as previously suggested in the literature). More importantly, for high-loss substrates, the dielectric loss no longer follows the expected trend (decreasing loss with increasing substrate thickness). The dielectric loss becomes a complex phenomenon, involving the dielectric loss tangent and substrate thickness. It is therefore noted that it is important to recognize the well-behaved and misbehaved phase-swing region for high-loss substrates for a reflectarray element. A simple circuit-model representation is provided for the reflectarray element. The anomalous phase behavior observed for high-loss substrates is explained using pole-zero analysis. Waveguide measurements are performed to quantify these reflectarray losses for low- and high-loss substrates. Finally, the loss mechanisms in a patch reflectarray (scattering mode) are compared to a patch antenna (radiation mode), using parameters such as reflection power and radiation efficiency, and similar loss mechanisms for both structures are apparent. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> II. HIGH GAIN REFLECTARRAY DESIGN TECHNIQUES <s> An infinite reflectarray antenna in the X-band frequency range has been designed, and various slot configurations have been proposed to optimize the design of reconfigurable reflectarray antennas in the X-band frequency range. It has been demonstrated that the introduction of slots in the patch element causes a decrease in the maximum surface current density (J) and electric field intensity (E) and hence causes a variation in the resonant frequency of the reflectarray. Waveguide simulator technique has been used to represent infinite reflectarrays with a two patch unit cell element and scattering parameter measurements have been carried out using vector network analyzer.
A change in resonant frequency from 10 GHz to 8.3 GHz has been shown for a slot width of 0.5W (W is the width of patch element) as compared to patch element without slot. Furthermore, a maximum attainable dynamic phase range of 314° has been achieved by using slots in the patch element constructed on a 0.508 mm thick substrate with a maximum surface current density (J) of 113 A/m and electric field intensity (E) of 14 kV/m for 0.5W slot in the patch element. <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> II. HIGH GAIN REFLECTARRAY DESIGN TECHNIQUES <s> A dual-offset reflectarray demonstrator has been designed, manufactured and tested for the first time. In the antenna configuration presented in this paper, the feed, the sub-reflectarray and the main-reflectarray are in the near field one to each other, so that the conventional approximations of far field are not suitable for the analysis of this antenna. The antenna is designed by considering the near-field radiated by the horn and the contributions from all the elements in the sub-reflectarray to compute the required phase-shift on each element of the main reflectarray. Both reflectarrays have been designed using broad-band elements based on variable-size patches in a single layer for the main reflectarray and two layers for the sub-reflectarray, incident field. The measured radiation patterns are in good agreement with the simulated results. It is also demonstrated that a reduction of the cross-polarization in the antenna is achieved by adjusting the patch dimensions. The antenna measurements exhibit a 20% bandwidth (12.2 GHz-15 GHz) (with a reduction of gain less than 2.5 dB) and a cross-polar discrimination better than 30 dB in the working frequency band. <s> BIB004
Along with bandwidth, the gain of a reflectarray antenna is an important factor when a higher data rate with large throughput over a wider coverage area is required. The reflectarray is a directional antenna; therefore, its gain in a particular direction is normally higher than that of an ideal isotropic radiator. The gain depends on the aperture size, and a large aperture is essential for high gain applications. A two dimensional reflectarray with a pencil beam acquires a higher gain than a linear reflectarray with a fan beam pattern. Spillover and ohmic losses are the main contributors to the degradation of gain performance. Spillover losses depend on the electrical aperture of the reflectarray along with the position and type of the feed used, while ohmic losses are generated by the dissipation of energy within the materials used in its fabrication. Other factors such as the element type, element features and element position can also affect the gain of the reflectarray. Element gain and beamwidth play an important role in gain enhancement. For a high gain reflectarray it is desirable to place high gain, narrow beam elements in the middle of the array and wide beam elements at the corners, because the middle elements receive the feed signals at near-normal incidence, which is not possible for the corner elements. However, this may increase the design complexity of the reflectarray antenna. A low side lobe level with a negligible level of cross polarization is also essential for a high gain value . The gain and efficiency of a reflectarray are correlated; however, in this section the main emphasis is on gain enhancement techniques, and approaches for efficiency enhancement are discussed later in the next section. The most common and simple approach to increase the gain of a reflectarray antenna is to increase its aperture size. A large two dimensional aperture can achieve high gain values due to a pointed pencil beam . However, a large physical aperture can degrade the performance of the reflectarray antenna because the signals coming from the feed may not reach the edge elements. This effect is illustrated in Figure 3 (a), where the feed signal does not reach the edge elements, reducing the electrical aperture of the reflectarray and causing illumination losses. Increasing the focal distance (f) of the feed can partially eliminate this issue, but a large focal distance also has its own consequences. The opposite issue is depicted in Figure 3 (b), where the feed signals exceed the physical aperture of the reflectarray and generate spillover losses due to the waves diffracted from the edges . In this case the signals coming from the feed are not fully utilized. This shows that illumination and spillover losses are complementary to one another. The beamwidth of the feed antenna can be properly adjusted to balance the illumination and spillover losses. Additionally, these losses can be controlled and the gain improved by using an additional sub-reflector with the reflectarray BIB004 . The sub-reflector can be designed to reflect the signals from the feed so that they point exactly at the aperture of the main reflectarray. This tactic eliminates the drawback of positioning the feed at a large distance from the reflectarray in order to illuminate its full physical aperture. However, the design effort is doubled, with an increase in the complexity and cost of the system.
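To make the trade-off between spillover and illumination efficiency concrete, the short numerical sketch below models a centre-fed circular aperture illuminated by an idealised cos^q(θ) feed and estimates the resulting aperture efficiency and gain. The frequency, aperture diameter, feed exponent q and f/D values are illustrative assumptions, not figures taken from any of the cited designs.

```python
import numpy as np

# Illustrative sketch (assumed values only): spillover vs. illumination (taper)
# efficiency of a centre-fed reflectarray with an idealised cos^q(theta) feed
# pattern and a circular aperture of diameter D.

c = 3e8
freq = 28e9                 # example mm-wave frequency
lam = c / freq
D = 30 * lam                # aperture diameter (assumed)
q = 6.5                     # feed pattern exponent in cos^q(theta) (assumed)

def efficiencies(f_over_D):
    """Return (spillover, illumination) efficiency for the cos^q feed model."""
    f = f_over_D * D
    theta_e = np.arctan(D / (2 * f))            # half-angle subtended by the rim
    # Spillover: fraction of the feed power intercepted by the aperture
    eta_s = 1.0 - np.cos(theta_e) ** (2 * q + 1)
    # Illumination (taper) efficiency, computed numerically for an aperture
    # field model E(rho) ~ cos^q(theta) / r (feed taper times path spreading)
    rho = np.linspace(0, D / 2, 2000)
    r = np.sqrt(rho ** 2 + f ** 2)
    theta = np.arctan2(rho, f)
    E = np.cos(theta) ** q / r
    A = np.pi * (D / 2) ** 2
    num = (2 * np.pi * np.trapz(E * rho, rho)) ** 2
    den = A * 2 * np.pi * np.trapz(E ** 2 * rho, rho)
    return eta_s, num / den

for f_over_D in (0.4, 0.6, 0.8, 1.0, 1.2):
    eta_s, eta_i = efficiencies(f_over_D)
    eta_a = eta_s * eta_i
    gain_dB = 10 * np.log10(eta_a * (np.pi * D / lam) ** 2)
    print(f"f/D={f_over_D:.1f}  spillover={eta_s:.2f}  "
          f"illumination={eta_i:.2f}  est. gain={gain_dB:.1f} dBi")
```

Running the sketch shows the familiar pattern described above: a small f/D intercepts almost all of the feed power (high spillover efficiency) but illuminates the edge very weakly (low illumination efficiency), while a large f/D does the opposite, so the feed beamwidth or f/D is chosen to balance the two.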
Another way to increase the gain of a reflectarray antenna is to decrease the ohmic losses contributed by its conductor and dielectric materials. This can be done at the unit cell level, where the reflection loss can be optimized through various design parameters BIB002 - BIB003 . However, low loss and wide bandwidth unit cells can also contribute to large phase errors BIB001 , making it difficult to obtain the low side lobe levels required for high gain performance. The basic gain enhancement approaches mentioned above can be further evolved into various advanced techniques applied either to a single unit cell or to a full reflectarray. The gain of a full reflectarray can be governed by careful attention to its unit cell design parameters: the type of patch element, the dielectric material and the scattering parameters can drastically affect the performance of the full array. Additionally, a high gain reflectarray can also be analyzed in terms of its profile, type (full metal, full dielectric or conventional) and feeding mechanism. Tactics for advancing the reflectarray at the unit cell and the full reflectarray level are discussed in the following sections.
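As a complement, the following hedged sketch uses a simple transmission-line model to show how the dielectric loss tangent and substrate thickness shape the reflection magnitude and phase of a bare grounded slab at normal incidence. It covers only the substrate contribution (no patch), so it illustrates the loss mechanism discussed in BIB002 rather than reproducing that work's full element characterization; the permittivity, thickness and frequency values are assumptions.

```python
import numpy as np

# Hedged sketch: reflection of a normally incident plane wave from a grounded
# lossy dielectric slab, via a transmission-line model. Bare substrate only,
# so it shows the dielectric-loss trend, not a complete unit-cell response.

eta0 = 376.73          # free-space wave impedance (ohm)
c = 3e8

def grounded_slab_reflection(freq, er, tan_d, thickness):
    """Complex reflection coefficient of a metal-backed lossy dielectric slab."""
    er_c = er * (1 - 1j * tan_d)              # complex permittivity
    k = 2 * np.pi * freq / c * np.sqrt(er_c)  # propagation constant in the slab
    eta_d = eta0 / np.sqrt(er_c)              # wave impedance in the slab
    z_in = 1j * eta_d * np.tan(k * thickness) # shorted (grounded) line input impedance
    return (z_in - eta0) / (z_in + eta0)

freq = 26e9                                    # example mm-wave frequency
for tan_d in (0.001, 0.02):                    # low-loss vs. high-loss substrate
    for t in (0.25e-3, 0.5e-3, 1.0e-3):        # thickness in metres
        gamma = grounded_slab_reflection(freq, er=3.0, tan_d=tan_d, thickness=t)
        print(f"tan_d={tan_d:<6} t={t*1e3:.2f} mm  "
              f"|Gamma|={abs(gamma):.3f}  phase={np.degrees(np.angle(gamma)):7.1f} deg")
```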
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH GAIN REFLECTARRAY OPERATION <s> A novel structure using a variable microstrip ring with a slot ring on part of the ground plane is proposed as an auxiliary unit cell in a reflectarray. A progressive phase range larger than 682° has been achieved by changing the sizes of the microstrip rings and adding slot rings on part of the ground plane. A K-band reflectarray has been fabricated and shows very good performance. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH GAIN REFLECTARRAY OPERATION <s> An architecture to simultaneously affect both amplitude and phase control from a reflectarray element using an impedance transformation unit is demonstrated. It is shown that a wide range of control is possible from a single element, removing the conventional necessity for variable sized elements across an array in order to form a desired reflectarray far-field pattern. Parallel plate waveguide measurements for a 2.2 GHz prototype element validate the phase and amplitude variation available from the element. It is demonstrated that there is sufficient control of the element's reflection response to allow Dolph-Tschebyscheff weighting coefficients for major-lobe to side-lobe ratios of up to 36 dB to be implemented. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH GAIN REFLECTARRAY OPERATION <s> A novel method is introduced to design a single layer, dual-band large printed reflectarray with open loop elements of variable size for both bands. The reflectarray is designed for two frequency bands: 11.4-12.8 GHz for receive and 13.7-14.5 GHz for transmit. Different classes of open cross loop elements were used in the design of the receive band elements. Noting the larger relative bandwidth at the lower band as compared to the upper band, the dimensions of these cross loops are adjusted, using an optimization technique to achieve required phase distribution at the center frequency and minimize frequency dispersion at extreme frequencies of the lower band. Double square open loop elements with variable loop length were used for the transmit band elements. The reflectarray consists of 3 × 3 panels of 40 cm × 40 cm, that are arranged side by side to construct the large 120 cm × 120 cm reflectarray. The flat configuration and modular nature of this reflectarray gives it an advantage from the installation point of view as compared to conventional dish antennas. <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> A. DIFFERENT ELEMENTS WITH HIGH GAIN REFLECTARRAY OPERATION <s> The design and implementation of a reflectarray with a single-layer perforated dielectric substrate is presented. The perforated dielectric layer, used as the reflecting surface of the proposed antenna, is realized by drilling air holes with different diameters on the dielectric substrate. Thus, the effective permittivity of the dielectric substrate is altered by drilling holes with different diameters, and then the substrate is equivalent to an inhomogeneous dielectric layer that can be controlled by these holes to collimate the reflected waves in the special direction. The reflectarray is composed of 29 × 29 elements, and it covers an area of 24.65 × 24.65 cm2. Varying the hole's diameter can lead to 360 phase shift. 
The reflectarray is offset fed by a linearly polarized pyramidal horn antenna. Full-wave analysis software CST Microwave Studio, which is based on finite integration technique, is applied. The results are validated by measured results. The 3-dB beamwidth is 2.7°. A peak gain of 35.5 dB is predicted at 30 GHz. The reflectarray can cover Ka-band with sidelobe level below -20 dB for both E-plane and H-plane. <s> BIB004
The design of the unit cell element plays an important role in reflectarray performance improvement. A notable work based on a unit cell with a slot in the ground plane was presented in BIB001 . A ring element was used with a slot ring of the same type in the ground plane, as shown in Figure 4 (a). The ground slot was used to modify the surface currents and electric fields of the unit cell, and this modification offered a higher bandwidth compared to the unit cell without the ground slot. Additionally, the leaky waves generated from the ground slot were essential to produce a higher gain. A gain of 33.4 dB was obtained at 22 GHz by combining 1056 such elements on a 24×23 cm reflectarray surface. The ground slot was, however, also responsible for back radiation 4 dB higher than that of a conventional reflectarray without a ground slot, generated by the discontinuity of the ground plane. In another work, the amplitude and reflection phase of the unit cell element were electronically controlled BIB002 to achieve the desired response. The task was performed by an impedance transformation unit governed by an electronic circuit containing a varactor diode. The circuit was connected to the square patch element through a coaxial probe, as shown in Figure 4 (b). A bias voltage was applied to the circuit to modify the reflection parameters of the unit cell by transforming its impedance. The proposed technique is useful for designing a full reflectarray with the desired gain without the need for variable-size patches. The complexity of the unit cell and its transformation into a full reflectarray is the main issue when working at high frequencies. The gain enhancement strategy can also be applied to a single layer dual band reflectarray with two different types of patches. Such a work was proposed in BIB003 , where two different open loop elements were selected for the transmit (13.7-14.5 GHz) and receive (11.4-12.8 GHz) bands, as shown in Figure 4 (c). A gain of 40.6 dB was obtained with a 120×120 cm reflectarray constructed with both elements. This approach shows that, in order to achieve a high gain value with dual band operation, a large physical aperture is required, because a single surface has to accommodate both frequency bands. The ohmic losses of a reflectarray can also be reduced, and its gain increased, by replacing the metallic patches with drilled holes BIB004 . In the proposed work, circular holes were drilled in the substrate to create an inhomogeneous grounded dielectric. The unit cell with drilled holes is shown in Figure 4 (d), where the size of the holes was varied across the aperture to form the full reflectarray. 29×29 such unit cells were combined to construct a reflectarray with 34.7 dB gain at 30 GHz. This design is suitable for higher frequencies, but drilling holes in a thin substrate could complicate micro-level fabrication. However, the number of air holes per unit cell can be reduced by optimizing their performance at different frequencies.
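The perforated-substrate idea in BIB004 can be illustrated with a rough sketch: the hole diameter sets the air volume fraction of the cell, a simple volume-fraction mixing rule (an assumption here, not the synthesis method of the cited work) gives an effective permittivity, and the grounded-slab model converts that into a reflection phase. All dimensions and material values below are hypothetical.

```python
import numpy as np

# Rough illustration of the perforated-substrate concept (assumed values, not
# the design procedure of the cited work): a larger hole removes more dielectric,
# lowering the effective permittivity of the cell, which in turn shifts the
# reflection phase of the grounded slab.

eta0 = 376.73
c = 3e8

def eff_permittivity(er, hole_d, cell):
    """Volume-average permittivity of a square cell with one through hole."""
    v_air = (np.pi * (hole_d / 2) ** 2) / cell ** 2   # air volume fraction
    return er * (1 - v_air) + 1.0 * v_air

def reflection_phase(freq, er_eff, thickness):
    """Reflection phase (deg) of a lossless grounded slab at normal incidence."""
    k = 2 * np.pi * freq / c * np.sqrt(er_eff)
    eta_d = eta0 / np.sqrt(er_eff)
    z_in = 1j * eta_d * np.tan(k * thickness)
    gamma = (z_in - eta0) / (z_in + eta0)
    return np.degrees(np.angle(gamma))

freq, er, cell, t = 30e9, 10.2, 5e-3, 3e-3     # example values only
for hole_d in np.linspace(0.5e-3, 4.5e-3, 5):
    er_eff = eff_permittivity(er, hole_d, cell)
    print(f"hole={hole_d*1e3:.1f} mm  er_eff={er_eff:.2f}  "
          f"phase={reflection_phase(freq, er_eff, t):7.1f} deg")
```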
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. FULL REFLECTARRAY BASED TECHNIQUES <s> Benzocyclobutene (BCB) polymer is proposed in this study as an innovative and versatile substrate material for millimetre-wave reflectarrays. The excellent material features in terms of low loss, low dielectric constant and strong dielectric stability against frequency and temperature are highlighted. The design, realisation and test of a 60 GHz reflectarray with 21×21 variable-sized patches giving a focused boresight beam pattern are fully described. The BCB thickness is chosen on the basis of a parametric analysis performed on the unit cell to optimise the performances in terms of dielectric losses and radiation bandwidth. The in-house fabrication process able to assemble multi-layered BCB substrates is described in detail. Experimental results on the radiation pattern and the boresight gain are used to validate both the synthesis and the manufacturing process. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. FULL REFLECTARRAY BASED TECHNIQUES <s> A low cross-polarization center-fed microstrip reflectarray antenna is presented. The cross-polarization elimination is accomplished by a particular arrangement of the elements. Using this technique, a center-fed reflectarray antenna has been designed at center frequency of 11.7 GHz. Measurement results are compared with that of a previously reported reflectarray antenna. Both antennas are exactly the same in every aspect except for the elements arrangement. Measurement results show 1 to 12 dB reduction in cross-polarization level and a notable enhancement in the new antenna's overall gain. <s> BIB002 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. FULL REFLECTARRAY BASED TECHNIQUES <s> A single-layer reflectarray with a combination of element types is proposed. Three element types-square patch, square ring and ring-loaded patch (RLP)-are used together to design the reflectarray by compensating for the drawbacks of one type by the others. The gain of the proposed reflectarray is 29.1 dBi, which is 1.9 dB higher than that of the reference reflectarray that was designed with only the RLP. <s> BIB003 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> B. FULL REFLECTARRAY BASED TECHNIQUES <s> Design and implementation of a dual-band single layer microstrip reflectarray are presented in this communication. The proposed reflectarray operates in two separated broad frequency-bands within X and K bands. Each element in the reflectarray consists of a circular patch with slots, and two phase delay lines attached to the patch. The required phase shifts in X and K bands are obtained by varying the lengths of the phase delay lines. The proposed element has more than 500 and 800 degrees linear phase range within 9.2 ~ 11.2 GHz (X-band) and 21 ~ 23 GHz (K-band), respectively. Measurement results show the maximum gain of 26.2 dB at 10.2 GHz with 16% 1-dB gain bandwidth and 29.7 dB at 22 GHz with 9.1% 1-dB gain bandwidth. With proper arrangement of the elements in the array, the cross-polarization is reduced. The measured efficiency is 47% at 10.2 GHz and 25% at 22 GHz. <s> BIB004
The gain of a reflectarray antenna is ultimately governed and analyzed at the level of the full reflectarray. The combined effects of the patch elements, substrate, feeding strategy and type of reflectarray significantly control its gain behavior. The effects of the unit cell patch element on gain performance have already been discussed in the previous section. A related tactic was adopted in BIB003 , where the gain of a reflectarray was improved by combining different element types on the same surface. The reflectarray is shown in Figure 5 , where a square patch (SP) was used together with a square ring (SR) and a ring loaded patch (RLP) at 15 GHz. Through this configuration the gain was improved to 29.1 dBi, which was 1.9 dB higher than that of the conventional RLP reflectarray. Additionally, the side lobe level was improved by 3.8 dB compared to the RLP reflectarray. The properties of the substrate material also have an impact on gain performance. In a notable work BIB001 , a substrate material named Benzocyclobutene (BCB) was used for the performance improvement of the reflectarray. The selected material had low dielectric losses with strong stability at higher frequencies. A 21×21 reflectarray of square patches was tested with the proposed substrate material at 60 GHz and achieved a 29 dB gain. This work shows that optimized dielectric losses can significantly improve the gain performance at higher frequencies while keeping the number of elements small. The other aforementioned parameters of gain improvement in reflectarrays are discussed separately below to clarify each concept. The unwanted cross polarization reflections of a reflectarray antenna can be controlled with various approaches. The proper arrangement of the elements on the surface of the reflectarray and its feed mechanism can be used to optimize its cross polarization and hence its efficiency performance. A representative work was reported in BIB002 , where the cross polarization of the reflectarray was controlled by the proper arrangement of the elements on its surface. A circular element with two open ended phase tuning stubs was used for this purpose. The elements were arranged in such a way that each element mirrored the design of its adjoining element, as shown in Figure 11 (a). This approach was proposed to cancel the dissimilar reflections from the reflectarray surface and hence enhance its performance in terms of gain and efficiency. A 21×31 reflectarray was tested with the proposed configuration and a significant improvement in cross polarization level was achieved compared to the conventional arrangement of elements, as listed in Table 3 . It can be observed from Table 3 that the cross polarization level can be reduced by 1 dB to 12 dB in the E and H planes at different frequencies. This approach also enhanced the gain performance by 1.3 dB. FIGURE 11. Mirroring of elements for cross-polarization reduction (a) single band design BIB002 (b) dual band design BIB004 .
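All of the full-array designs above rest on the same phase-synthesis step: each element must cancel its own feed path delay and add a linear phase term for the desired beam direction. The sketch below evaluates this textbook relation for a hypothetical 21×21 grid; the spacing, feed position and beam angle are illustrative and not taken from any cited design.

```python
import numpy as np

# Minimal sketch of the standard reflectarray phase-synthesis relation:
# phi_i = k0 * ( R_i - sin(theta_b) * (x_i*cos(phi_b) + y_i*sin(phi_b)) ),
# where R_i is the feed-to-element distance. Geometry values are illustrative.

c = 3e8
freq = 15e9
k0 = 2 * np.pi * freq / c

n = 21                                        # 21 x 21 example grid
d = 0.5 * c / freq                            # assumed half-wavelength spacing
xs = (np.arange(n) - (n - 1) / 2) * d
X, Y = np.meshgrid(xs, xs)
feed = np.array([0.0, 0.0, 0.15])             # assumed feed position (m)

theta_b, phi_b = np.radians(20), np.radians(0)  # desired beam direction

R = np.sqrt((X - feed[0]) ** 2 + (Y - feed[1]) ** 2 + feed[2] ** 2)
phase = k0 * (R - np.sin(theta_b) * (X * np.cos(phi_b) + Y * np.sin(phi_b)))
phase_deg = np.degrees(np.mod(phase, 2 * np.pi))  # wrapped 0-360 deg per element

print(phase_deg.round(1)[:3, :3])             # required phases on a corner 3x3 block
```

In practice each computed phase is then mapped back to an element dimension (patch size, loop length, delay-line length or hole diameter) through the unit-cell phase curve discussed earlier.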
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 1) REFLECTARRAY WITH A SUB-REFLECTOR <s> The design, construction and measured performance is described of an offset parabolic reflector antenna which employs a reflectarray subreflector to tilt the focused beam from the boresight direction at 94 GHz. An analysis technique based on the method of moments (MoM) is used to design the dual-reflector antenna. Numerical simulations were employed to demonstrate that the high gain pattern of the antenna can be tilted to a predetermined angle by introducing a progressive phase shift across the aperture of the reflectarray. Experimental validation of the approach was made by constructing a 28 × 28 element patch reflectarray which was designed to deflect the beam 5° from the boresight direction in the azimuth plane. The array was printed on a 115 µm thick metal backed quartz wafer and the radiation patterns of the dual reflector antenna were measured from 92.6-95.5 GHz. The experimental results are used to validate the analysis technique by comparing the radiation patterns and the reduction in the peak gain due to beam deflection from the boresight direction. Moreover the results demonstrate that this design concept can be developed further to create an electronically scanned dual reflector antenna by using a tunable reflectarray subreflector. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 1) REFLECTARRAY WITH A SUB-REFLECTOR <s> A dual-offset reflectarray demonstrator has been designed, manufactured and tested for the first time. In the antenna configuration presented in this paper, the feed, the sub-reflectarray and the main-reflectarray are in the near field one to each other, so that the conventional approximations of far field are not suitable for the analysis of this antenna. The antenna is designed by considering the near-field radiated by the horn and the contributions from all the elements in the sub-reflectarray to compute the required phase-shift on each element of the main reflectarray. Both reflectarrays have been designed using broad-band elements based on variable-size patches in a single layer for the main reflectarray and two layers for the sub-reflectarray, incident field. The measured radiation patterns are in good agreement with the simulated results. It is also demonstrated that a reduction of the cross-polarization in the antenna is achieved by adjusting the patch dimensions. The antenna measurements exhibit a 20% bandwidth (12.2 GHz-15 GHz) (with a reduction of gain less than 2.5 dB) and a cross-polar discrimination better than 30 dB in the working frequency band. <s> BIB002
As explained earlier, a sub-reflector with a main reflectarray is used mainly to compensate for the losses generated by a conventional feed horn during illumination. These losses have an immense impact on the gain performance of the reflectarray antenna as well as on its cross polarization and side lobe level performance. A dual reflectarray antenna was recommended in BIB002 for these advantages, where one reflectarray served as a sub-reflector for the main reflector. The design architecture of the proposed antenna is shown in Figure 6 (a), which depicts that both reflectors were placed in the near field of each other, eliminating the far-field effects of the sub-reflector. Conventional square patches were used for the fabrication of both reflectarrays. The sub-reflector was designed on a dual layer substrate while the main reflectarray was placed on a single substrate. The dimensions of the patches of both reflectors were carefully optimized to reduce the cross polarization, which reached a level of −37.12 dB. A similar reduction in the side lobe level was also observed, and a maximum gain of 35.18 dBi was achieved at 15 GHz. The sub-reflector concept can also be extended to higher frequencies. In a similar scenario, a parabolic antenna was fed by a reflectarray sub-reflector at 94 GHz BIB001 , as depicted in Figure 6 (b). The main purpose of this work was to improve the gain performance of the parabolic reflector with a tilted beam. The high gain pattern was tilted by 5° by controlling the progressive phase distribution of the reflectarray sub-reflector. A very thin quartz wafer was used to construct the 28×28 element reflectarray with conventional square patches. A maximum gain of 37.44 dBi was achieved, which shows the feasibility of using parabolic reflectors at higher frequencies with a reflectarray sub-reflector. However, at such a high frequency a small error in the reflection phase of the sub-reflector can considerably affect the gain performance of the main reflector. FIGURE 6. (a) Reflectarray antenna with a reflectarray sub-reflector BIB002 (b) reflectarray sub-reflector for a parabolic reflector BIB001 .
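The beam tilting performed by the 94 GHz sub-reflectarray relies on a simple progressive phase gradient: a per-element increment of k0·d·sin(θt) along one axis steers the reflected beam by θt. The minimal sketch below evaluates this gradient for a 28-element row; the half-wavelength spacing is an assumption rather than the spacing of the cited array.

```python
import numpy as np

# Sketch of the progressive-phase principle used to tilt a reflectarray beam:
# a linear phase gradient of k0*d*sin(theta_t) per element along x steers the
# reflected beam by theta_t. Spacing and array size are example values only.

c = 3e8
freq = 94e9
lam = c / freq
k0 = 2 * np.pi / lam

d = 0.5 * lam            # element spacing (assumed half-wavelength)
theta_t = np.radians(5)  # desired beam deflection from boresight

delta_phi = k0 * d * np.sin(theta_t)                    # phase step per element (rad)
phases = np.mod(np.arange(28) * delta_phi, 2 * np.pi)   # one 28-element row, wrapped

print(f"phase step per element: {np.degrees(delta_phi):.2f} deg")
print("first five element phases (deg):", np.degrees(phases[:5]).round(2))
```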
A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 2) FEEDING MECHANISM <s> A new cell element is introduced for broadband reflectarray applications. The presented unit cell exhibits linear phase response which makes it a suitable candidate for broadband X-Ku band applications. This cell element consists of three concentric rectangular loops etched on a two-layer grounded substrate. The dimensions of the cell element have been optimized to achieve linear phase response in the operation band. A square offset-fed reflectarray of 40 cm × 40 cm was designed and fabricated based on this unit cell with wideband performance at X-Ku band. Considering three different feed positions, the whole reflectarray was simulated in CST and good agreement between simulated and measured results was observed. A maximum gain of 32 dBi was obtained which is equivalent to 58% aperture efficiency. Also, a remarkable value of 36%, 1.5-dB gain bandwidth was measured which is higher compared to previously reported designs in the literature. Another investigation that is carried out in this development through theory and simulation is determination of the effect of feed movement along the focal axis on the operating band of the reflectarray. It is shown for the first time that changing the feed location leads to a considerable shift in the operation bandwidth and maximum gain of the designed broadband reflectarray. <s> BIB001 </s> A Review of High Gain and High Efficiency Reflectarrays for 5G Communications <s> 2) FEEDING MECHANISM <s> In this study, we propose a novel design method to significantly reduce the volume of reflectarray antennas. Unlike the commonly used approaches, the distance (F) between a source antenna and a reflectarray is largely narrowed in this work, which is no longer than 0.3λ. Accordingly, the total area occupied by the reflectarray can also be reduced with almost no decline in performance. Instead of a directional antenna, a simple omnidirectional dipole antenna is used as a feeding source, which has traditionally been expected to lower aperture efficiency of the proposed antenna. To solve this problem, an additional condition to maximize antenna gain in a target direction is suggested, which applies to both waves reflected from the reflectarray and radiated directly from the source. Moreover, by assuming that an incident plane wave coming from a far-field region, ambiguity in the polarization and incidence angle of the incident field impinging onto the reflectarray can clearly be removed. As a result, a relatively high aperture efficiency and antenna gain can be maintained in spite of the extremely reduced volume that is more than 700 times electrically smaller than the conventional ones. Good agreement between the experiment and the prediction confirms the validity of our approach. <s> BIB002
The effect of feed movement has already been analyzed for bandwidth improvement. The same technique was also used to optimize the gain performance of reflectarrays BIB001 . A unit cell with three rectangular loops was proposed for the construction of a full reflectarray, and three feed positions of 33 cm, 40 cm and 47 cm were selected to examine its performance. It was observed at 15 GHz that, as the feed moved from 33 cm to 47 cm, the gain improved from 26.8 dBi to 33.2 dBi. The effect was reversed at the lower frequency of 10 GHz, where the gain was reduced from 31.6 dBi to 26.7 dBi for the same increase in feed distance. The reflectarray was actually designed at 10 GHz, and the shift in operating frequency was caused by the feed movement: an increasing feed distance shifted the operating band upwards, improving the gain at higher frequencies while degrading it at lower ones. Gain enhancement normally requires a significant increase in the aperture area of the reflectarray together with a large focal length of the feed. These two limitations of high gain reflectarray antennas were significantly reduced by the technique described in BIB002 . The directional feed was replaced with an omni-directional dipole antenna, and the focal distance was reduced to only 0.3λ without compromising the reflectarray performance. The schematic of the design is shown in Figure 7 , where the direct wave from the dipole feed is combined with the reflected wave to increase the gain of the reflectarray. The reflectarray was made of variable size rectangular patches operating at 1.84 GHz. A measured gain of 11.2 dBi was achieved through this approach, which was 3.38 dBi higher than the predicted gain of the reference reflectarray antenna.
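The frequency shift caused by moving the feed can be reproduced qualitatively with the idealised sketch below: the element phases are frozen at their values for an assumed design point (10 GHz, 33 cm feed height), and the aperture phase match is then evaluated for other feed heights over frequency. Real elements are dispersive, so this is only meant to show why a larger feed distance pushes the best-matched band upward; the aperture size and design point are assumptions, not data from the cited work.

```python
import numpy as np

# Idealised sketch of why moving the feed along the focal axis shifts the
# best-matched operating frequency upward. Assumptions: a single aperture cut,
# broadside beam, and element phases frozen at their design values.

c = 3e8
D = 0.4                                   # 40 cm aperture cut (example value)
x = np.linspace(-D / 2, D / 2, 201)       # element positions along the cut

f_design, h_design = 10e9, 0.33           # assumed design frequency / feed height
k_design = 2 * np.pi * f_design / c
phi_design = k_design * np.sqrt(x ** 2 + h_design ** 2)   # compensating phases

def phase_match(freq, h):
    """Aperture phase-match figure (1 = perfect) when the design phases are reused."""
    k = 2 * np.pi * freq / c
    phi_req = k * np.sqrt(x ** 2 + h ** 2)
    return abs(np.mean(np.exp(1j * (phi_design - phi_req))))

freqs = np.linspace(8e9, 18e9, 501)
for h in (0.33, 0.40, 0.47):
    match = np.array([phase_match(f, h) for f in freqs])
    best = freqs[match.argmax()]
    print(f"feed at {h*100:.0f} cm -> best phase match near {best/1e9:.1f} GHz")
```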