their use. Hookah is a waterpipe used to smoke specially flavored tobacco called shisha, which is heated with charcoal and filtered through the water-filled body of the pipe before being inhaled. LCCs contain pipe tobacco wrapped in tobacco leaves, come in flavored varieties, and are available with and without a filter and/or tip. Little cigars are about the same size as cigarettes, whereas cigarillos resemble full-sized cigars but are thinner and shorter. See Figures 1 and 2 for product images. Smoking tobacco in a hookah and use of LCCs are high among adolescents and young adults, even among those who do not smoke cigarettes. Recent data from the nationally representative Monitoring the Future survey show that among twelfth graders, 22.9% reported hookah use and 18.9% reported small cigar use in the past year. The same survey reported high rates of use among young adults, with 23.3% reporting hookah use and 18.6% reporting small cigar use in the past year. Use of these products is often associated with dual- or poly-tobacco use, including cigarette use. Cigars and cigarettes are the most common two-product combination among young adults over 18, with 3.5% reporting current dual use of these two products. Hookah and LCCs cause many acute and chronic health consequences, including many of the same consequences as cigarette smoking, such as cancer, heart disease, chronic bronchitis, and nicotine addiction. Additionally, many of the toxic constituents in cigarette smoke also exist in hookah and LCC smoke, such as carbon monoxide, tobacco-specific nitrosamines, and heavy metals, and are often found at higher levels than in cigarette smoke. However, adolescents and young adults erroneously believe hookah and LCCs to be less harmful and less addictive than cigarettes, and these misperceptions are positively associated with current use of these products.
There is limited research on adolescent and young adult perceptions of hookah and LCCs, and few studies have used qualitative methods. Qualitative methods facilitate discussion in which opinions and beliefs are expressed that might not be discovered using quantitative study designs. Most research regarding attitudes and beliefs about hookah and LCCs has focused on why individuals use these products, such as the social nature of use, the flavors available, and perceived lower health risks. However, what is not yet known is how adolescents' and young adults' attitudes and beliefs shape their perceptions of specific health issues associated with using these products. Understanding attitudes and beliefs about specific health issues associated with LCCs and hookah is important for developing effective messages to discourage use. In April 2014, the Food and Drug Administration (FDA) proposed a Deeming Rule that would extend the FDA's regulatory authority to cover all tobacco products meeting the statutory definition, including hookah and LCCs. As part of the proposed rule, the FDA will require the display of a health warning message directly on product packages and advertisements, to be implemented 24 months after the rule is finalized. Additionally, the FDA will be responsible for communicating the risks of these products to the public. Therefore, this study examines adolescent and young adult perceptions of hookah and LCCs and their health effects to inform messaging themes to effectively communicate the risks of using these products. --- Methods --- Participants and Recruitment We conducted 10 focus groups with 77 adolescent and young adult other tobacco product (OTP) users and susceptible nonusers from February to April 2014. Focus groups were stratified by age group and user status, with groups ranging in size from four to 13 participants. We defined current users as those who reported OTP use in the past 30 days.
We included those who had used or were susceptible to use any one or multiple OTPs because dual/poly tobacco use is high among this population, and because other tobacco products were also discussed. Nonusers who reported that they were willing to try any OTP were categorized as susceptible nonusers. Given the stratification of group composition, we had at least one group type discussing each product: adolescent OTP users, adolescent susceptible nonusers, young adult OTP users, and young adult susceptible nonusers. We recruited participants through purposive sampling methods across the Triangle region of North Carolina. Individuals were encouraged to visit the recruitment website and sign up to participate in a focus group session through several recruitment methods, including emails to various listservs, Craigslist, advertisements in college and local newspapers, local flyers, social media, radio and TV advertisements, and in-person recruitment. In-person recruitment was conducted at tobacco retail outlets, recreation centers, bars, coffee shops, colleges, and high schools. The recruitment website supplied interested people with detailed information about the study, contact information, and the option to complete an eligibility screener. Focus group participants were selected from the recruitment screener according to age and OTP use status. A research team member contacted individuals who were eligible based on the screener to confirm eligibility, and upon confirmation they were invited to the next available focus group. --- Focus Group Procedures Before the focus groups occurred, parents of adolescents were sent an informational letter about the study and given five days to respond if they did not want their child to participate. Before each focus group began, participants provided written informed consent or assent.
The focus groups lasted approximately 90 minutes and were facilitated by a three-person research team consisting of a moderator, co-moderator, and note taker. Half of the focus groups focused on hookah and the other half focused on LCCs. Participants also discussed OTP constituents and electronic nicotine delivery systems in all focus groups; those data are reported elsewhere. At the conclusion of each focus group, we provided participants a handout about the harms of using hookah and LCCs, as well as resources for quitting tobacco. Participants received a $50 Amazon gift card for their participation. The Institutional Review Boards at the University of North Carolina at Chapel Hill and Wake Forest School of Medicine approved this study. Additional privacy protection was secured by the issuance of a Certificate of Confidentiality by the Department of Health and Human Services. --- Measures A semi-structured moderator's guide was used to facilitate consistent discussion across groups. Participants were asked questions regarding product familiarity, attitudes, and risks, such as, "What do you know about LCCs [or hookah]?" and "What are the good [or bad] things, if any, about smoking tobacco from a hookah [or LCCs]?" Next, participants were asked about a series of health effects resulting from use of these products, and if the possibility of these effects would discourage them from product use. For example, "If you found out that smoking LCCs causes cancer, how much would that discourage you from using, if at all?" Health effects questions were asked about cancer, heart attacks, addiction to nicotine, secondhand smoke exposure, likelihood of smoking cigarettes, and cosmetic effects. Participants were then asked which, out of all of those effects, would be the most discouraging. Product pictures were used to ensure that participants knew exactly what product was being discussed.
--- Analysis The primary aim of our focus groups was descriptive and focused on understanding participants' perceptions of LCCs and hookah. Thus, our analytic approach was similar to grounded theory methods, in that the findings are 'grounded' in the data and developed inductively and in constant interaction with the data. Although we identified a priori research questions, we did not develop or test specific hypotheses. Focus group discussions were digitally recorded and transcribed verbatim by an independent transcriptionist. A codebook was developed using the moderator's guide, with codes for each question. This codebook was tested among the qualitative data analysis team using two randomly selected transcripts. Adjustments were made to create the final codebook, which was used by two pairs of coders to independently code the 10 transcripts in Atlas.ti 7.0, a qualitative data software program. Each team of two coders coded five of the ten transcripts. After the initial round of coding, the coders met and discussed all discrepancies to ensure consistent application of codes. After coding was complete, new files were created by extracting each code into individual files. The first author read through each file to identify emergent and recurrent themes, created written summaries of the themes related to each research question, and met with co-authors to discuss and synthesize themes across age group and OTP user status. --- Results Table 1 reports characteristics of the participants. Overall, 56% were female, 57% were white, and 26% were black. The mean age was 15.8 for adolescents and 20.5 for young adults. Forty-seven participants were current users of OTPs and 30 were susceptible nonusers. Perceptions across focus groups were fairly consistent, with only a few differences noted below. --- Perceptions of Hookah and LCCs Social activity: Participants consistently discussed the social nature of using hookah and LCCs.
These products are often used in social settings, such as parties, and their use brings people together. For example, one young adult OTP user said, "I think it's [hookah] more of a social thing. I don't know if anybody like does it on their own but if we go to a party or to a bar or something with a group of friends." Another young adult OTP user said, "It was more of like a group activity, like friends would go to stop at a gas station, pick up a couple Black and Milds and just chill out in the parking lot or something." Participants also discussed these products being used concurrently with alcohol. A young adult OTP user said, "You can make a social event out of it [smoking hookah]; you can drink while you're doing it. It's just fun." An adolescent OTP user said, "…when you drink, you just crave cigarettes or something to smoke on. Usually, they just smoke cigarettes or these [LCCs]." Participants said they often go to hookah bars to smoke with others rather than smoking alone, and that they meet new people there. Young adults, regardless of OTP use status, said that hookah bars were a fun place to go for those too young for traditional bars serving alcohol. Participants also reported that hookah and LCCs are used for smoke tricks. An adolescent OTP user said, "Lots of people, when smoking it [hookah], they prefer using this for smoke tricks because it's more smoke to get let out so it's easier for that." Participants stated that they liked to post videos of their smoke tricks on social media websites. 
A young adult OTP user said, "They're [LCCs] used for tricks…you can always get a lot of smoke in your mouth…someone takes a picture with a nice camera and there's a picture with you on Facebook with smoke coming out of your mouth and your nose and you're like, 'Oh, I've made it.'" Flavors: Adolescent OTP users and young adult OTP users and nonusers discussed the variety of available flavors for hookah and LCCs, listing several kinds they had tried or had heard of, such as wine, chocolate, and fruit. Adolescents especially liked that LCCs came in flavors, making them more enjoyable than smoking cigarettes. For example, one adolescent OTP user said, "Honestly, Swisher Sweets or Black & Milds are probably more enjoyable than cigarettes because cigarettes really put an awful taste in your mouth, and at least you have a flavor type thing." LCC-specific themes: Unique themes emerged that were specific to LCCs. All groups except adolescent nonusers discussed liking that LCCs can be purchased in small quantities, such as buying one or two at a time, and that they are relatively inexpensive. A young adult nonuser said, "There are some people that don't want to go buy a whole pack of cigarettes. They just want to smoke something. They'll go buy that [LCCs]." LCCs are also believed to last longer than cigarettes because people usually do not smoke an entire LCC at one time. A young adult OTP user said, "Like when they do [smoke], it's generally like a much kind of measured pace. Like I'll watch my friend who smokes Black & Milds…he'll have one and he'll short it when he's about halfway and won't take it all the way down and he'll do it [smoke] in multiple settings; I've noticed that with numerous people who smoke cigarillos or little cigars, is that they get what they need out of it and they short it and then they'll relight later." Young adult OTP users discussed LCCs as being substitutes for cigarettes when people run out of cigarettes.
One person said, "I know a lot of people that substitute Black & Milds for cigarettes. If they run out of cigarettes, they'll go and buy Black & Milds until the next day." Because participants were asked to discuss both the good and bad things associated with these products, some of the responses were contradictory within and across focus groups. Although many participants said they liked the flavors and social nature of LCCs, some adolescent and young adult OTP users also said that LCCs are just generally "gross" and have a bad taste. One adolescent OTP user said, "It was gross. I was like, this is not fun, so I didn't even finish it." --- Health Effects and Potential for Discouraging Use of Hookah and LCCs All participants brought up health effects as "bad" things associated with using hookah and LCCs. An adolescent OTP user said, "Just the fact that if you do it [smoke LCCs] too much, it will probably kill your lungs. Well, not kill them but seriously mess them up." In general, participants understood that smoking tobacco in any form is bad for their long-term health. However, they said that because hookah and LCCs are smoked infrequently, they did not perceive health effects to be a big concern. For example, a young adult OTP user said, "I feel like it depends on how frequently you use it [hookah]. If you're like an everyday user then you have a little bit-well not a problem but you've got to slow down a bit if you're worried about your health so it just depends on how frequently you use it." The smoke from both hookah and LCCs was discussed extensively by all participants, especially adolescents, as a negative aspect of the products that oftentimes leads to immediate health effects, such as lightheadedness, nausea, and headaches. Some people discussed that they get lightheaded while smoking hookah because they smoke too much and are sitting in a "hazy" room full of smoke.
Participants also talked about the smoke from LCCs as being unfiltered and very heavy and strong, especially compared to cigarettes. Participants discussed whether people are supposed to inhale LCC smoke, and what happens when they do inhale. A young adult nonuser said, "I don't know if you're actually supposed to inhale these or not. Cigars, I believe, are meant to be held in your mouth and that cigarettes go down that hatch." Adolescent users and young adult users and nonusers also discussed that they perceived the products to be less dangerous than cigarettes, such as having fewer or less severe health effects. Adolescents discussed this more extensively for LCCs. One adolescent said, "It [LCCs] seems like it's less bad for you because when you smoke a cigarette, I genuinely feel like I'm destroying my lungs. But when I smoke these, it didn't seem as bad." One young adult user said, "It's [hookah] definitely filtered, though, more so than a cigarette. If you do it right there's not supposed to be a direct flame on the tobacco. It's supposed to heat it up and vaporize it as opposed to directly burning it, I guess lessening the tar." People discussed hookah as being "pure" without the added ingredients and chemicals that are in cigarettes, leading them to conclude that hookah is healthier. For example, one young adult nonuser said, "It has less additives. I'm pretty sure it's just--I remember reading like different packets of it with my friend who had a hookah and she'd always try to put flavors stuff in it, it wasn't --I mean it still had like [a] warning on it, but it didn't have like as many additives. And I guess another thing; maybe you're not like burning paper and inhaling paper, too." Perceptions of discouragement: We were interested in how adolescents and young adults thought about specific health consequences of smoking hookah and LCCs, and if the possibility of those effects would discourage them from product use.
Cosmetic effects: Overall, participants indicated that the potential for cosmetic effects, such as wrinkled skin and yellow teeth, would most discourage their use because these effects are more immediate compared to chronic health conditions. One young adult OTP user stated, "That [cosmetic effects] would discourage me because I want to look as young as I can for as long as I can." Adolescents discussed fears about looking older than their actual age. One nonuser said, "I don't want to age-like look like I'm thirty when I'm twenty. That doesn't sound like fun." Young adults also discussed ways to mitigate cosmetic effects, such as using shea butter lotion and teeth whitener. A young adult OTP user said, "That [cosmetic effects] already bothers me, but then I just keep gum and then I whiten my teeth every day." Cancer & heart attacks: Participants had similar beliefs about the potential risks of both cancer and heart attacks, and said these potential risks would not discourage them from product use of hookah and LCCs. This was because of the infrequency of use and beliefs that they are too young to experience these health conditions. One young adult OTP user said, "I don't think anyone smokes them [LCCs] often enough." An adolescent OTP user said, "I'm 16. I don't really think about having a heart attack. I know that might sound ignorant, but you don't really hear about people our age having heart attacks. I know it's a long term thing, but I just don't think that far ahead." Participants said the potential for cancer specifically would not discourage them from using hookah and LCCs because they are constantly told that many different products and health behaviors cause cancer. An adolescent OTP user said, "Wouldn't do it [discourage] that much. Because you hear almost everything causes cancer these days. Everything; that causes cancer and that causes cancer, and it will be a water bottle or something." 
Participants also discussed ways to mitigate the risk of having a heart attack. A young adult nonuser said, "… And I feel like I do other stuff like eat right and work out and stuff, so it would kind of balance it out I guess." Specific to LCCs, young adult OTP users said that they can mitigate risk by not inhaling the smoke. Additionally, participants believed that heart attacks would only happen if there was a family history of heart disease. Nicotine addiction & gateway to smoking cigarettes: Nonusers of OTPs said that the potential for addiction or leading to cigarette smoking would discourage them from using the products. One young adult said, "It would probably totally discourage me because I wouldn't smoke cigarettes. So if they potentially have the same effect, then I wouldn't smoke cigar or cigarillo or what have you." Participants who self-identified as dual users were not worried because they were already addicted to nicotine. Others believed the products are not used frequently enough to cause nicotine addiction. One young adult nonuser said, "I don't ever feel myself craving it [LCCs] or anything like that. It's just something to do, like if I'm drinking or something, it's something to do. But it's never like, oh, God, I need a Black cigarette…" Secondhand smoke exposure: Participants were mixed in their perceptions of secondhand smoke exposure effects and discouragement from using hookah and LCCs. They discussed that the smoke from both hookah and LCCs smelled good. Additionally, they said that the point of a hookah bar is to be in the smoky atmosphere, and that hookah is not usually used in public places except for hookah bars. The mixed feelings about secondhand smoke exposure were particularly salient for LCCs. OTP users stated that they were not concerned about exposing others because people know the risks associated with secondhand smoke, so they should avoid groups of smokers in public. An adolescent said, "They would just walk away I guess. 
If you don't want it, then just stand away." A young adult said, "I mean, I feel like people that are around you while you're smoking know; I think everyone knows the risks of secondhand smoke, it's pretty publicized. So if they're around you, then they know the risks and they're taking the risks themselves, and they're probably smoking with you, honestly." However, OTP users discussed avoiding smoking these products around children because they have less control over their environment. --- Discussion The purpose of this study was to understand adolescents' and young adults' perceptions about hookah and LCCs to inform strategies for developing messages to discourage use. Some of our findings echoed those of previous qualitative studies focused on these products, such as liking the flavors , the social nature of using the products , and general beliefs about health risks . However, several novel findings emerged from this study. For example, adolescents and young adults said they use LCCs and hookah concurrently with alcohol, and enjoy doing smoke tricks and posting videos of these tricks to social media. This suggests that the products are not necessarily used for nicotine consumption, but instead there is a perceived "coolness" to using them. Additionally, it raises concern for the co-use of both alcohol and OTPs, as extensive literature has demonstrated the reinforcing effects that alcohol and nicotine have on each other . Additionally, the co-use of alcohol with these products reinforces the need for comprehensive smoke-free air laws. Participants also commented that hookah bars are an activity for those who are too young for traditional bars serving alcohol. Thus, social media platforms and hookah bars might be effective places to implement risk messages, as these appear to be fruitful channels for reaching youth who use these products. 
These novel findings help us better understand how these products are used and why they appeal to some adolescents and young adults. Our data have implications for risk communication strategies by highlighting potential messaging themes for campaigns or warnings that may be effective in discouraging adolescents and young adults from using hookah and LCCs. First, messages could focus on correcting misperceptions by increasing knowledge and beliefs about the immediate risks and consequences of using hookah and LCCs, even when used infrequently. This study focused on specific long-term health effects to better understand whether messages about health effects could discourage use of hookah and LCCs, which has not been addressed in previous research. In general, although they acknowledged the potential for health problems, participants were not worried about the potential long-term harms because hookah and LCCs are used infrequently. However, participants indicated that they get lightheaded from smoking and attributed it to being in a "hazy" room; they may not attribute this symptom to carbon monoxide intoxication. Because we did not ask about acute effects during the focus groups, additional research is needed to determine the effectiveness of this message theme and to compare it to messages focused on long-term health effects, to help researchers better understand which strategy will be most effective at reducing hookah/LCC use among adolescents and young adults. A second messaging theme is the cosmetic effects of using OTPs. Participants said that messages about cosmetic effects may be effective because yellow teeth and wrinkled skin are more immediate and salient than other outcomes, such as cancer.
Campaigns focused on cosmetic effects have shown promise in preventing and decreasing cigarette use among adolescents and young adults, although other studies have reported that messages with cosmetic themes are ineffective. The FDA's The Real Cost campaign has focused several advertisements on highlighting the cosmetic effects associated with cigarette smoking, and evaluation of that campaign is currently underway. Third, messages could focus on the dangerous chemicals in OTP smoke. Compared to cigarettes, participants generally perceived the products as purer, with fewer additives. Messages focused on the toxic chemicals in OTP smoke could correct the misperception that these products are generally safer than cigarettes. In addition to providing potential messaging themes, this study highlighted that more research is needed to identify effective warnings for hookah and LCC packaging and industry advertisements that would discourage use for all people. A recent meta-analysis indicated that warnings on cigarette packs that contain graphic imagery have greater effects on credibility, negative brand attitudes, and intention not to start smoking, among many other outcomes. Although most studies have focused on adults, some studies do indicate similar effects for adolescents and young adults. In addition to eliciting a strong, negative emotional response, one reason pictorial warnings may be effective is that they supplement the knowledge that many individuals already have about the risks of smoking cigarettes with images that may make those negative consequences seem more concrete and real. If people do not perceive hookah/LCCs to be risky because they are used infrequently, then the addition of pictorial warnings may be ineffective. However, it is also possible that putting pictorial warnings on LCCs or hookah may underscore that they are just as harmful as cigarettes, since our participants used cigarettes as a threshold for risk.
The proposed FDA Deeming Rule states that the FDA will require the display of a health warning message on packaging and advertising for OTPs. Our results suggest this warning might have differential impact depending on OTP use status. OTP users were not worried about addiction to nicotine because they were already addicted or using other nicotine products, whereas nonusers expressed more concern about the potential for nicotine addiction. The impact of this warning might therefore be stronger for susceptible nonusers than for users. Our findings also have implications for FDA regulations regarding hookah and LCCs. Participants stated that they liked the different available flavors for hookah and LCCs. Flavored tobacco products are more popular among adolescents and young adults than among older adults, so flavor bans, similar to those implemented for cigarettes under the Family Smoking Prevention and Tobacco Control Act, may reduce the appeal of these products to adolescents and young adults. Adolescents and young adults reported that LCCs are more affordable than a pack of cigarettes because they can be purchased in small quantities, which may encourage use. Increasing minimum pack sizes for LCCs would increase the cost, potentially reducing the appeal of LCCs to adolescents and young adults, who may have less spending money than adults. Increasing prices of cigarette packs has been a very effective tobacco control policy, leading to smoking cessation and prevention. --- Limitations Our findings may have limited generalizability because the study was conducted in one region of North Carolina. Results may differ for people from different racial, geographic, or socioeconomic backgrounds. However, some of our findings were similar to previous qualitative studies that included specific sub-populations, such as African Americans and urban communities. Additionally, our sample was highly educated.
Although educational attainment was not measured, many young adult participants were college students, so findings may not translate to young adults who are not in college. Also, the focus groups were structured so that no groups talked about both hookah and LCCs, so these products were not directly compared. Finally, being a current OTP user did not necessarily mean being a current hookah or LCC user. --- Conclusion This study provides an enhanced understanding of adolescents' and young adults' perceptions about hookah and LCCs to inform risk communication message development to discourage use of these products. Overall, participants were not worried about the long-term health effects associated with hookah and LCCs, but were more worried about the immediate consequences. Future research should focus on ways to creatively and effectively deliver messages that correct misperceptions regarding the risks associated with hookah and LCCs by highlighting the acute/immediate consequences of using these products. Figure 1. Cigar Product Types (Source: Wake Forest Baptist Health). Figure 2. Waterpipe Apparatus (Source: Wake Forest Baptist Health).
Use of hookah and little cigars/cigarillos (LCCs) is high among adolescents and young adults. Although these products have health effects similar to cigarettes, adolescents and young adults believe them to be safer. This study examined adolescent and young adult perceptions of hookah and LCCs to develop risk messages aimed at discouraging use among users and at-risk nonusers. Ten focus groups with 77 adolescents and young adults were conducted to explore their perceptions of the risks and benefits of hookah and LCC use. Participants were users of other (non-cigarette) tobacco products (n=47) and susceptible nonusers (n=30). Transcripts were coded for emergent themes on participants' perceptions of hookah and LCCs. Participants did not perceive the health effects associated with hookah and LCC use to be serious or likely to happen, given their infrequent use and their perceptions that these products are less harmful than cigarettes. Participants generally had positive associations with smoking hookah and LCCs for several reasons, including that they are used in social gatherings, come in various flavors, and can be used to perform smoke tricks. Because adolescents and young adults underestimate and discount the long-term risks associated with hookah and LCC use, effective messages may be those that focus on acute/immediate health and cosmetic effects.
Introduction Schools are institutions that both mirror the power relations of the wider world and actively produce them, including norms and behaviors informed by heteronormativity, meaning the assumption that heterosexuality and cisgender identity are the most normal and natural state of human sexuality and gender. Heteronormativity is communicated in schools overtly through the presence of homophobic and transphobic language, bullying behavior, gendered dress codes, and rules prohibiting "public displays of affection". Heteronormativity is also communicated covertly through school spaces, policies and practices, and widely shared values. School environments thus perpetuate structural stigma, referring to the mechanism through which institutional policy and practice and larger societal norms erase, discriminate against, and victimize LGBTQ+ populations. By decreasing or foreclosing social safety, which is manifest through social connection, inclusion, and protection, and by allowing minority stress experiences to occur, structural stigma contributes to negative mental, physical, and academic outcomes for LGBTQ+ populations. These outcomes include dropping out of school, self-harm, substance use, sexually transmitted diseases, and poor mental health. Conversely, schools can bolster LGBTQ+ health by improving social safety and decreasing minority stress through practices that resist heteronormativity and disrupt structural stigma. The Centers for Disease Control and Prevention (CDC) identify six practices that improve school culture and climate for LGBTQ+ youth.
These practices include identification of safe spaces on campus, such as Genders and Sexualities Alliances (GSAs), where LGBTQ+ youth can receive support from administrators, teachers, other school staff, and peers; prohibition of harassment and bullying based on sexual orientation or gender expression; provision of health education curricula inclusive of LGBTQ+ youth; professional development of staff on safe and supportive school environments; facilitation of access to medical providers experienced in serving LGBTQ+ youth; and facilitation of access to behavioral health providers experienced in serving this population. Mounting evidence demonstrates that these school-based supportive practices can positively affect the health and wellbeing of LGBTQ+ young people. The presence of safe spaces, GSAs, non-discrimination policies, inclusive curricula, and affirming school staff is associated with reduced homophobic victimization, increased school belonging, and perceptions of greater safety at school. These improvements in social safety and reductions in minority stress experiences lead to decreases in risk behaviors, improved academic outcomes, and better mental health. Finally, schools are key sites for prevention, screening, treatment, and referral to healthcare services for LGBTQ+ and other underserved youth who would otherwise face barriers to accessing appropriate, competent, and affirming care. There is thus a strong public health need to enact institutional change in schools to disrupt the processes of structural stigma and cultivate environments that are affirming, supportive, inclusive, and explicitly protective of LGBTQ+ populations. At the same time, schools are hierarchical and bureaucratic systems in which the control and use of power pose challenges to implementing new practices.
For example, the priorities set, decisions made, and resources directed by the upper levels of school administration shape school systems and limit the types of actions, activities, and behaviors allowed within their purviews. Recent work in implementation science that draws upon the writings of philosopher Michel Foucault showcases a typology of power that can be generated and leveraged in implementation practice. The three types are discursive power, epistemic power, and material power. These three types and their interplay enable and constrain possibilities for implementation efforts; importantly, not all stakeholders are equally able to wield each type. Just as the use of power can fulfill a dominant function or a resistant function, the same stakeholders in implementation contexts may wield power for both dominance and resistance. Initiating institutional change to improve health equity for LGBTQ+ youth thus requires operating within the complex webs of power comprising school environments and considering the constraints and possibilities afforded to staff and students through control over discursive, epistemic, and material power. For the present analysis, we examine the role of power in the work of school professionals to implement the six CDC-identified LGBTQ+ supportive evidence-informed practices (EIPs), outlined above. Our central question is: How does power operate to hinder or promote the ability of school staff to change school environments to increase safety and support for LGBTQ+ youth? Ultimately, we interpret the work of school staff in carrying out the EIPs as an exercise of resistant power against structural stigma and the dominant powers exercised through school and community social hierarchies. --- Study background We conducted a 5-year cluster randomized controlled trial in 42 secondary schools across the rural state of New Mexico.
Entitled "Implementing Strategies to Reduce LGBTQ+ Adolescent Suicide" (RLAS), the study examined the uptake and sustainment of the six CDC-identified EIPs for improving school support, safety, and mental health for LGBTQ+ youth. The Exploration, Preparation, Implementation, and Sustainment (EPIS) framework guided EIP implementation. EPIS is a four-phased implementation framework that emphasizes the careful examination of outer and inner contexts and bridging factors during the Exploration phase to inform future activities in the Preparation and Implementation phases. Outer contexts represent the higher levels of influence on schools, such as legislation and district policy, community-level advocacy, stigma, and funding. Inner contexts encompass the internal environments of schools, including physical and social organization and the attitudes and practices of staff and students. Important inner-context factors are school readiness for change, perceived need to change practices, school and staff values, and attitudes toward EIPs. Bridging factors that connect the outer and inner contexts include community-academic partnerships, coaching support, and formal contracts or memoranda of understanding. We paired EPIS with the Dynamic Adaptation Process (DAP), an iterative data-informed methodology for tailoring each step of EIP implementation to the specific school-community contexts. Central to the DAP was the formation and training of Implementation Resource Teams (IRTs) in each school, comprised of administrators, teachers, staff, and occasionally students. These IRTs were charged with local assessment, planning, and implementation of the six EIPs, and were supported by an implementation coach. We invited 145 New Mexico public high schools to take part in RLAS. As described elsewhere, school eligibility required student participation in the New Mexico Youth Risk and Resilience Survey (NM YRRS), the state's extension of the CDC's Youth Risk Behavior Surveillance System, for outcome monitoring purposes.
Additional inclusion criteria were the presence of a school professional and a high-ranking school administrator willing to support EIP implementation. However, the relative autonomy of schools to participate varied across the state. For example, some districts required the approval of their internal research review boards, whereas others required approval from a superintendent, principal, or both. Administrators of schools declining participation cited general impediments, concerns about fomenting negative reactions in conservative communities, assumptions that there were no LGBTQ+ students on campus, and the idea that LGBTQ+ students did not warrant specialized interventions. We later discovered that such beliefs were also present in the schools that enrolled in RLAS. In the end, we recruited 42 schools. After randomization into implementation and delayed implementation conditions, we assessed the baseline capacity of participating schools to implement the EIPs, including school needs, facilitators, and barriers. Schools in the implementation condition received tailored guidance and support from trained coaches for 3 years, followed by a 1-year sustainment period without coaching support. Those in the delayed condition received guidance and support from the coach in the final year of RLAS as the original implementation schools entered their sustainment phase. All schools received an annual payment of $500 to offset the costs involved in participation. --- Materials and methods --- Study context We conducted RLAS in the largely rural and culturally diverse state of New Mexico, USA. The state ranks 46th in median household income and has the third-largest percentage of residents below the poverty level in the nation. Hispanic/Latinx and Native American people comprise 60% of residents.
In New Mexico, about 5.9% of adults and 14.5% of high-school students identify as sexual minorities; 0.67% of adults and 3.2% of high-school students identify as gender minorities. The 2019 NM YRRS found that nearly half of the youth identifying as LGBTQ+ had experienced nonsuicidal self-injury, symptoms of depression, or suicidal ideation in the past year. Further, almost a quarter of sexual minority youth and a third of transgender youth had attempted suicide in the past year. When we initiated RLAS in 2016, only 17% of secondary schools in New Mexico had implemented all six of the focal EIPs. --- Data collection Between 2016 and 2021, we collected qualitative data annually with key staff from participating high schools. Each year, we invited the IRT leads and administrators from each school to participate in 60-min semi-structured interviews and IRT members to take part in a 90-min small group interview. Across the 5 years of data collection, a team consisting of five anthropologists and two master's level research assistants conducted the interviews, which were audio recorded and professionally transcribed. Questions in the semi-structured interview guides focused on implementation efforts and power structures. In addition, they examined knowledge of and comfort with LGBTQ+ youth, efforts to implement EIPs, and additional relevant outer-context factors per the EPIS. Table 1 describes the samples for the annual interviews. Bi-weekly debriefing meetings with study coaches, research staff, technical assistance experts, and principal investigators facilitated ongoing discussions of unfolding events and themes. The Pacific Institute for Research and Evaluation Institutional Review Board approved the research protocols and informed consent procedures.
--- Analysis process Four researchers, including two anthropologists and two master's level research assistants, conducted qualitative analysis using NVivo [Release 1.3], a qualitative data analysis application. The researchers iteratively coded professional transcriptions of interviews, compared between schools and participant types, and undertook targeted searches for specific concepts as needed. They applied deductive and inductive coding to identify themes and emergent patterns in the data, including inner- and outer-context variables and implementation status for each EIP. Example codes included "Safe Zones," "LGBTQ 101," "bathrooms, locker rooms," and "access to behavioral health providers." We maintained intercoder agreement during the routine review, discussion, and interpretation of coding output at biweekly team meetings. To facilitate cross-time-point and cross-site analysis, we synthesized data from coded transcripts, implementation coach activity logs tracking engagement with school sites, and notes from debriefing meetings into comprehensive school reports. These extensive case summaries chronologically described changes in the school-community environments between the baseline assessment period and the study's final year. Changes centered on the six focal EIPs, inner-context factors, and outer-context factors according to EPIS phases. We then analyzed school reports and engaged in another round of coding in which we employed sensitizing concepts related to power, i.e., "dominant," "resistant," and "material." Finally, we considered how the data relate to concepts of heteronormativity, how heteronormative discourses inform and are informed by school power structures, and how power is exercised to enable and constrain action within schools' hierarchical governance structures. --- Results We organize our findings into four sections. First, we focus on school characteristics affecting the ability of IRTs to enact change in schools.
This section illuminates the important role material power plays in implementation processes. Second, we discuss the dynamics described by participants as influencing their pursuit of changes, including the possible risks involved, the opposition expected or encountered, and the successes they experienced. These dynamics reflect how heteronormativity and structural stigma shaped the discursive power of decision-makers in schools. Third, we analyze how formal and informal leadership affected IRT efforts to implement EIPs, illustrating a web of power relations spanning outer and inner contexts and the usefulness of mobilizing formal hierarchies in schools to aid implementation. This section clarifies the influence of control over material power, the role of discursive and epistemic power in leveraging material power, and how leadership can serve as a mechanism for defending implementation efforts against power exerted by communities, parents, and staff to perpetuate structural stigma intentionally or unintentionally. Finally, we examine how formal authority imbued in policy can impact supportive intervention for LGBTQ+ youth. These findings demonstrate how participants thought about their own power to promote institutional change and the role of policy in perpetuating or disrupting structural stigma. --- Constraints of school characteristics Two common and interrelated characteristics of New Mexico schools posed challenges by limiting the availability of material power for implementation: staff turnover and scarcity of resources. Participants explained that resource scarcity, in the form of time constraints, low pay, high job stress, and living and working in underserved areas, contributed to high turnover. Turnover was a common problem for all our participating schools, regardless of geographical location.
A small rural school exemplified these difficulties: even before the implementation period officially began, recently hired administrators and multiple staff who had agreed to serve on the IRT moved on from their job posts. This instability in staffing led the school to withdraw from the study. School professionals elsewhere described being overextended due to staff shortages, clarifying that they could not fulfill their day-to-day job functions and fully support students. A nurse in an urban school explained, "I am alone a lot of times. . . . It's chaotic. I have these students that I wish I could spend more time with them and help, but I can't. . . Who can I call? I need a backup here." Participants also highlighted the tendency for a limited number of personnel to take on extra work responsibilities. The self-selected composition of IRTs exemplified the issue, as most members served on other school committees and as sponsors of student activities. Despite often being able to build on the experience and cachet accrued through such involvement to enable EIP implementation, these individuals struggled with feeling overwhelmed due to competing demands on their time and energy. Participants universally commented on the scarcity of material resources that contributed to the staffing shortages and subsequent hardships in changing health curricula. For example, after IRT members donated their time to identify, vet, and obtain approval for a new LGBTQ+ inclusive health curriculum from their rural school's administration, the school struggled to keep a health education teacher on staff long enough to implement it. The physical environments of school buildings constrained efforts at several sites to implement best practices for creating safe spaces.
For example, having gender-neutral restrooms was a noted source of tension, as there were few or no single-user restrooms (often the most feasible to modify from gendered to gender-neutral) available for students in most schools. In describing an IRT's progress in establishing gender-neutral restrooms, one school administrator disclosed, "We struggle with gender-neutral bathrooms, and it's not because we don't care. It's because of the age of our buildings. This building we're sitting in is as old as I am, and I'm 57. So, imagine that." The administrator added, "We're a poor district," and lamented the difficulties of adapting existing infrastructure and how resource scarcity foreclosed possibilities for change. The universal condition of scarcity among participating schools impeded material power to support EIP implementation by IRTs. --- Concerns about community perceptions and backlash The changes prioritized by IRTs were influenced by speculation about whether school and community stakeholders would view them favorably. Participants in rural and suburban areas often described their communities as socially conservative, noting the omnipresence of "traditional" gender roles, religiosity, and low knowledge and awareness of LGBTQ+ populations. They described outright discrimination, prejudice, or ignorance to which they were privy or had experienced firsthand and cited concerns about parental disapproval of EIPs and potential social or professional repercussions. These concerns illustrated the power exercised by communities over schools; staff recognized how mechanisms of structural stigma facilitated the exertion of this power. Both assumptions about community reactions and the lingering impacts of past experiences compounded the difficulty of initiating projects in schools and created hesitation among staff.
In some schools, concerns about community pushback led IRTs to focus on EIPs that members perceived as less controversial, such as providing non-mandatory professional development opportunities for staff rather than establishing a GSA that would directly involve students. Many participants had knowledge or experiences of pushback from school leadership, parents, and community members against prior school-based initiatives to support LGBTQ+ students. Their recollections of such pushback revealed how community sentiments influenced the school leadership's use of discursive power. Such recollections also influenced how participants conceptualized their roles and power in school and community hierarchies while discouraging them from modifying aspects of school climates. For instance, an IRT lead at an urban school expressed worry about the professional consequences of involvement with efforts considered "controversial" in the community, citing backlash resulting in job loss for a teacher who tried starting a GSA 25 years prior. At another urban school, an IRT member cited ill-fated efforts some years back to advocate for a gender-neutral dress code for prom. Vocal community members protested, prompting increased enforcement of the old dress code by the school administration. This participant stated that the protest led to her professional marginalization in the school district: "I've paid a price. I'm stuck here. I won't ever move anywhere up." In sum, it was common for knowledge and experience of past events to temper participants' perceptions of what types of changes were possible and thus their motivation to implement the EIPs. Some LGBTQ+ participants anticipated negative responses to their efforts to implement EIPs. For example, a self-identified lesbian teacher at a small urban school expressed wariness that being involved would be perceived as personally motivated or self-serving. She stated: A challenge for me is just always that. . .
because I'm out, right, I'm seen as like, 'Oh, this is a personal agenda,' and of course, it's deeply personal, but I'm not sure that it is an agenda, and I just know that it's like, these are the things that have proven to work, right? So, this is what we're doing to get this result and to do what's right for kids. This participant's tentativeness about becoming an implementation champion was grounded in her experience as a sexual minority and intimate knowledge of how heteronormative discursive power might negatively affect others' perceptions of her involvement. Concerns about parental backlash influenced how schools approached EIP implementation. In some cases, the power exerted by communities was successful, as illustrated by an IRT in a conservative urban area that wanted to post sexual and reproductive health resource flyers in school bathrooms. Members approached the principal, who requested that they obtain approval from the parent-teacher association (PTA). After discussing the matter, most parents in the PTA were open to making the information available, conveying the perspective that "other" children, not theirs, might need it. Yet, one upset parent threatened to make a huge scene. As a result, the PTA denied the IRT's request, and the principal followed suit. In other cases, participants sought to circumvent the influence of communities through their selection of EIP elements and the scale at which they implemented them. A principal at a suburban school described a need to help LGBTQ+ students without drawing the attention or ire of parents or the community. He expressed support for the EIPs and the wellbeing of LGBTQ+ students but raised concern that the surrounding community was deeply religious and upheld conservative and traditional values. Moreover, the school district had no established policy regarding supportive practices for LGBTQ+ students.
The principal stated, Whatever gains we make, like the group [GSA] and these kinds of services we provide, we're trying to do them carefully. We're trying to make sure we don't expose our LGBTQ population to a potential problem or a negative reaction. And so we're very careful about ensuring that we're not exposing them to that kind of potential harm. The sentiment of not wanting to place LGBTQ+ students in harm's way was common, especially in the study's first year, and was often used to rationalize hesitation or inaction related to EIPs. Other participants expressed apprehension about opposition from parents and communities. However, they proceeded with EIP implementation, reasoning that the benefits for students outweighed the anticipated repercussions. The participants from a small rural school wanted to start a GSA but fretted over possible criticism from parents and the community. They ultimately decided to move forward and reportedly received no complaints despite their fears. In fact, the GSA drew students and was quite successful. When asked how families or community members responded to IRT actions at another school, the administrator said: Well, for the most part, I think positive... And then I've also heard of a parent complaining and saying something to the administration along the lines of, 'You might as well take that American flag down and put the Gay Pride flag up.' Community pushback was not necessarily an insurmountable obstacle to change. For example, although an IRT at an urban school that worked to implement nongendered security entry lines received complaints from some parents, the principal defended the policy and kept it in place. In another urban school, an IRT organized a suicide prevention presentation that included a video and a panel discussion on supporting LGBTQ+ students. 
Afterward, an IRT member described pushback from parents who thought the LGBTQ+ focus was "too much" but clarified that the principal was "extremely supportive and she's never afraid to stand up to a parent." Rather than short-circuiting implementation, directly challenging the adverse reactions of parents and other community members could make it possible to move forward with new initiatives. These findings illustrate how community influence on school contexts can affect implementation. For some school staff, experiences and expectations of failure and pushback informed their perceptions of the risks and potential for success in implementing LGBTQ+ supportive practices; in some instances, standing up to community and parent disagreement helped carry implementation forward. To varying degrees, communities and families influenced what the IRTs could accomplish, as did leadership, as described below. --- Power of leadership The significant power of administrators in school hierarchies influenced the motivation and ability of staff to take action to implement EIPs. Administrators were gatekeepers to material power. Typically, those who were unsupportive cited community pressure, resource constraints, and individual beliefs and attitudes about the necessity and feasibility of implementing EIPs. The barriers presented by unsupportive administrators necessitated adaptations by the IRTs, such as changing which administrator they partnered with or prioritizing smaller actions within their spheres of influence. In one rural school, the IRT and its coach determined that the principal who originally agreed to the study did not support the EIPs because of their explicit focus on LGBTQ+ students. Upon the coach's recommendation, the IRT enlisted the support of another school administrator identified as having more time and interest in their work.
Going forward, the members reported greater communication and support from leadership and progress toward implementing EIPs, including establishing a GSA. In an urban school with non-responsive leadership, participants consulted with their coach about ignored requests, emails, and other efforts. Even their implementation coach's outreach was met with non-response. Despite the discouraging disengagement, the IRT independently organized well-received after-school educational presentations for school staff, reaching out to community-based intermediary organizations to deliver the content. In many instances, participants worked around obstacles created by leadership. However, formal power configurations in schools meant administrators had final decision-making authority. Unfortunately, this authority sometimes meant schools could not move forward with implementation efforts, whether for specific practices or the entire project. The participation of one school in a rural conservative community ceased prematurely when a new superintendent unilaterally opposed school involvement in efforts to address the needs of LGBTQ+ students and instructed the principal to withdraw despite the interest and commitment of school staff. The nurse at this school expressed her disappointment, saying: I'll even call them [students] and talk to them if I haven't seen them in a while. I will call them out [of class] and talk to them, but now the principal is telling me to leave it alone. The thing is that as a school nurse, he can't tell me what to do and what kids to see. I have a right to see any student, any student has a right to see me, they have a right to talk about anything they want to, and the principal has no say.
In contrast, many school administrators leveraged discursive and material power to support IRTs, enabling them to make changes beyond their scopes of influence, such as adjusting school policy, initiating structural changes to the environment, or fielding opposition from the staff and community. The assistant principal overseeing facilities in an urban school was instrumental in helping the IRT establish a single-user, gender-neutral restroom and in brainstorming strategies to make behavioral, sexual, and reproductive health information readily available to students at on-campus resource hubs. In addition to directly supporting change within schools, administrators defended the EIPs against critics. On the topic of professional development related to transgender students, IRT members at an urban school offered examples of the principal's support: [The principal] had no problem making sure that all of our staff participated in that training. . . . We did have a couple of hecklers in there. She was willing to make sure that that didn't distract from what we needed to know in order to keep all of our students safe and providing that information to all of our teachers. In another urban school, students supported by the IRT endeavored to change the school's homecoming court, which typically featured a "king" and "queen," by designating the winners of the competition as "royalty." This change, however, spurred some parents to complain. Aware of the nature of the student-led change, the principal assumed the responsibility of addressing parental demands to retain the traditional court. IRT members agreed that the principal took a rather radical stance in the context of this community by defending the new terminology and that students and staff were grateful to know that she "had their back." School leadership's support was not static, with many administrators growing into their willingness and ability to foster implementation over time.
In some cases, education on the needs of LGBTQ+ youth in their schools catalyzed major shifts in administrative support. The principal of a school in a conservative rural community espoused the belief that their school was safe for LGBTQ+ students. This school's IRT supported a gender-diverse student in sharing with the principal their experience of fearing violence in school and "being jumped on the way home." Both the principal and IRT described this conversation as a "game-changer" for implementation. Seeing the school through a student's eyes made the principal shift gears to support EIPs to nurture the safe and supportive environment she had presumed existed. We observed such shifts in other schools over time, albeit ones that tended to be more subtle in their manifestation. Administrators in the early years of RLAS generally lacked awareness of LGBTQ+ student needs, including their higher risk for suicidal behaviors. This starting point contrasts with interviews in later years when the same administrators displayed a strong understanding of the differential and increased support needs of LGBTQ+ students. This instance illustrates how epistemic power generated through the IRT highlighted otherwise unknown perspectives of students to shift the discursive power of leadership in their favor. Similar to staff turnover, instability in school leadership could compromise implementation progress. New administrators were often overwhelmed by their positions and had insufficient familiarity with schools, which rendered them unhelpful to the IRTs. Yet, turnover in leadership also introduced IRTs to new allies. In one urban school, 2 years of repeated requests to transition single-stall staff restrooms to gender-neutral restrooms accessible to students finally received a positive response when an incoming principal quickly agreed to the change. 
An IRT member praised his receptiveness, referring to it as "incredible, incredible" compared to "the first couple years of the grant that we were running into walls when it came to the gender-neutral bathrooms." The IRT member added, "That was a two-year battle, and then this guy came in, and poof, it just happened." Similarly, the new principal agreed to make time for all-staff professional development that had also been stalled for 2 years, enabling the IRT to organize training on LGBTQ 101 and Safe Zones for 50 school personnel. This change in administration further demonstrated the consolidation of discursive and material power at the top tiers of school leadership and the role of administrators in constraining or enabling action. Many participants suggested that aligning themselves with RLAS itself was empowering. They felt that their status as IRT members gave them some power, as the study was perceived to carry official expectations to which the school was beholden. One IRT member stated, "Now that we're overseeing this program, I feel more of a responsibility to make sure that I know what's going on with their GSA club and everything else going on throughout our whole campus." This sentiment was echoed by multiple other participants, underscoring the potential and importance of generating discursive and epistemic power within involved staff members. While formal leadership could easily constrain IRT actions through lack of buy-in and the use of their considerable discursive and material power, the IRTs able to negotiate and leverage the power of administrators in their schools found the most success with implementation. Leadership could garner the necessary resources for innovations and use their authority to challenge negative pushback from community and staff members. --- Power of policy Participants varied in their perspectives on the power of policy and their power over policy. We documented several key dynamics regarding policy in participating schools.
First, some IRTs deprioritized policy implementation, including one of the six EIPs on bullying and harassment prohibitions, because they believed their schools' current practices were safe and supportive. Second, policies already explicitly including protections for sexual orientation and gender identity and expression led participants to believe their schools had fully implemented this EIP, regardless of the absence of follow-through, training, or other mechanisms to translate policy into practice. Similarly, new policies at the state level had the potential to impact all schools, but a lack of dissemination and enforcement stifled change. Participants sometimes hesitated to address policy, asserting it did not fall within the purview of their roles vis-à-vis the school or the district. In contrast, several IRTs initiated reviews of current policies at the school and occasionally district levels and then took action to change them. These dynamics provide insight into participants' perceptions about their ability to initiate policy change at an institutional level and the discursive power of policy to perpetuate or disrupt structural stigma. Participants' lack of awareness of details regarding bullying and harassment policies, restroom rules, dress codes, and gender support plans often shaped their perceptions of how protective school policies were of LGBTQ+ students. For example, while participants might believe that a bullying policy protecting all students existed, they were unaware of its specifics or possible deficits. An IRT lead stated, "As far as our policy, I'm not sure, but I'm assuming that it's in there that we treat everybody equally." Administrators espoused similar views, often claiming that existing bullying policies protected all students, including LGBTQ+ youth.
Administrators and other participants reasoned that existing bullying and harassment prohibitions that did not enumerate sexual orientation or gender identity and expression were sufficient to support LGBTQ+ students. They cited as evidence the fact that students and faculty reported bullying to the administration. For example, one administrator reasoned, "Bullying is bullying, right? And all different types of students experience bullying, so we do have our bullying policy. I think our students feel safe reporting bullying incidents." Other participants cited their current practices as evidence against adopting more robust policies. A second administrator explained, "We don't have specific policies, written policies. They're just unwritten rules that we have in the school that people are aware of." Similarly, a third administrator said, "We've always worked case by case, individual by individual, and work to find the best solutions with that. And so, yeah, I've worked in the absence of policy, but I also felt that we worked-and I've worked, certainly-with the best interest of the students." Without an official policy, attempts to implement changes in schools sometimes boiled down to the whims of school leadership, which was frustrating for participants. In one rural conservative community, the school permitted students to change their names on identification cards, and the administration encouraged teachers to use students' chosen names and gender pronouns. There was no written policy-it did not seem necessary to staff since the school climate was welcoming and supportive. However, when school leadership changed significantly, the new principal single-handedly ended the practice and even advised teachers against using chosen names and pronouns in their student interactions. IRT members claimed they had nothing to stand on to influence the administration to change its stance as no official policy was in place regarding chosen names and pronouns. 
Participants at some schools recognized the importance of policy change to enable supportive action and safeguard implementation initiatives over time. One IRT lead sharing this perspective said, "If we don't put policy in, then when those of us who are here and doing trainings and working with staff leave, then you go back to the old ways. So, policy has to happen to keep things moving forward." The extent to which communication regarding new or existing policies occurred in school communities was questionable, given delays in sharing information and insufficient awareness. An IRT member involved in school policy change underscored the importance of communicating the new policies to staff and students and expressed concern that students may lack awareness: "The word isn't getting out. What I'm going to propose now is that once staff gets that weekly bulletin with the school policies, that they start reading them to kids in the morning and then talking about it, 'What do you think this means?' That's my next step." One IRT successfully changed bullying and harassment policies at its school to explicitly protect LGBTQ+ students. However, over the study's course, this IRT's school experienced significant staff turnover that hampered effective communication about the revised policies' content. Consequently, newer staff, including IRT members and students, were commonly unaware of the revised policies. Without communication, the revised policies fell short in supporting LGBTQ+ students. In 2019, the New Mexico legislature passed the pivotal "Safe Schools for All Students" Act, requiring public schools to adopt bullying policies with explicit protections for LGBTQ+ students. However, we found that many study participants were unaware or only marginally aware of the legislation in the two school years following its enactment. Some IRT members with knowledge of the legislation attempted to offer trainings and disseminate information on the new policy. 
They had varying degrees of success, depending on how proactive their school districts were in supporting and sharing information about the legislation. In conversation with implementation coaches, participants tended to rate the feasibility of policy change related to bullying and harassment lower than other EIPs, such as establishing safe spaces or facilitating professional development for school staff. An IRT member explained that during their first year of implementation, the team had relative ease organizing professional development and safe spaces. However, when this member looked ahead to the next school year, they described impacting policy as daunting: "Now I'm like, 'Oh my god, policy?' Like how are we-? That's intimidating." Participants sometimes characterized policy formulation as the responsibility of high-ranking school leadership or the school district rather than a process they could readily initiate. Despite challenges associated with establishing and enacting protective policies, which typically resulted in the deprioritization of policy implementation, IRTs still found considerable success. For example, bolstered by IRT assistance, a student-led GSA fruitfully advocated for a district-wide policy supporting the use of chosen names in virtual classrooms during the COVID-19 pandemic. They also lobbied the school board to formally affirm LGBTQ+ student rights in response to proposed bans on transgender athletes in high school sports. An administrator noted that in taking these actions, "It puts the school board on record of saying we support all of these initiatives to really accept unified support [and] provide resources for our LGBTQ community." --- Discussion Our findings demonstrate the importance of understanding how power operates in and across outer and inner contexts to bound, shift, amplify, and otherwise shape the way new practices are received, implemented, and sustained. 
Heteronormativity and the structural stigma engendered by it are forms of dominant power exerted through institutions like schools, perpetuating adverse health outcomes for LGBTQ+ youth and constraining intervening actions. Stigma scholars point to structural stigma as the fundamental cause of population health inequities. On the individual level, structural stigma contributes to the psychological processes of minority stress through such mechanisms as experiences of discrimination or concerns about concealment and disclosure of identity. Institutional spaces like schools, as part of their function of producing heteronormative subjects, generate and sustain structural stigma that then impacts the health and wellbeing of LGBTQ+ young people. Heteronormativity represented a pervasive form of discursive power that shaped the perceptions, expectations, and practices of students and school staff-including their aspirations for change-as well as the norms, rules, and institutional structures in which they operated. In this way, heteronormativity and structural stigma influenced how epistemic, discursive, and material power functioned in schools, thus leading us to conceptualize the work of IRTs as an exercise in resistant power. In their work addressing LGBTQ+ equality in primary schools in the United Kingdom, education scholars Renee DePalma and Elizabeth Atkinson distinguish between antihomophobia work and counter-heteronormative work. Our study resonates with DePalma and Atkinson's critique that policy alone cannot instantiate deep, sustainable change. However, our study illustrates that policy is critical to conferring the power and confidence to engage in counter-heteronormative efforts. 
Strengthening policy can be a sustainable way to institutionalize LGBTQ+ affirmation, inclusivity, and protections, outlasting the involvement of any individual or group of individuals and thus supporting on-the-ground actors in challenging critics who wish to prevent or eliminate LGBTQ+ supportive practices. Our findings have several implications for implementation science research and practice. Implementation science has traditionally noted the influence exerted by macro forces on inner contexts and the need to adapt implementation strategies in response. However, RLAS elucidates that power is also diffuse, fluid, and discursive. The pervasiveness of heteronormativity and structural stigma poses a challenge to implementation scientists by forcing us to think beyond the bounds of simple and discrete constructs. In many extant frameworks, heteronormativity would likely be categorized under constructs like "sociopolitical context" or "culture." In the EPIS framework, "sociopolitical context" is chiefly a part of the outer context and "culture" is relegated to the inner context. The Consolidated Framework for Implementation Research traditionally places culture at the organizational level of the inner context. It accounts for individuals' relationships to and attitudes about innovations but not necessarily their attitudes about the populations that the intervention is meant to benefit. The integrated-Promoting Action on Research Implementation in Health Services framework recognizes the interconnected multilayered nature of inner and outer contexts. Yet it also conceptualizes culture as primarily within the inner context of organizational implementation sites. None of the models accurately reflects how heteronormative thinking can shape the discursive, epistemic, and material power involved in the RLAS implementation at every level and stage. 
For example, the acceptability, appropriateness, and feasibility of EIPs often hinged on how heteronormativity shaped participants' perceptions and, therefore, how they used epistemic and discursive power toward or against implementation. For many, it was not outright homophobia or transphobia that made it challenging to implement LGBTQ+ supportive practices, but the belief that schools were already doing enough to support all students or an assumption that specialized supportive intervention would be perceived as "special treatment." Consequently, common implementation science constructs, such as appropriateness, acceptability, and feasibility, encompass more than individual attitudes; they also reflect the institution and wider social context in which interventions are to be implemented. The powerful influence of parents as bridging factors linking inner and outer contexts accentuates the need to consider how such factors can positively and negatively influence implementation processes. Communities exert heteronormative disciplinary power. As described by Foucault, disciplinary power is the chief mechanism through which modern power systems bring subjects into line with dominant standards-in this case, heteronormative standards. In some instances, this power is exercised through overt discrimination; in others, it is expressed as resistance to change. As bridging factors between communities and schools, parents leverage this power over their children and schools through disagreement with LGBTQ+ supportive innovations. Alternatively, they can positively influence implementation efforts by supporting interventions. Implementation science work on health equity offers a corrective to overly simplistic conceptualizations by encouraging greater attention to how higher-level social determinants impact clinical encounters. 
For example, the Health Equity Implementation Framework conceptualizes "societal influence" as shaping context, recipient, and innovation factors and explicitly calls out insidious influences such as racial bias. Taking the recognition of structural causes further, the recent race-conscious adaptation of the CFIR problematizes "race-neutral" construct definitions to show the cross-construct operation of racism and racialization impacting implementation. Heteronormativity, even if it does not appear as overt homophobia and transphobia, operates similarly to how this model describes racism. Scholars in the field have explicitly called for implementation science to identify the root causes of health disparities, such as structural racism and other oppressive power dynamics, as critical to addressing barriers to implementation. These approaches stress the need for formative research to understand the influence of structural causes, the involvement of invested stakeholders, and multilevel and multicomponent strategies to mitigate the impacts of these root causes. The findings of this study vividly illustrate the importance of these efforts. As Foucault asserted, power's diffuse nature means that it is not held exclusively by any single person or group; its relational nature implies that both resistance and dominance operate within the same space. The role that RLAS played in cultivating resistant power in the top-down hierarchies of school environments further highlights a need to complicate understandings of power in implementation science contexts. Our findings show that participants were often able to negotiate and incorporate extant power hierarchies in schools to work toward implementation goals. Many IRTs successfully recruited administrators and other power brokers in their schools to support their efforts, even when these efforts resisted heteronormativity ensconced in school operations. 
These findings underscore the usefulness of garnering leadership buy-in and active support, which implementation science often highlights, and further emphasize the significance of leadership alignment when interventions are focused on marginalized populations. Our results also highlight the crucial role of education as a form of resistant discursive and epistemic power leading to the leveraging of material power. First, education on LGBTQ+ topics helps frame the necessity of EIPs, countering dominant narratives informed by heteronormativity that all students should be treated the same and establishing a knowledge base from which implementers can act. Professional development about the challenges LGBTQ+ youth face can garner sympathetic support from school staff and improve buy-in for implementation. Second, education can help highlight "subjugated knowledges" or other voices in school contexts that would normally not be valued. Subjugated or situated knowledge is highly contextual and local. This knowledge can be contrasted with the standardized knowledge circulating within disciplinary spaces like schools. For example, youth voices in our study were able to shift administrators' views about the safety and supportiveness of their schools. Implementation science has recognized the importance of education in the implementation process. Still, researchers should pay closer attention to how education can highlight situated knowledge to create resistant power. What participants know about their lives, experiences, and social worlds can directly contradict knowledge generated elsewhere, offering innovative solutions to difficult problems, exposing interventions or strategies that will not work in practice, or countering narratives that frame health problems in ways that do not align with reality. Participants were able to counter administrators who claimed that the school already supported "all students" equally. 
Strategies that validate the knowledge of people possessing intimate understanding of schools and people inhabiting LGBTQ+ identities can reverse the dynamics of dominant discursive and epistemic power that contribute to the erasure of their identities and experiences and that make action appear unnecessary in the first place. Strategies that use this resistant discursive and epistemic power can also improve the chances of successful implementation through effective adaptation and tailoring. Relatedly, while implementation science has recognized the significance of champions in garnering buy-in for new interventions, our findings show that cultivating champions in implementation studies can offer participants a sense of empowerment and legitimization that helps them resist the constraints traditionally placed on their roles. Participants' sentiments expressing the empowering nature of being a part of this study suggest that the experience of leadership changed participants' perceptions of their power in schools. Some participants expanded their purview by proactively becoming knowledgeable and aware of what was going on beyond their immediate roles on campus. In so doing, they also gained confidence and a sense of satisfaction that they could make changes that mattered to students. By generating resistant discursive and epistemic power within implementation contexts through education, leadership alignment, and champions, IRTs were able to access the material power necessary for implementation. --- Conclusion Implementation scientists must consider the real and perceived power differentials affecting an organization's readiness to implement an intervention or an individual's motivation to invest in implementation. Concepts such as readiness and motivation implicate individual, relational, structural, and broader contextual factors. 
Improving the likelihood of successful implementation requires recognizing these factors and addressing the full ecology of the implementation environment to avoid overemphasizing individual capabilities and efforts or dismissing implementation challenges as the consequence of individual or team failings. Finally, in promoting efforts to improve health equity, implementation scientists must support implementers in leveraging resistant power to counter the institutional structures and social norms that perpetuate inequities, like heteronormativity. Implementation scientists and practitioners need to think beyond the fit of interventions with contexts to consider the productive nature of interventions that challenge and disrupt-or otherwise do not fit-institutional processes. The challenges facing such efforts are formidable in environments shaped by hierarchical governance structures that control material power, such as schools. In these contexts, deploying strategies that generate and leverage resistant discursive and epistemic power may be key to obtaining material power for resistant purposes. Strategies like cultivating champions, education and training, building capacity, aligning with leadership, and enacting policy are practical ways to bolster resistant power in schools to support and sustain EIPs. --- Data availability statement The data that support the findings of this study are available from the study Principal Investigator upon reasonable request. --- Ethics statement The studies involving human participants were reviewed and approved by the Pacific Institute for Research and Evaluation Institutional Review Board. The patients/participants provided their written informed consent to participate in this study. --- Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. 
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
and Willging CE ( ) Power and resistance in schools: Implementing institutional change to promote health equity for sexual and gender minority youth.
identify the computational feasibility of addressing many of the challenges of traditional GBV monitoring. We review the challenges of traditional monitoring to provide a benchmark for assessing the success of our approach. Gathering statistics on GBV episodes is time consuming, collected under non-standardized protocols, and published in highly aggregated form. For example, the non-partner sexual violence prevalence data published by the World Health Organization is dated 2010 and reported by 21 aggregated geographic regions such as West Africa and South Asia. While the United Nations Office on Drugs and Crime data are less aggregated, and somewhat more recent with some data available as recently as 2012 and separated by country, the data are sparse and incomplete. For instance, sexual violence data are available for the Philippines and Nigeria for 2012, but the most recent data for India are from 2010 and no data are presented for South Africa. The lag itself prevents monitoring change, whether to detect unexpected increases in GBV or shifts in attitude due to recent events and mitigation efforts. Moreover, the conventionally available statistics reported above reflect legal definitions, making direct comparisons among countries impossible due to differences in the definition and recording of offenses.

Table 1. Tweet samples from our analysis with implications to inform and design targeted GBV intervention campaigns. M1-M3 illustrate existing automatic analysis capability, while M4-M8 result from partially automated analyses presenting the case for expanding computational methods.

M1: RT @USER1: 1 in 3 women are raped/abused in their lifetime. RT if you rise to stop the violence. #1billionrising http://t.co/lXEEmQoLbO
Implication: Volumetric analysis of social media can help measure population engagement and effective penetration of designed campaigns in the community.

M2: #StopRape Rape Crisis says many survivors of sexual abuse and assault still don't feel confident in the criminal justice system. CS
Implication: Location analysis permits the identification of message origin and routing to appropriate agencies.

M3: @USER2 Takes a MOMENT 2 Sign & ask Others 2,so DV & Rape Laws become Equal. TOGETHER we can Change History: http://t.co/ylWtl5PgCI
Implication: Gender detection analysis of message authors supports adjustment of apparent population GBV attitudes by gender and suggests the content of anti-GBV policy and campaigns.

M4: RT @USER3: Rape prevention nail polish sounds like a great idea but I'm not sure how you're going to get men to wear it
Implication: Content analysis sensitive to sociocultural considerations permits the assessment of subtle measures such as the role of humor, sarcasm, and despair.

M5: RT @USER4: 15 yard penalty for "unnecessary rape" http://t.co/yhzxtYzGP0
Implication: Metaphor analysis indicates the acceptance of GBV, echoed for example in sports, suggesting opportunities for specific anti-GBV campaigns.

M6: Valentine's day is really helping me sell these date rape drugs
Implication: Entity recognition based on knowledge bases of GBV entities can help identify precipitating events a priori to design preventive campaigns.

M7: @USER5 WB Govt must have ordered Police to protect the family of Rape-Victim. It is shameful for Mamata Banerjee O GOD GIVE WISDOM TO ALL
Implication: Organization detection, including government, law enforcement, and commercial entities in relation to GBV, can inform policy; e.g., M7 indicates a potential lack of police protection for a victim's family in West Bengal.

M8: RT @USER6: It is not my job to coddle and "educate" young Black men when it comes to violence against women. Y'all wanna "teach"?
Implication: Modeling of stereotypical association can inform the design of targeted campaigns; e.g., the author of M8 stereotypically associates GBV with Black men.

Note. We anonymized user mentions as per the IRB guidelines.

Apart from the logistical problems of gathering GBV data, the data have validity limitations. Reliance on formal reports to law enforcement risks under-reporting by victims and witnesses who may believe that domestic violence is a private matter. Aggregation/generalization across localities with different socio-economic properties masks important trends because sociocultural context including politics, history, religion, and economy strongly influences attitudes. Researchers suggest the need for different prediction and mitigation models for different sociocultural contexts. Social science survey research methods complement the law enforcement data. However, social science survey data also have several limitations. Surveys reflect sampling biases and various confounds. We highlight four limitations of survey data of particular relevance to Computational Social Science methods. First, survey items tend to address attitudes but not behavior, and therefore bear an unclear relationship to the rate of GBV episodes. In contrast, inasmuch as verbal abuse constitutes a form of violence, social media posts can provide actual instances of this behavior. Second, the methods fail to account for transient global events, such as political or celebrity activity, that can influence views and responses, hindering comparison over time. Third, the items themselves presume an established theory and standard measures of GBV, limiting the opportunity to discover latent patterns that reflect attitude and behavior. For example, metaphor is a powerful reflection of public opinion, but to our knowledge has not been explored in survey measures. 
Fourth, survey methods constitute a highly labor-intensive data collection method, affording small samples while imposing cost and lag in data availability in a dynamic world. We will return to these issues when we evaluate our own approach. Social media promises a faster, cheaper, and face-valid means to engage the public, providing unprecedented large-scale access to public views and behavior. It provides an ability to monitor attitudes in near real-time and to support timely measurement and mitigation efforts. However, while use of social media provides speed, participation, and cost advantages, the absence of controlled sampling necessitates adjustments for demographics, including literacy and Internet infrastructure. Furthermore, we require region-specific models to accommodate sociocultural differences in the definition and expression of GBV issues. Ultimately, all three resources require integration in order to assist policy design, prioritize attention for interventions, and design region-specific programs to curb GBV. A logical first step is to demonstrate the potential of social media for GBV monitoring and the design of mitigation and policy. --- Study Design Overview Based on the suggestions of our UNFPA collaborators to identify a GBV-related corpus, we selected three major themes that encompass gender violence concerns: physical violence, sexual violence, and harmful sociocultural practices. Corresponding to these three themes, we created a seed set of keywords for data crawling from the Twitter Streaming API. We also selected four countries with suspected elevations in GBV suggested by UNFPA experts: India, Nigeria, the Philippines, and South Africa, in addition to the U.S., which had been subjected to preliminary analysis in Fulper 2014. Below, we examine tweeting practices by geography, time, gender, and events. The five countries present different contexts both for understanding social media data pertaining to GBV and for mitigation efforts. 
Figure 1 summarizes their variability across some key contextual dimensions: the education gap between genders, the penetration of the Internet, and overall literacy rates. The figure illustrates the clustering of Nigeria and India for lower literacy rates, a greater education gap, and lower Internet penetration. South Africa and the Philippines cluster with the U.S. regarding overall literacy and the reduced education gap, but reflect a diverging range of Internet penetration. The graphs suggest the risk of sampling bias that affects data interpretation: an illiterate female citizen with no access to the Internet may not be providing social media data, biasing the aggregated measures of attitude. Other differences not presented may also be relevant. For example, India is among the top 20 of over 140 countries regarding female political empowerment, while Nigeria is below average on this dimension. Additional influences on the use of social media not depicted in the figure include cultural influences on free speech. For example, Nigerians may avoid public conversation about the Boko Haram atrocities due to fear of revenge. In the next section, we employ quantitative and qualitative analyses to examine Twitter content related to GBV. Twitter supports the distribution of short messages called tweets that are a maximum of 140 characters in length. The character limit influences message style and constrains communication practices. Therefore, tweets often contain URL links to web pages or blogs, sometimes relying on shortened URL versions from external services. A hashtag convention supports the identification of searchable user-defined topics. Other Twitter engagement features include retweeting. The electronic device used to post a tweet may provide accessible, precise location indicators in some cases. Alternatively, accessible user profiles provide more general indicators of location and sometimes gender indicators such as author name. 
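The message features just described (hashtags, user mentions, embedded URLs, and retweets) can be recovered from raw tweet text with simple patterns. The sketch below is illustrative only and is not the extraction code used in the study:

```python
import re

def tweet_features(text):
    """Extract basic engagement features from a raw tweet's text:
    hashtags, @-mentions, (possibly shortened) URLs, and retweet status."""
    return {
        "hashtags": re.findall(r"#(\w+)", text),
        "mentions": re.findall(r"@(\w+)", text),
        "urls": re.findall(r"https?://\S+", text),
        "is_retweet": text.startswith("RT @"),
    }
```

For example, `tweet_features("RT @USER1: rise to stop the violence #1billionrising")` flags the message as a retweet, with @USER1 as the mentioned user and #1billionrising as its hashtag.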
A corpus of social data collected over a period of ten months included nearly fourteen million GBV tweets. In this corpus, we examine volume, location, trends over time, gender participation, and content such as metaphor and humor. Our analyses present both challenge and opportunity to study the phenomenon of gender-based violence. Challenges concern the need for computational methods to discern public perception and attitude from complex contextualized behavior. Opportunity lies in gaining fine-grained, region-specific insights concerning the prevailing GBV attitudes and related policies along with potential approaches to mitigation. --- [2.] METHOD Below we describe our data collection, followed by a description of the analysis approach. --- Data Collection Based on domain expert guidance, we collected data from the Twitter Streaming API, using its 'filter/track' method for the given set of keywords. We used the keyword-based crawlers of our Twitris platform, where the Twitter Streaming API provides the relevant tweets for a given set of keywords. The crawling method contained processing for calls to the filter/track method of the API for data collection, extraction of relevant metadata, and database storage. Consultation with UNFPA domain experts informed the required keyword lexicon for input to the data crawlers. These experts assisted in the definitions and associated terminology corresponding to the three themes of interest for GBV study: physical violence, sexual violence, and harmful practices. For each single keyword K, Twitter provided messages containing any form of the keyword (#K, k, K). For a multi-word phrase K, the service provided messages that contained all of the terms in K, regardless of order. 
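As a rough sketch of the filter/track matching semantics just described, where a single keyword matches any form (#K, k, K) and a multi-word phrase matches when all of its terms appear regardless of order, one might write:

```python
import re

def matches_track(tweet_text, keyword):
    """Approximate the 'filter/track' matching rule: every term of the
    (possibly multi-word) keyword must appear in the tweet, case- and
    hashtag-insensitively, in any order."""
    tokens = [t.lstrip("#").lower() for t in re.findall(r"[#\w]+", tweet_text)]
    return all(term in tokens for term in keyword.lower().split())
```

This is only a local approximation useful for sanity-checking a keyword lexicon; the actual collection is performed server-side by the Streaming API itself.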
Each message is returned with associated metadata containing various tweet-related and author profile-related characteristics, such as tweet origin location, time of posting, author profile description and location, number of author followers, and followees. Location metadata was crucial to geographical analysis. We first checked whether tweet origin latitude-longitude coordinates were available from the device used to send the tweet. Otherwise, we resolved the author profile location, if available, using calls to the Google Maps API service. Author profile location supplements the sparse device origin location metadata, apparent in the pilot analyses. Author profile location provides an explicit indication of nationality, of particular interest to the analysis of region-specific GBV behavior and attitude. We used a bounding box of latitude-longitude for a country of interest to identify a country-specific tweet dataset. Using the Genderize API, we collected genders for tweet authors. We first fetched the real names of the Twitter authors using metadata for their Twitter handles. We then extracted first names to detect author genders via calls to the Genderize API with first names as parameters. --- Analysis Approach With UNFPA guidance, we analyzed the data corpus of 13.9 million tweets over the ten months in non-uniform time slices, starting with a smaller pilot phase and adding two additional extended data collection periods due to encouraging results from the pilot data. To study the diverse set of data from the three phases, we employed mixed methods to reveal both patterns across the corpus as well as the content of specific contributions. The focus of quantitative analysis is to provide data-driven insights into activity patterns in the social media community by examining large-scale distributions of GBV content by geography, time, and gender. However, the recovery of the meaning of a pattern requires more fine-grained analysis than mere statistical distribution. 
Therefore, the focus of our qualitative analysis is to reveal attitudes and behaviors across different countries and between genders. In both cases, our interpretation relies on context, regarding current events and sociocultural considerations, ultimately supporting the need for context-sensitive computation for monitoring GBV content in social media. --- [3.] RESULTS --- Quantitative Analysis We discuss four types of quantitative analyses in this section, with respect to volume, theme, information sharing, and gender. Volume analysis. To begin our study, we sampled an initial slice of data covering 1.5 months from all over the world, which contained 2.3 million tweets related to GBV. For brevity, we skip the broad descriptive statistics regarding this sample and instead summarize some key observations. Regarding our interest in location-specific GBV data, we note relatively few tweets with device-based location information. However, author profiles provide location-related information as well. Overall, more than half of the data, from 1.3 million users, had location information. The demonstrated feasibility of data collection motivated our more extensive data collection effort, spanning an additional 8.5 months. More than 11% of the 13.9 million tweets belonged to the five countries examined here for the volume analysis by country. Table 3 provides the composition of the full data set by country. We note more than five times the traffic in the U.S. and India relative to the other countries. Moreover, the observed frequency ranking differs from the population demographic information for these countries. We suspect that Internet penetration is influencing data collection. Due to these scale differences, a simple frequency graph of all of the raw data over time would mask variability in the Philippines, Nigeria, and South Africa, which have smaller populations and variable Internet penetration.
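One plausible form of the adjustment for these scale differences is to express each country's raw count as tweets per million Internet users. The figures below are hypothetical placeholders, and the scaling in Figure 2 may differ in detail.

```python
def scaled_volume(tweet_count, population, internet_penetration):
    """Scale a raw country tweet count by population and Internet
    penetration: tweets per million Internet users. One plausible
    normalization; the study's Figure 2 may use another form."""
    internet_users = population * internet_penetration
    return tweet_count / (internet_users / 1_000_000)

# Hypothetical inputs: raw GBV tweet count, population, penetration rate
countries = {
    "A": (500_000, 320_000_000, 0.85),
    "B": (400_000, 1_250_000_000, 0.20),
}
rates = {c: scaled_volume(*v) for c, v in countries.items()}
```

Under this normalization, a country with a small online population can rank above a country with far more raw tweets, which is exactly the re-ordering effect discussed for Figure 2.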
Figure 2, therefore, scales the raw data by two factors: population and Internet penetration. This allows us to consider the relative prevalence of GBV topics between countries and over time. Below we discuss some of the emergent patterns by country. We highlight the need to interpret these patterns with respect to a broad and complex knowledge base that includes current events, precisely the sort of analysis that is best accomplished with computational tools. Figure 2 appears to suggest that South Africa generates the most GBV-related tweets overall, with the U.S. and India lagging behind by roughly the same amount, and with Nigeria and the Philippines toward the bottom. However, we note that this cross-country pattern is sensitive to the form of the adjustments made for population and Internet penetration. Trends over time within a country, however, are not sensitive to such adjustments and are equally, if not more, important. We make this point to illustrate the need to consider sociocultural influences on social media traffic. Several current events may explain the apparent peaks over time, revealed during a parallel Google Trends search by country. In the Philippines, the ongoing saga of Vhong Navarro dominated much of the discussion. Navarro, a television personality, was assaulted in his home in January of 2014, apparently in retaliation for attempted rape. The incident stirred concerns over rape culture, as many questioned the motives of the female accuser. Twitter traffic reflected these issues. However, the American movie star Jennifer Lawrence also factored highly in the most-searched items in the Philippines, regarding the hacking of her private phone account. Between March and July, a vehemently anti-LGBT figure named Myles Munroe ranked among the most-searched topics in Nigeria. The largest Nigerian event from the United States perspective factored only ninth in Nigeria.
"Kidnapping" searches initially increased approximately four-fold in the month and a half following this incident, but then dropped to previous levels. Given the responsible group's influence and tactics, we suspect citizens may have been reticent to discuss any wrongdoing openly via social media. South Africa's Google searches reflected considerable violence12 . Among the top events driving traffic was the murder of a prominent soccer player in October during a burglary at his home. Oscar Pistorius also influenced the search patterns. The trial for the murder of his girlfriend began on March 3 and temporarily adjourned on May 20. The trial resumed on June 30 and lasted through August 8, with a judgment on September 12 and sentencing on October 21. At least two of the peaks in GBV-related Twitter traffic are coincident with these events. These observations suggest that any quantitative analysis of social media traffic must control for current events, in order to separate fundamental trends from local variability. Furthermore, the patterns suggest that we will require separate models of GBV in social media, by country. Such control is well within the capability of computational analysis using topicmodeling approaches. We note apparent correlations between certain countries: India and the U.S., as well as the Philippines and Nigeria at least for the bulk of the data. We observed a 48.1% overlap in the popular topics of the U.S. within the set for India for the month of August. Topics to determine overlap were based on the top 500 key-phrases extracted using tf-idf based method . Such content overlaps suggest the presence of an underlying, latent variable responsible for the observed relationship. Demographic properties of the countries investigated suggest the identity of this variable. We note a large Indian diaspora living in the U.S., potentially reacting to events originating in India. 
For example, the story of a 15-year-old Indian female acid attack victim received considerable attention following an Al Jazeera report in January. The shared August peak coincides with an Independence Day speech by the Prime Minister of India, in which he urged suspension of the practice of questioning the families of girls, but never of boys, regarding their social habits. Tweets with apparent origins in the U.S. may actually constitute an amplification of tweets originating in India, as the following tweet illustrates. Note, however, that the hashtags at the end differ, demonstrating the role of user engagement and amplification regarding conventional media. India: What world calls shining #India is the worst place for women in terms of #Rape http://t.co/F2oyKps60R #BanBollywood #MediaMafia US: What world calls shining #India is the worst place for women in terms of #Rape http://t.co/zZoB17p1L1 #BanBollywood #PakMediaHijacked The U.S. and Indian patterns illustrate an important challenge in the interpretation of social media data: not all trends necessarily reflect local events. Computational tools for monitoring GBV will need to conduct location analysis from the text to distinguish commentary about other countries. Model building can accommodate this distinction. However, more fundamentally, given the tweet pedigree, it is not clear to which country we should attribute the attitude. Theme analysis. Our original data collection categorized the tweets into three thematic groups: physical violence, sexual violence, and harmful practices. We used a rule-based classification approach, in which a tweet was considered relevant to a specific content theme if it contained any sequentially ordered pattern based on the domain-expert-provided keyphrases for that theme. Sexual violence dominates the sample of GBV tweets. We provide the distribution of the sexual violence sample over time for the five countries in our study in Table 4.
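The rule-based theme assignment described above can be sketched as follows. The keyphrases shown are illustrative stand-ins, not the actual domain-expert lexicon; "sequentially ordered pattern" is interpreted here as the keyphrase terms occurring in order, possibly with intervening words.

```python
import re

def contains_ordered_pattern(text, keyphrase):
    """True if all terms of the keyphrase occur in the tweet in the
    same sequential order (other words may intervene)."""
    words = re.findall(r"\w+", text.lower())
    pos = 0
    for term in keyphrase.lower().split():
        try:
            pos = words.index(term, pos) + 1
        except ValueError:
            return False
    return True

def classify_themes(text, theme_keyphrases):
    """Assign every theme whose keyphrase list matches; themes are
    not mutually exclusive."""
    return {theme for theme, phrases in theme_keyphrases.items()
            if any(contains_ordered_pattern(text, p) for p in phrases)}

THEMES = {  # illustrative keyphrases only
    "sexual violence": ["rape", "sexual assault"],
    "physical violence": ["domestic violence", "beaten"],
    "harmful practices": ["child marriage", "fgm"],
}
```

Because assignment is per-theme rather than exclusive, a single tweet can contribute to more than one thematic count.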
We note a general decrease over time, with some variability between countries. As suggested above, we suspect early spikes related to incidents like the Vhong Navarro rape accusation in the Philippines and the initial publicity surrounding the Boko Haram atrocities in Nigeria. Sharing behavior analysis. Social media provides the opportunity to distribute information, potentially reflecting both the senders' judgment of information importance and their reliance on the voice of others. Sharing functions to amplify these voices, often the voices of influential celebrities. We analyze two types of sharing behavior in the social media community surrounding GBV events: direct content resharing as a retweet, and indirect sharing via URL references to external resources, such as news, blogs, articles, and multimedia. We calculated the percentage of GBV retweets relative to the total count of tweets for each data sample, as shown in Table 5. More than 40% of the GBV corpus consists of retweets in the U.S., India, the Philippines, and South Africa, amplifying information that senders consider important. For comparison, Liu, Kliman-Silver, and Mislove found that retweets generally constitute just over 25% of the total volume of tweets. Although we note variability in retweet behavior between countries, the low retweeting frequency in Nigeria is particularly remarkable. One might hypothesize that a low-literacy country such as Nigeria, in which senders are less able to compose messages, would have the highest retweet ratio. The adjacent analysis of the proportion of URL references with respect to the total corpus suggests a different sociocultural phenomenon at work, concerning the identifiability of the responsible party. For GBV tweets containing URLs, Nigeria has the highest percentage in comparison to the other countries.
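The two sharing percentages above can be computed from simple surface cues. This is a sketch over raw tweet text; the sample tweets are illustrative, and the real pipeline can instead use the retweet and URL fields in the tweet metadata.

```python
def sharing_stats(tweets):
    """Percentage of retweets and of URL-bearing tweets in a sample,
    using surface cues: an 'RT @' prefix for retweets, and an
    http(s):// substring for external references."""
    n = len(tweets)
    if n == 0:
        return {"retweet_pct": 0.0, "url_pct": 0.0}
    rts = sum(1 for t in tweets if t.startswith("RT @"))
    urls = sum(1 for t in tweets if "http://" in t or "https://" in t)
    return {"retweet_pct": 100.0 * rts / n, "url_pct": 100.0 * urls / n}
```

Surface cues undercount quote-style reshares and shortened links without a scheme, so metadata-based counting is preferable when available; the sketch only shows the shape of the Table 5 computation.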
Numerous explanations can be tested, including literacy, credibility of the public press, and the possibility that reliance on external resources somehow reduces the threat of being identified as the responsible party. Author gender analysis. We obtained gender identification for 37% of the users affiliated with the target countries. The reduced percentage is due to names missing from the Genderize API lexicon, as well as to unconstrained natural language features of social media content, such as the use of special characters in names; for instance, '@@shish' instead of 'Aashish', a male Indian name. (Filtered = where an author gender could be determined.) Keeping in mind that statistically significant differences are virtually certain with a large sample dataset, the distribution of gender appears approximately equal in Table 6. We also note a corpus tweet frequency average of 2.352 tweets per female author, versus 2.472 tweets per male author. However, based on the name classification procedure that we employed, Figure 3 separates the gender distribution for the examined countries and creates an impression of gender inequality in the GBV content corpus. We note a discrepancy with Pew Research Center findings suggesting equal participation between genders in the United States. While GBV activists might be hiding their identities in developing countries such as Nigeria, this is an unlikely explanation for the observed U.S. gender effect regarding the prevalence of GBV content. Literacy serves as a partial explanation for the observed ratios, except in the U.S. Whatever the explanation, opinions collected in the U.S., India, and Nigeria reflect a male bias, while opinions collected from the Philippines and South Africa are more balanced, or even reflect a slight female bias.
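The first-name extraction step, and the failure mode on decorated handles like '@@shish', can be sketched as below. The name-to-gender lexicon is a hypothetical stand-in for the Genderize API, and the normalization is a simple illustration rather than the study's exact procedure.

```python
import re

def extract_first_name(profile_name):
    """Normalize a display name before a Genderize-style lookup:
    strip non-alphabetic decoration, then take the first token.
    Returns None when nothing name-like survives."""
    cleaned = re.sub(r"[^A-Za-z\s]", "", profile_name)
    tokens = cleaned.split()
    return tokens[0].capitalize() if tokens else None

def detect_gender(profile_name, lexicon):
    """Look the normalized first name up in a name->gender lexicon
    (a stand-in for the Genderize API call in the pipeline)."""
    first = extract_first_name(profile_name)
    return lexicon.get(first) if first else None
```

Note that stripping decoration cannot recover 'Aashish' from '@@shish' (it yields 'Shish'), so such names still fail the lookup; this is exactly the coverage loss behind the 37% identification rate discussed above.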
These observations have implications for the assessment of GBV attitudes, the general reach of anti-GBV campaigns using social media, and the ability to target potential perpetrators and activists separately for engagement. --- Qualitative Content Analysis Thus far we have described the corpus with respect to dimensions that we anticipated at data collection: country, theme of GBV event, sharing methods, and gender. We have made suggestions about the surrounding context, including transient events that might explain the observed patterns. In the remaining analyses, we look more closely at message content for indications of GBV attitudes and behavior, to clarify the requirements for future computational analysis capabilities. --- Language indicators. Using Linguistic Inquiry and Word Count (LIWC) software, we analyzed the language of the content of all tweets generated by both genders. We used the predefined LIWC dictionaries that tally word frequencies in categories such as anger, sexuality, sadness, health, etc. Content corresponding to these particular categories, and in fact content across the majority of the standard LIWC categories, appeared more frequently in tweets of male origin relative to tweets of female origin. However, we did note some LIWC categories in which females trended higher. As these are the unusual instances, they are the ones on which we will focus, leaving the potential category-by-country-by-gender interactions for future study. Consistent with the research on gender issues in communication, female authors here are more collective and socially oriented. Their tweets call for action and are more likely to express or solicit agreement. Female: @USER7 Absolutely. If we follow each other I can DM you my email address. I applaud your speaking out on the rape epidemic in SA. Female: I am worried abt our approach to d fight against rape. Permit me to vent here.
@USER8 @USER9 @USER10 @USER11 #CurbingRape Female authors are also more likely to offer opinions on causality, as in the following example tweet from India: Female: The major factor behind #Rape in #India is the #Bollywood which incites feeling 2 cross the moral limits #BanBollywood #GreaterPakistan Gender-specific analysis can be leveraged to design and promote anti-GBV campaigns. For example, tweets of female origin in India, although not guaranteed to be benevolent, could be amplified to extend their reach. Following a manual review of a random sample of the corpus, we used computational analyses to describe the prevalence of attitude indicators across the whole corpus. Two of the attitude indicators examined here are the presence of humor indicators and of GBV metaphors in sports. We also provide specific examples of tweet content from manual analysis of a random sample. Humor indicators. Humor and related sarcasm might indicate a trivialization of an issue or an expression of underlying helplessness. We assessed the prevalence of humor references with related permutations of "haha" and "hehe". The number of such humor-flagged tweets by country appears in Table 7. The Philippines sample provides by far the greatest proportion of such humor indicators. This is consistent with Filipino culture in general. "Haha" and "ha" may play a different role than "hehe": the former constructions appear in conjunction with question marks and suggest sarcasm, while the latter may not have these properties and may function more like true laughter. For example, the following tweet from the Philippines questions the veracity of the plaintiff, pointing to an unrealistic timeline: : RT @USER12: Panong mararape si roxanne cabañero kung nasa concert si vhong with vice? Haha! Tangina throwback rape na nga long distan… The following tweet, with its "Hehehe", notes that the slender appearance of the subject in an old photo creates the impression of a rape victim: : Throwback!!!
Nung ako'y naging isang payatot na rape victim. Hehehe http://t.co/va4qQStQcm Certainly, not all sarcasm will be marked with punctuation and laughter icons, as in the following example: : RT @USER13: Rape prevention nail polish sounds like a great idea but I'm not sure how you're going to get men to wear it Presenting a similar challenge to computational identification, the following example suggests mock pride in the Philippines as a record holder for the world's fastest rape, but without the punctuation and laughter indicators. : @User14 #PROUDPinoyKasi hawak pa rin ng Pilipinas ang Guinness Record para sa pinakamabilis na rape! Woohh! Thanks Deniece! Nakakaprou… Such constructions are not as compatible with screening searches based on lexical items and punctuation. We interpret the lexical markers as potentially correlated indicators of jokes that would be much more difficult to identify, not only with respect to semantic processing but also cultural nuance. Furthermore, while real semantic differences may distinguish the usage of "ha" and "he" constructions, including nuances related to length, we need not make such distinctions in order to call attention to the prevalence of these constructions and their general implications for GBV attitudes and behavior, regarding the veracity of the plaintiff or the trivialization of rape. Nevertheless, due to cultural differences in the role and expression of humor, we cannot advocate direct comparison between countries. Our keyword approach may underestimate the prevalence of humor outside of the Philippines. Moreover, we suspect the need for different humor indicators and models between countries. However, we do suggest that the prevalence of GBV humor may provide a useful metric for changes in attitude over time within a region. The observed female bias in the Filipino sample reinforces the interpretation of humor as an expression of helplessness.
: @User15 no place is /THAT/ safe and I don't really want to take the risk especially when I'm alone :((( rape and robbery are very" In support of this interpretation, we note the prevalence of the following construction: "baka ma rape", meaning "you might get raped". : #XXXXXX mag ingat ka huh? Wag ka masyado mag pagabi! Baka ma rape! Sige ka! -.- We searched for this construction in the Philippines data, as well as for its English counterpart "you might get raped" in the other countries, with the following results. We identified 717 tweets originating in the Philippines warning of the rape potential. Similar language, exclusively in English, uncovered tweet frequencies ranging from a high of 74 in the U.S. to a low of 46 in South Africa. The Philippines, with less than one third of the population of the U.S., generated nearly ten times more warnings of potential rape. This finding reinforces the need for context- and culture-specific models to study GBV behavior. GBV and sports metaphors. Sports involve competition, and violent metaphor is a common device in sport discussion. GBV metaphors may appear in a sports corpus as an indication of dominance, as in the following example from the South African tweet corpus: South Africa: The German Team is on a Steroid-induced anal-rape rage against Brazil right now. Comparing the prevalence of GBV metaphors in sports across countries is difficult because a given sport does not capture national attention equally across countries. We obtain some insight regarding the GBV metaphor in tweets related to sports from exchanges concerning the FIFA World Cup contest, held from 12 June through 13 July 2014. Soccer is the most popular sport in Nigeria and South Africa, one of the top three favorite sports in India, and a top-ten favorite in the Philippines and the U.S. While all of the countries we examined have eligible teams, only the U.S. and Nigeria participated in the final tournament.
We should not expect any FIFA-related content in a corpus designed to capture GBV issues. Yet, Table 8 illustrates the existence of tweets flagged for containing references to: football, futbol, soccer, worldcup, world cup, fifa, fifacup, and soccercup, as well as pairs of team names participating in the tournament. Although the percentages are small, every country examined provided such instances, in a ranking consistent with soccer popularity, with the exception of the U.S. The finding requires a nuanced interpretation. On the one hand, rape as a metaphor suggests a trivialization of the primary definition. On the other hand, at least one dictionary indicates an archaic definition meaning plunder or violation, for example regarding the environment. Similarly, violent metaphor is a common device in sport discussion, but it is just one of many common metaphors. In fact, in Lewandowski's extensive analysis of violent, conflict-oriented metaphor in football and in sports journalism, none of the 551 violent metaphors invoked rape. A difference in the popularity of sports between countries is only one challenge to direct comparison of GBV metaphor between countries. The development of a knowledge base to support analysis poses a further challenge, since different regions follow different teams with different players. As in the interpretation of humor, we believe the best use of sports metaphor data is to provide indicators of attitude trends over time by region, while adjusting for seasonal variations in sporting events or averaging annually. Policy-making and intervention insights via manual analysis. Our goal is to generate attitude metrics and inform mitigation campaigns automatically. Indeed, computational analysis informed all of the above examples. In contrast, for the following examples, we manually examined random subsets drawn from each of the three slices of our data corpus, of 200, 500, and 500 tweets.
We end our presentation of GBV data by indicating the kind of content available in the corpus that is compatible with somewhat more specialized computational analysis. Developing the necessary content-specific filtering methods is particularly critical for appropriately routing content to specific policy-making recommendation agencies. Some examples follow below: a.) Behavior pertaining to government/public officials/leaders • @USER16 NCP leader doesn't know rape happens due to pervert mindset . what abt unreported rapes happen with minors & toddlers. b.) Commercial considerations • RT @USER17: .@USER18 Please don't allow violent hatemongers use your app to harass and exploit marginalized women. http://t.co/DoRkbXFuN… • Spring Breakers isn't just a terrible movie, it reinforces rape culture http://t.co/pxTAZftM8v • RJ Police already said that it's not a Rape case!But Media neglected it 4 creating SPICY news!#KnowTheTruth & Rise4Justice c.) Persuasive/encouraging message content • RT @USER19: Pregnancy, periods, breast cancer, being walked on, rape, harassment, abuse; females go through a lot. WOMEN ARE STRONG. • @USER20 @USER21 Is it acceptable to use gang rape in an example? d.) Stereotype association • RT @USER22: It is not my job to coddle and "educate" young Black men when it comes to violence against women. Y'all wanna "teach"? Automated filtering of GBV content according to such dimensions is challenging but feasible, given the appropriate knowledge bases. --- [4.] DISCUSSION AND FUTURE WORK Here we revisit the limitations of conventional, survey-based methods for monitoring GBV attitudes to discuss the progress we have made, the limitations we face, and the remarkable feasibility of further advances in Computational Social Science to address the persisting limitations. --- Progress We noted above the limitations of conventional methods in gathering GBV data for specific regions. Data are either highly aggregated at the country level or missing.
We have provided substantial amounts of GBV data for all five of our target regions. User profile location and tweet origin location metadata provide the capability of identifying content specific to much smaller units of analysis, e.g., a type of socio-economic region or even a city. This capability supports targeted GBV campaigns, where the prevalence and types of violations may vary. Below we note progress with respect to a number of additional concerns regarding conventional data gathering methods. Reducing sampling bias. We noted the presence of bias in survey methods due to artificial sample selection. We have overcome some of the bias concern simply by the scope of data collection that is feasible with social media. We have the opportunity to observe male and female attitudes by specific region, over time. However, bias does remain in our data collection methods. Some of the persisting bias is amenable to adjustment. For example, bias exists in the form of literacy assumptions by country, and by gender within country, as well as Internet penetration. Awareness of these sampling issues allows us to amplify the weight of content from underrepresented participants, e.g., the contributions from females in lower-penetration areas. This is possible because we can assess both gender and location in the available data. Of course, we cannot amplify content that does not exist. This limitation is fundamental, but it also identifies the regions where this limitation occurs and where we should focus other methods of data collection. We do acknowledge the existence of automated bot user accounts on Twitter that could have affected our data collection. Given that Twitter has been actively working on controlling spam accounts, and given the sensitivity of the GBV topic, we suspect a smaller contribution from such bot accounts in the GBV-related content. Future work is required to filter bot-generated content from our datasets. Reducing content bias.
We suggested that survey participation itself constitutes a form of bias. Apart from the sampling issues noted above, the posted content on social media itself reflects a form of bias. Participants are still providing what they consider to be socially acceptable content. This is particularly apparent in the absence of commentary regarding Boko Haram and GBV in Nigeria. However, this observation alone can inform the course of action for policy makers, e.g., publicizing sensitive topics. Nevertheless, the available social media postings are not responses to external queries about attitudes. Instead, they express attitudes directly, making them susceptible to analysis. Analysis speed. We have over 1.5 million social media postings collected in 2014 from just five countries to support the claim that GBV issues generate commentary. Moreover, we have completely eliminated the need for labor-intensive data collection, and as a result have overcome the cost and lag limitations in data collection. We produced analysis within a year, in contrast to survey methods, despite the absence of a complete automated capability. This rapid turnaround enables the development of dynamic metrics to assess the results of campaigns designed to curb GBV. A rapid measurement capability can play a role in the promotion of effective efforts and the abandonment of those that are less effective, with real consequence for the alleviation of human suffering. GBV attitude and behavior metrics. Survey items tend to address attitudes but not behavior. Social media data provide both attitudes and behavior, inasmuch as jokes and metaphor are both behavior and attitude. This provides us with potential measures of tolerance for GBV. Thus, the editing of socially acceptable content that constitutes a form of bias in data collection is the very same behavior that tells us what is considered acceptable.
This provides a means to measure the effectiveness of anti-GBV campaigns, both those directly targeted by the potentially offensive jokes and metaphor and those targeted by specific but apparently unrelated concerns, such as the effect of holidays or weather. Survey methods are, by design, static. Standard measures purport to provide evidence that is comparable across time and regions. However, standardization ignores the effect of context. Social media trends over time tell us that context cannot be ignored. For example, the publicity surrounding a celebrity's involvement in a GBV-related event spikes social media commentary. The availability of such events may very well influence survey responses. But standalone surveys have no way to account for this influence. Our computational social science methods allow us to complement the interpretation of GBV commentary with adjustments for the influence of events at a short time scale, in order to discern long-term trends. Finally, and completely outside the typical survey content, the analysis of social media also promises to assist GBV campaigns, both by targeting the views of specific groups and by providing content recommendations regarding law enforcement, politics, health services, and commerce. --- Limitations & Challenges All data collection methods suffer from limitations, and ours is no exception. Here we note several concerns, indicating which are amenable to computational solution and which are more fundamental to all data collection and interpretation methods. Unconstrained natural language text. Our keyword-based crawling limits the completeness of the resulting corpus. Given the natural language of social media messages, we cannot guarantee collection of every single relevant message. Moreover, keyword selection matters. Countries vary in the terminology they employ for different purposes.
For example, the word "rape" in tweets that originate in India generally refers to events in India, but "sexual assault" in the Indian corpus returns mostly American incidents. We are further constrained at present by a restriction to the English language. The word 'rape', common to Tagalog and English, along with laughter indicators, enabled analysis of the Philippines database. Furthermore, our dependence on keywords glosses over the different definitions of rape across cultures. Global event sensitivity. Both a feature and a limitation, we demonstrated the influence of events on social media content, both within a region and between regions. World events provoke the articulation of public opinion and provide an unprecedented large-scale opportunity to gather opinion and attitude. At the same time, world events create variations in magnitude that require adjustments to frequency counts in order to reflect enduring trends over time. Inter-regional comparison. We commented earlier that differences in the legal definition of GBV hinder comparison of GBV rates between countries. While we believe our measurements within a region can be informative about attitude change over time, the ability to compare GBV issues between regions still poses a substantial challenge. Our two proposed measures, sports metaphors and jokes, illustrate the challenge. A given sport is not equally important across countries, so the prevalence of more frequent violent sport metaphor in one country relative to another may simply reflect population interest in the sport. Allowing the type of sport to vary between regions confounds sport with region, so that we might be learning more about issues with the sport than with the region. The meaning of jokes notwithstanding, we cannot compare the prevalence of GBV jokes between countries without factoring in the prevalence of jokes between countries in general.
Thus, we have not escaped the local, sociocultural influences on measures that prevent between-region comparisons. Our contribution to this issue is to establish that it is not a limitation of specific measures such as police reports and surveys; all measures reflect these sociocultural influences. Location identification. Location references in the messages are not always consistent with the location of the author profile or the GPS coordinates of the source device. U.S. events appear in the opinions of people in other countries, and vice versa. The technical aspect of this problem resolves with techniques that discern event location from the message text or ancillary URL content. This is not necessarily without remaining uncertainty, as people often assume a shared context for location identification, e.g., with abbreviated names for familiar landmarks resulting in referential ambiguity. However, a far greater concern lies in the attribution of attitude when the content corresponds to a remote event. Correlational logic. Our argument has a correlational component. For example, we can identify indicators of humor such as "hehehe" and "hahaha" using computational methods. However, automatically detecting every instance of a joke embedded in social media content is not computationally feasible. We assume that the frequency of humor keywords in a GBV context correlates with the frequency of GBV jokes without such keywords. Although this assumption is worth confirming using a manual classification of jokes, we note that the presence of laughter indicators alone in this context is potentially offensive. --- Future Work We demonstrated the potential of social media to inform policy makers regarding attitudes and behavior, to measure the effect of campaigns, and even to provide campaign concepts. However, much technical, theoretical, and practical work remains to realize the full potential of the medium for the GBV application.
Technical advances are required in order to distinguish message origins from message content and to support more comprehensive gender detection. Substantial work remains in the development of specific knowledge bases to guide the detection and interpretation of jokes, as well as of metaphor and commentary directed at particular entities such as government and business. The knowledge bases must be dynamic to capture the transient events that mask fundamental attitude. Theoretical guidance from social science is required to attribute humor and metaphor to despair, or to tolerance or the lack thereof. The interpretation of retweets across regions raises the problem of attribution in more than a practical sense. The issue is whether and how to weigh the endorsement: as a property of the original sender, or of the endorser who is amplifying the message. Does the endorsement reflect the opinion of the endorser's location, independent of birth origins? Comparison of GBV threats between countries, though raising measurement issues, is not an exclusively methodological problem. Instead, it is a sociocultural and political issue, concerning the articulation of globally established norms to determine the deployment of global resources. We lack the policy expertise to weigh in on such matters. However, we do endorse the development of adaptive regional, and even local, models for GBV behaviors and attitudes. Although a large model encompassing all countries holds a certain appeal, it is fallacious to assume that all regions have the same underlying issues and beliefs. Local socioeconomic conditions, literacy, Internet penetration, gender, crime rates, religious beliefs, liberty, and many other unexamined but constant factors influence the data and their interpretation. Data accessibility for policy-makers. Near-real-time information serves the urgent need to reduce GBV and the associated suffering.
Conventional methods of disseminating public opinion are reflected in written reports covering multiple years and issued with considerable delay following data collection. However, governments, NGOs, international organizations, aid workers, and others require far higher bandwidth access to the dynamic data and analysis in order to measure current public opinion and the effect of anti-GBV campaigns. The Twitris collective social intelligence platform provides a foundation for delivering the necessary high-bandwidth access. Twitris supports the presentation of thematic data along spatial and temporal dimensions; network relationships among people related to the distribution of content; sentiment as well as real-time trends (Figure 4); country-specific demographics including top topics and related tweets; a sentiment-based heat map; fine-grained analysis of expressed emotion; a who-talks-to-whom network for influential users in GBV topics; and a real-time trends dashboard. Twitris analyzes a topic such as GBV and provides real-time, scalable analyses of social data streams for greater insights and actionable information to improve intervention. For example, Twitris can monitor real-time public sentiment and emotional reaction to criminal reports and justice system responses by region, permitting side-by-side comparison with other events. Under user guidance, Twitris automatically identifies the key topics meriting further attention. Twitris' network analysis assists in the measurement of anti-GBV campaign diffusion to gauge campaign effectiveness, as well as in the identification of users who will spread targeted campaigns.
--- CONCLUSION
The study presents the case for advances in Computational Social Science to inform GBV policy design and anti-GBV campaigns.
Big social data complement more controlled but slower survey-based data collection and analysis methods, whose conclusions may become obsolete in a dynamic world that continuously generates noisy data in response to transient events. The lag in surveys and the noisy nature of the data limit the use of conventional methods for measuring changing GBV attitudes and the effects of anti-GBV campaigns. Computational Social Science supports the collection and analysis of a large GBV corpus. To demonstrate the promise of Computational Social Science for this social issue, we analyzed nearly fourteen million Twitter messages collected over a ten-month period. We demonstrated an ability to exploit a large sample to reduce bias relative to conventional data collection. We demonstrated an ability to examine data by region and gender, identifying content such as humor and metaphor that has implications both for the measurement of GBV attitudes and for specific targets of anti-GBV campaigns. Our methods constitute an inexpensive way to engage with citizens at unprecedented scale, including the collection of public views regarding the behavior of government and business, with the potential to revolutionize the conduct and measurement of anti-GBV campaigns.
--- ACKNOWLEDGEMENT
We are thankful to our colleagues at the United Nations Population Fund NYC, especially Upala Devi, Judy Ilag, and Maria Dolores Martin Villalba, and at the Kno.e.sis Center, especially Lu Chen, current research interns Kushal Shah and Garvit Bansal from LNMIIT India, and Maria Santiago, for invaluable continued support in discussion and review of our GBV research.
--- Notes
1 UNFPA's description of GBV: http://www.unfpa.org/gender/violence.htm
2 http://www.bbc.com/news/blogs-trending-31628729
3 http://www.telegraph.co.uk/news/worldnews/asia/india/11443462/Delhi-bus-rapist-blames-his-victim-in-prisoninterview.html
4 UNFPA agency: http://www.unfpa.org/public/
5 EU FRA agency: http://fra.europa.eu/en/project/2012/fra-survey-gender-based-violence-against-women
Public institutions are increasingly reliant on data from social media sites to measure public attitude and provide timely public engagement. Such reliance includes the exploration of public views on important social issues such as gender-based violence (GBV). In this study, we examine big (social) data consisting of nearly fourteen million tweets collected from Twitter over a period of ten months to analyze public opinion regarding GBV, highlighting the nature of tweeting practices by geographical location and gender. We demonstrate the utility of Computational Social Science to mine insight from the corpus while accounting for the influence of both transient events and sociocultural factors. We reveal public awareness regarding GBV tolerance and suggest opportunities for intervention and the measurement of intervention effectiveness assisting both governmental and non-governmental organizations in policy development.
Introduction
Across diverse contexts, one of the most common barriers to using effective family planning methods is the belief that hormonal contraceptives and contraceptive devices have adverse effects on future fertility. In many regions of the world, especially where pressure to bear children is significant, these barriers are pervasive and expressed by both men and women. Historically, these concerns have been dismissed as "misperceptions," but emerging evidence indicates that such beliefs may in fact be rooted in personal experience or observations of others' slower-than-expected returns of fecundity following contraceptive discontinuation. Although previous reviews have generally concluded that one-year pregnancy rates following cessation of contraception are similar across a range of contraceptive types, recent studies from high-income countries have indicated that some contraceptives might impact fecundity, especially in the short term. A 2020 study by Yland et al. using prospective cohort data collected in Denmark and North America found transient delays in return of fecundity among women who stopped use of oral contraceptives, the contraceptive ring, and some long-acting reversible contraceptives compared with barrier methods, with the largest decreases in fecundability among injectable and patch users. Importantly, and in contrast to prior studies, the authors employed a time-to-pregnancy study design for estimating fecundability, or the probability of conception per menstrual cycle, which is recommended to assess biologic fertility in a population. A key question is the extent to which the results from the Yland et al.
study, which was conducted among individuals planning a pregnancy in Denmark and North America, generalize to women in low- and middle-income countries (LMICs), given several key differences in the contraceptive and fertility landscapes between high-income countries and LMICs. First, contraceptive formulations, which refer to the types of active ingredients and doses found in hormonal methods, are not uniform across settings. These formulations are linked with different mechanisms of action and rates of metabolization in the body that may influence the return of fertility following discontinuation. Second, there may be differences in the sociodemographic characteristics or life course stages associated with method preferences and use across settings. These context-specific differences in user profiles may limit the external validity of studies conducted in high-income countries. Third, studies from high-income countries have mostly focused on patterns of fertility following oral contraceptive use, and the limited studies incorporating users of the contraceptive injectable or implant have been based on few study participants. This latter limitation is especially concerning given the rapidly increasing numbers of women in LMICs who use injectables and implants. Fourth, there are geographic differences in the burden of infertility, with higher prevalence of both primary and secondary infertility in LMICs than in high-income countries. Reasons for these differences are not clear but may relate to differences in exposure to untreated reproductive tract infections, HIV infection, postabortion complications, and injuries or infections caused or aggravated by childbirth.
To date, one study by Barden-O'Fallon and colleagues evaluated the return of fecundity among West and East African populations and found that the 12-month probability of pregnancy was lowest among those who had discontinued a hormonal method in order to become pregnant. The study, which used single-decrement life tables, was able to explore differences in these patterns by type of method discontinued, age, and parity, but did not adjust for other known risk factors that might influence fecundability, such as socioeconomic status, partnership status, and health conditions and behaviors. This study also did not comprehensively describe potential short-term reductions in fecundity, which may be enough to dissuade women from using more effective methods. The limited prior research on the topic of contraceptive use and return of fertility, as well as the differing fertility contexts between the Global North and South, makes a compelling case for conducting a systematic evaluation in LMICs. While there are various ways to study fecundability in populations, the field of epidemiology has made great strides in investigating and identifying factors that impact individuals' or couples' ability to become pregnant using multivariable-adjusted time-to-pregnancy study designs. This methodological approach, however, is rarely applied to populations in LMICs. Using pooled, population-based data from 47 LMICs, the current study employs a retrospective time-to-pregnancy design to rigorously evaluate the return of fertility among women who discontinue contraception in order to become pregnant. Our multivariable approach accounts for differing distributions of risk factors for impaired fertility across populations that have not been fully considered by prior studies.
This study, therefore, provides urgently needed quantitative evidence about method-specific impacts of use on return of fecundity in understudied settings. Ultimately, such information is of paramount importance to potentially validate and address, rather than dismiss and ignore, women's concerns about contraception and to enhance person-centered counseling and contraceptive autonomy.
--- Methods
--- Data and Measures
We considered all Demographic and Health Surveys (DHSs) conducted after 2010 that included a reproductive calendar module in which women were asked to provide reasons for discontinuing a method. If a country had more than one survey in this period, we used the most recent survey. Forty-eight DHSs conducted between 2010 and 2018 met the inclusion criteria; one survey was excluded because information on an important covariate, education, was not included in publicly available data. Online appendix Table A1 displays a list of all 47 surveys and corresponding sample sizes included in our analysis. DHS calendar data are retrospective month-by-month histories covering the five-year period prior to the interview. The calendars record women's reproductive status in each month; possible states include pregnancy, birth, termination, and contraceptive use or nonuse. In any month when a woman reported discontinuing a contraceptive method, she was asked why she discontinued. We limited our study to women with a history of sexual activity who discontinued contraception because they "wanted to become pregnant," which assumes that women in our study are exposed to the risk of pregnancy and are not taking deliberate action to avoid pregnancy. Calendar data allowed us to determine the number of cycles post-contraceptive discontinuation it took women to become pregnant, or whether they were unsuccessful during the period of observation.
For all observations, time-to-pregnancy intervals began when women discontinued a method to become pregnant. Women were followed until one of the following endpoints, whichever occurred first: a pregnancy occurred; a woman began using contraceptives again after a period of nonuse and no observed pregnancy; or until three months prior to the interview. This last endpoint avoids underestimating early pregnancies at the time of the interview that are underreported either because women do not yet recognize they are pregnant or because women do not yet want to disclose their pregnancy status. Women who may have been in the early stages of pregnancy at the time of the interview are still included in the study, but as censored observations. Including all months up to the survey interview does not change the results. We also accounted for the presence of longer time-to-pregnancy intervals by censoring all observations at 12 months among those presumably at risk for pregnancy for more than a year. We imposed several inclusion/exclusion criteria for our analytic sample. First, we restricted data to observations for which the month following contraceptive discontinuation was coded as either "not using" or "pregnancy." Second, we excluded observations for which contraceptive discontinuation occurred within the three months prior to the interview, to account for potential underrecognition of pregnancies at the time of the survey. Third, to reduce the threat of recall bias, we limited our analysis to women who discontinued a contraceptive in the two years prior to the survey, which led to the exclusion of an additional 61,753 observations. In addition, if a woman contributed more than one eligible observation, we used the most recent one, so our unit of analysis is women, rather than episodes.
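The follow-up and censoring rules can be sketched for a single woman's calendar. The month codes and function below are illustrative simplifications, not the DHS coding scheme, and the list passed in is assumed to already end three months before the interview.

```python
# Illustrative censoring logic for one woman's post-discontinuation calendar.
# Hypothetical month codes: 'N' = not using contraception, 'P' = pregnancy,
# 'C' = contraceptive use resumed.
def time_to_pregnancy(months, cap=12):
    """Return (cycles_observed, pregnancy_occurred) for a chronological list of
    monthly states beginning the month after discontinuation. Follow-up ends at
    pregnancy, resumption of contraception, or the 12-month cap, whichever
    comes first; the last two outcomes are censored (no pregnancy)."""
    for i, state in enumerate(months[:cap], start=1):
        if state == 'P':      # pregnancy observed in cycle i
            return i, True
        if state == 'C':      # resumed contraception: censored before cycle i
            return i - 1, False
    return min(len(months), cap), False  # censored at end of observation or cap
```

A woman still 'N' in every recorded month contributes a censored episode, matching the handling of women who may have been in early, unrecognized pregnancy at interview.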
We also excluded observations reporting less commonly used methods, such as the female condom, and those using the lactational amenorrhea method. Lastly, we excluded those missing data on key covariates measured in all surveys. The final sample size for our main analysis comprised 33,827 women attempting pregnancy, representing 25,641 pregnancies and 128,263 monthly cycles. Because the number of eligible women for analysis for some countries and methods was small, we pooled data across all surveys to ensure an adequate sample size for comparing time-to-pregnancy by prior contraceptive method used. Our main independent variable, contraceptive method discontinued, was categorized by method type. We included methods in the analysis if at least 500 women in the pooled sample reported using that method, to ensure an adequate number of method-specific observations; methods meeting this criterion are the oral contraceptive pill, IUD, injectable, male condom, implant, periodic abstinence, and withdrawal. For analysis, we grouped periodic abstinence and withdrawal into a category of traditional methods. The surveys included in our study did not collect further information on what type of pill, IUD, implant, or injectable was used, so we were unable to further disaggregate these methods by more specific characteristics. We considered several confounding factors for analysis that are probable risk factors for impaired fecundity or have been empirically associated with fecundability in prior studies. To account for reduced fecundability associated with age, we included a categorical variable based on respondents' age at the time of discontinuation: 15-19, 20-29, 30-34, 35-39, and 40 or older. Information on coital frequency and partner characteristics was unavailable.
Instead, we used a three-category measure of union status that incorporates whether women were in a polygynous union. Some research has suggested that infertility and fecundability are patterned by socioeconomic attributes such as education and income. These patterns do not reflect inherent biological differences across socioeconomic position but instead are mediated by behavioral and lifestyle characteristics, as well as access to health care over the life course. We therefore included variables measuring socioeconomic position or access to health care that may help reduce the threat of residual confounding for risk factors correlated with impaired fecundity. The first, education, was coded as no education, primary, secondary, or higher. The second was a measure of household wealth that was coded according to the DHS wealth quintile classification for each country based on assets and household characteristics. We also included a measure of urban versus rural residence based on urban and rural classifications for each country. We included three sexual and reproductive health measures that may influence fecundability. Parity at the time of contraceptive discontinuation was assessed as a binary variable. As noted earlier, exposure to untreated STIs may affect fecundity. We therefore included a measure of STI history that was assessed from questions asking whether participants had an STI or symptoms of an STI in the 12 months prior to the survey; any indication of an STI or STI symptoms was coded as yes. Our third measure assessed whether the respondent reported correct knowledge of the fertile period during an ovulatory cycle, as this knowledge could be used to optimize the chance of pregnancy in each cycle. Our analyses also included two known risk factors for infertility: body mass index and exposure to tobacco products.
We calculated BMI from weight and height data that were measured directly during the survey and categorized the measure according to the conventional WHO classification of adult underweight, normal, overweight, and obese. Our second measure was a composite binary indicator for use of tobacco products, determined from several questions assessing cigarette, cigar, and chewing tobacco use measured at the time of the survey. All covariates except for age and parity were measured at the time of the survey; age and parity status corresponded to when the woman discontinued contraception. All surveys in our analyses contained the following measures: age, parity, education, urban or rural residence, wealth, union status, and knowledge of the fertile period. Measures of BMI, recent history of an STI, and use of tobacco products were not available for all surveys. Therefore, in a sensitivity analysis, we tested whether our results were robust to a more extensive set of confounders in a subsample of countries that had all available covariates.
--- Statistical Analysis
First, we used the Kaplan-Meier method to estimate survival curves and one-year probabilities of pregnancy separately for each eligible contraceptive method. We also calculated median time to pregnancy for each method using the number of months by which at least 50% of women became pregnant. Second, we used Cox proportional hazard models for discrete survival data to model time to pregnancy and estimate fecundability ratios (FRs). FRs compare the odds of becoming pregnant between the exposed and unexposed groups; an FR less than 1 indicates that the exposed group experienced decreased odds of pregnancy compared with the unexposed or reference group within the first year after contraceptive discontinuation.
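The Kaplan-Meier step can be illustrated with a minimal pure-Python sketch. This is a simplified stand-in for the actual analysis: it omits survey weights, country fixed effects, and any software-specific tie conventions.

```python
from collections import Counter

def kaplan_meier(times, events):
    """Discrete Kaplan-Meier survivor function S(t) for time-to-pregnancy data.
    times[i] is the cycle at which woman i exits; events[i] is True if she
    became pregnant at that cycle (False = censored). Women censored at cycle t
    are counted in the risk set at t. Returns {t: S(t)}."""
    pregnancies = Counter(t for t, e in zip(times, events) if e)
    exits = Counter(times)
    at_risk = len(times)
    surv, s = {}, 1.0
    for t in sorted(exits):
        s *= 1 - pregnancies[t] / at_risk  # per-cycle conditional continuation
        surv[t] = s
        at_risk -= exits[t]                # remove everyone who exited at t
    return surv
```

The median TTP is then the smallest t with S(t) ≤ 0.5, and 1 − S(12) is the 12-month probability of pregnancy reported in Table 2.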
These models account for changes in the average fecundability of the population at risk over time, which result from more fecund women being removed from the risk set in later months. All models accounted for right-censoring and included country fixed effects to control for unobservable characteristics within each country. Tests of proportionality, including visual inspection of log-log survival plots, showed that the proportionality assumption was generally upheld. We assumed that women using traditional methods or condoms served as appropriate counterfactuals for women using methods previously hypothesized to affect the return of fecundity following discontinuation, such as hormonal methods and IUDs. In the main analysis, women using traditional methods were selected as the reference category owing to concerns that condom users may differ from traditional method users with regard to their STI or HIV risk, which could impact time to pregnancy. That said, we also investigated whether inferences were the same when we used condom users as the reference group, as this would provide additional support for the idea that hormonal methods and IUDs influence future fecundity because of biological mechanisms of action. We conducted several additional sensitivity analyses to evaluate the robustness of our findings. First, for users of injectables, we assumed an additional lag of three months to account for the possibility that women may have received their last injection in the month they reported discontinuing the method, and therefore could be fully protected from pregnancy for up to three months. Second, as described earlier, we limited our sample to surveys that included the full set of covariates, including BMI and tobacco use, to examine whether our results were robust to their inclusion.
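The three-month lag adjustment for injectable users can be sketched as follows. This is one plausible implementation under the assumption that the first three cycles after reported discontinuation are treated as fully protected; the exact procedure used by the authors is not specified.

```python
# Illustrative lag adjustment: treat the first `lag` cycles after reported
# discontinuation of an injectable as protected, since the last injection may
# still have been effective during those months.
def apply_injectable_lag(cycles, pregnant, lag=3):
    """Shift the time origin forward by `lag` cycles for one episode.
    Episodes that end within the protected window carry no information about
    post-protection fecundability and are dropped (returns None)."""
    if cycles <= lag:
        return None
    return cycles - lag, pregnant
```

Subtracting three cycles in this way is consistent with the reported shift in the injectable median TTP from five months to two once the lag is applied.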
Third, we conducted all analyses separately for women aged 40 or older, as any potential reductions in fecundity could be amplified for this age group. And finally, we expanded our sample to all eligible episodes within the entire five-year contraceptive calendar. Following prior multicountry DHS studies, we used custom weights accounting for complex sampling designs to allow each country to contribute equally to the pooled analysis; this approach ensures that results are not weighted more heavily toward surveys with larger sample sizes. Specifically, we multiplied the DHS-provided survey weights by a country-specific constant, such that the sample of women from each of the 47 countries in our analysis makes up 1/47th of the pooled sample, the derivation of which is outlined in detail elsewhere. As an additional robustness check, we also conducted a jackknife analysis to ensure that results were not driven by countries with larger sample sizes. Statistics present unweighted ns and weighted percentages. Analyses were conducted in Stata 14.0 using the svy suite of commands. Ethics approval was obtained by the institutions that administered the surveys, and all analyses used anonymized databases.
--- Results
Characteristics of the study sample are presented in Table 1. The majority of women were in their 20s, had at least one prior birth, and had at least a primary education. Most women were in a union, and 8% reported being in a polygynous union. A little less than one third of women reported correct knowledge of the fertile period. Almost half of women in the weighted sample were from the sub-Saharan Africa region, whereas less than 10% were from either Europe or South Asia. Descriptive statistics for users of each contraceptive method type are also presented in Table 1. Women who discontinued injectables and pills made up 31% and 26% of the weighted sample, respectively.
Sixteen percent of the weighted sample discontinued either periodic abstinence or withdrawal, 13% discontinued condoms, 8% discontinued IUDs, and 6% discontinued implants. There were also sociodemographic and regional differences by type of contraceptive discontinued, which provide strong motivation for multivariable analysis. Figure 2 presents Kaplan-Meier survival curves of time to pregnancy by type of method discontinued. For ease of comparison, both panels include the same reference curve for traditional methods, represented by the black line. The top panel presents additional curves for the IUD and the pill, and the bottom panel presents additional curves for the implant and the injectable. Condoms are not included because they overlap closely with traditional methods. Both figures demonstrate that users of the IUD, pill, implant, and injectable experience longer times to pregnancy than users of traditional methods. These curves are quantified in Table 2, which displays the median time to pregnancy and 12-month probabilities of pregnancy observed for each method. The median TTP for traditional method and condom users following discontinuation is two months, while the median TTP for pill and IUD users is three months. Those using the implant and the injectable experience a median TTP of four and five months, respectively. The median TTP for users of the injectable shortens to two months after accounting for a three-month lag. As evidenced by the Figure 2 curves and Table 2 data, there are also differences in 12-month probabilities of pregnancy. Traditional users had the highest probability at 91%, followed by women using the condom, the pill, and the IUD. Women discontinuing injectables and implants had the lowest 12-month probabilities of pregnancy, each at 80%.
Thus, among women discontinuing injectables or implants in order to become pregnant, approximately 1 in 5 did not achieve pregnancy in a year, on average, compared with approximately 1 in 10 women using traditional methods. Women aged 40 or older had longer median TTPs by contraceptive type discontinued, as well as reductions in the 12-month probability of pregnancy for all methods. Among older women who discontinued traditional methods, the 12-month probability of pregnancy was 81%, which is approximately 10 percentage points lower than the probability among all women of reproductive age who also discontinue traditional methods; this difference likely captures well-known age-related declines in fecundity. Notably, 12-month probabilities of pregnancy were much lower for older women who discontinued either hormonal methods or the IUD compared with all women of reproductive age. For example, about 64% of women aged 40 or older became pregnant within a year following discontinuation of injectables, on average, compared with 80% among all women of reproductive age. Table 3 presents results from a multivariable model that accounts for potential differences in underlying fecundity between women. The baseline model adjusts for age, parity, education, urban or rural residence, union status, and knowledge of the fertile period; the model also includes country fixed effects. The first column in Table 3 employs users of traditional methods as the reference category. Compared with these individuals, users of the pill, IUD, injectable, and implant had lower fecundability ratios following contraceptive discontinuation. The largest reductions in odds occurred among women who used injectables or implants: 0.41 and 0.51, respectively.
Patterns are largely similar when employing condom users as the reference group, although FRs increase slightly. There were no significant differences in fecundability between condom users and traditional users. Findings remain similar after conducting several sensitivity analyses, with some exceptions. First, after accounting for a three-month lag for injectable users, we found that the adjusted FR increases from 0.42 to 0.66. Second, we reran our analyses among a subset of surveys that collected information on the full set of covariates and found that results do not change substantially. Third, as shown in column 7, and mirroring our age-specific results in Table 2, we find large reductions in fecundability ratios for women aged 40 or over by contraceptive type after adjustment for covariates. Fourth, when we expand our analysis to all eligible episodes that occur within five years of the survey, we find similar results for all methods except for condoms. Specifically, condom users have a lower fecundability ratio than traditional users that was not observed in our main analysis. Finally, results do not change after conducting a jackknife analysis.
--- Discussion
In this analysis using pooled data from 47 LMICs, we found that some contraceptive methods, when used prior to attempting to get pregnant, are associated with transient delays in return of fecundity, with the longest delays occurring among women who discontinued injectables and implants. These relationships persisted after adjustment for important confounders, suggesting that women's concerns about potential short-term reductions in fecundity following use of certain contraceptives are not unfounded. We acknowledge that our results can be interpreted differently by fertility researchers.
While our findings show that at least half of women will become pregnant within 2-3 months following discontinuation of traditional methods, condoms, the pill, and the IUD, we see different patterns for injectables and implants, two methods that are widely promoted and used across LMICs. More importantly, because fecundity is heterogeneous, the median estimates of time to pregnancy presented in Table 2 do not sufficiently capture how the entire distribution of time to pregnancy shifts to the right following discontinuation of hormonal methods. This distributional shift leads to lower 12-month probabilities of pregnancy for users of hormonal methods than for those who discontinue traditional methods. These impacts are rarely discussed by family planning researchers but may lead to noticeable differences within communities and social networks. As an example, in a hypothetical population of 10,000 women who discontinue injectables or implants, nearly 2,000 may still not experience pregnancy one year later, more than twice the number we would expect in a population of 10,000 women who discontinue traditional methods. Our study corroborates some, but not all, findings from Yland et al., who evaluated the association between pregravid contraceptive use and subsequent fecundability in Denmark and the United States. Similar to Yland et al., we find that users of injectables have the longest delays in return of fertility; both studies found average or median times to pregnancy of about five months. However, our study findings diverge from those of Yland et al. regarding other contraceptive types. For example, whereas Yland et al. found that users of IUDs had increased time to pregnancy compared with users of barrier methods, we do not find this association.
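The hypothetical-population arithmetic follows directly from the 12-month probabilities reported above (91% for traditional methods, 80% for injectables and implants):

```python
# 12-month pregnancy probabilities reported in the text.
p_traditional = 0.91
p_injectable_or_implant = 0.80
population = 10_000

# Expected number still not pregnant one year after discontinuation.
not_pregnant_traditional = population * (1 - p_traditional)         # about 900
not_pregnant_hormonal = population * (1 - p_injectable_or_implant)  # about 2,000
```

So roughly 2,000 versus 900 women, a bit more than a twofold difference in the expected number still awaiting pregnancy at one year.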
Our study also differs in that we find substantial reductions in fecundability ratios among implant users, whereas this relationship was not apparent in the Yland et al. study. These differences could arise from the use of different formulations of hormonal contraceptives across contexts, as well as the larger number of implant observations in the current study: n = 1,373 in this study versus n = 186 in Yland et al. Our study also builds on the findings of Barden-O'Fallon et al., which found lower returns to pregnancy by 12 months among women in West and East Africa who discontinued hormonal methods. Taken together, these results indicate that previous reviews on the topic, which suggested no impact, should be urgently updated to incorporate new evidence. Moreover, future research should evaluate the potential biochemical or biobehavioral pathways underpinning these relationships, which so far remain speculative. Critically, these findings have implications for family planning programs in LMICs. Several global efforts, including FP2030 and the Sustainable Development Goals, emphasize increasing the use of modern contraceptives. However, these efforts are potentially at odds with women's contraceptive preferences and concerns. Our findings bolster the critical need for increased person-centeredness in family planning counseling and provision, in line with wider calls to shift the needle on family planning "successes" away from just "use" toward maximizing autonomy and use of preferred methods. More concretely, our findings indicate that the acceptability of delayed return of fertility should be evaluated when recommending and choosing contraceptive methods. Our study has several strengths. First, we use population-based data that allowed us to account for potential differences in population composition and underlying fecundity across settings.
Second, our sample had a large number of observations of women who discontinued injectables and implants. By contrast, injectable and implant users from the Yland et al. study represented only 0.5% and 1.0% of participants, respectively. Third, there are few studies from LMICs that investigate determinants of fecundability and infertility. Our use of calendar data adds to the limited literature by employing a time-to-pregnancy study design most often used in higher-resource settings. There are also several limitations to note. First, except for age and parity, all covariates were measured at the time of the survey, not at the time of contraceptive discontinuation. It is unclear whether this type of misclassification might bias our main results, since our measure of prior contraceptive type used does not suffer from this same error. Second, we relied on retrospective calendar data, which are subject to recall bias and other types of reporting errors. In their report assessing quality of the DHS contraceptive calendar, for example, Bradley and colleagues' results suggest worse reporting for events further in the past. To address this concern, we limited our observations to the two years prior to the survey, although sensitivity analyses using all five years prior to the survey generally yield similar results. Third, for users of injectables, we did not have data on when a woman received an injection relative to when she reported discontinuation, although we did include a three-month lag in our sensitivity analyses. Fourth, owing to data limitations, we could not distinguish between method types for injectables and IUDs. We note, however, that in low-resource settings, many IUDs are copper, rather than hormonal, and some injectables are formulated to provide contraception for one or two rather than three months.
While DMPA, which provides three months of protection, remains the most common type of injectable in LMICs, there is some variation in the injectable mix across settings, although this is not well documented. Fifth, because interviewers could record only one contraceptive method per month, discontinuation of multiple contraceptive methods is not possible to measure. Reports of using traditional methods like abstinence and withdrawal may also suffer from poor reliability compared with use of hormonal methods. Sixth, the DHS data we used do not include information on regular sexual activity, partners, and other measures that could influence fecundability. Measures of people's underlying fecundity or propensity for infertility were also not possible to estimate. Seventh, we did not include measures of sexual violence and intimate partner violence in our study, even though prior research suggests that these experiences may influence health outcomes, including STI transmission. A final limitation is that we cannot validate two key assumptions of this study: that women's desire to become pregnant following contraceptive discontinuation was stable over time and that women were actively trying to become pregnant over the exposure period. As noted in prior research, short-term changes in pregnancy intention have been well documented in several contexts, and pregnancy ambivalence is also common. --- Conclusion Many women in LMICs either do not use contraception or discontinue contraceptive methods for fear that contraception will inhibit their future fertility.
Although return of fecundity is acknowledged in the WHO Medical Eligibility Criteria for Contraceptive Use, contraceptive counseling protocols and tools used in LMICs may not include nuanced information about return to fecundity following discontinuation, even though this remains a common concern among women. Furthermore, although the WHO MEC discusses potential effects of injectables on return to fertility, there is no mention of other reversible methods. Our novel findings on the contraceptive implant, in particular, warrant increased attention within the family planning community. While we recognize that the present analysis has limitations, we hope our study prompts further research on this historically overlooked topic. Ultimately, our results indicate that delayed return to fecundity after discontinuing some hormonal methods is a common experience in LMICs, providing what we believe to be some of the first multicountry evidence to validate women's lived experiences from these regions. Contraceptive counseling policy and programs, therefore, should consider integrating this information to provide a fuller picture of the range of time-to-pregnancy experiences following contraceptive discontinuation, especially for injectables and implants. While information about potential declines in fecundity is just one criterion that may influence women's contraceptive use, individuals have a right to this knowledge so that they can make informed choices.
One of the most common barriers to using effective family planning methods is the belief that hormonal contraceptives and contraceptive devices have adverse effects on future fertility. Recent evidence from high-income settings suggests that some hormonal contraceptive methods are associated with delays in return of fecundity, yet it is unclear if these findings generalize to low- and middle-income populations, especially in regions where the injectable is widely used and pressure to bear children is significant. Using reproductive calendar data pooled across 47 Demographic and Health Surveys, we find that the unadjusted 12-month probability of pregnancy for women attempting pregnancy after discontinuing traditional methods, condoms, the pill, and the IUD ranged from 86% to 91%. The 12-month probability was lowest among those who discontinued injectables and implants, with approximately 1 out of 5 women not becoming pregnant within one year after discontinuation. Results from multivariable analysis showed that compared with users of either periodic abstinence or withdrawal, users of the pill, IUD, injectable, and implant had lower fecundability following discontinuation, with the largest reductions occurring among women who used injectables and implants. These findings indicate that women's concerns about potential short-term reductions in fecundity following contraceptive use are not unfounded.
Emerging research has also demonstrated that residing in highly cohesive neighbourhoods may strengthen a child's ability to positively cope with adversity. For example, among children exposed to maltreatment, those who report stronger social ties with adults in their community, including their parents, extended family members, and schoolteachers, tend to report better overall adjustment compared with children who report weaker social ties with community members. Children with stronger community ties also score more highly on measures of resiliency, and do not present with the elevated levels of antisocial behaviour typically seen among children exposed to maltreatment. This raises the possibility that for children and adolescents exposed to SLEs, living in a cohesive community may buffer the adverse, long-term impacts of these stressors. To our knowledge, however, no population-based, longitudinal study has tested whether social cohesion moderates the relationship between childhood adversity and subsequent common mental and behavioural disorders. We therefore sought to investigate the role of neighbourhood social cohesion as a potential modifier of the associations between exposure to stressful life events in early adolescence and symptoms of mental and behavioural disorders two years later, using well-characterized longitudinal data from a large, prospective cohort. We hypothesized that neighbourhood social cohesion would buffer the effect of stressful life events on negative outcomes in adolescence. --- Methods --- Data source Data for the present study were drawn from cycles 5, 6, 7, and 8 of the National Longitudinal Survey of Children and Youth (NLSCY). The NLSCY is a longitudinal study of Canadian children and adolescents designed to track multiple aspects of youth health and development. Stratified sampling resulted in a sample that is considered representative of children living in private homes in Canada's 10 provinces.
Cohort members were followed prospectively, with assessments from multiple informants every two years. Statistics Canada obtained written informed consent from parents of survey respondents and regulates access to survey data through national research data centres. The present sample was based on 5913 respondents who were aged 12/13 (T1) in cycles 5, 6, or 7, and for whom data were available two years later at ages 14/15 (T2), during cycles 6, 7, or 8. --- Measures --- Mental and behavioural disorders Adolescent psychiatric symptoms were self-reported at T1 and T2 using behaviour scales adapted from questionnaires used in the Montreal Longitudinal Study and the Ontario Child Health Study. The scales were designed to identify children who would be most likely to meet DSM-III-R criteria for a psychiatric diagnosis. Paper questionnaires were completed privately by adolescents, and returned to interviewers in a sealed envelope. For the present study, the following subscales were of interest: anxiety/depression, physical aggression/conduct disorder, property offence, and hyperactivity/inattention. Adolescents responded on a 3-point scale. Subscale scores, pro-rated for item-level missingness, were provided by Statistics Canada. For each outcome, subscale scores were dichotomized at the top decile to indicate psychopathology of potential clinical relevance, consistent with previous studies. Adolescent suicidal behaviour was assessed at T2 on the basis of two questions. First, adolescents were asked whether they had seriously considered suicide in the past 12 months, with suicidal ideation defined as answering 'yes' to this question. Second, adolescents who screened positive for suicidal ideation were additionally asked how many times they had attempted suicide in the past year. For the present study, suicide attempt was defined as one or more attempts in the past year.
--- Stressful life events [SLEs] Adolescent exposure to SLEs in the past 2 years was reported by the person most knowledgeable about the child at T2. Respondents were asked whether the participant had experienced an event that caused the participant 'a great amount of worry or unhappiness' in the past 2 years. Those who answered 'yes' were then asked about 13 specific life events. For the present study, we defined exposure to SLEs in the previous two years as a binary variable. --- Neighbourhood social cohesion Primary caregivers reported on neighbourhood social cohesion at T1. The social cohesion score was based on 5 statements, rated on a 4-point scale from 'strongly agree' to 'strongly disagree'. Scores for these items were summed to create a total score for social cohesion, ranging from 0-15. For the present study, social cohesion was dichotomized at the first quartile. --- Covariates Adolescent sex was reported by the primary caregiver at T1. Caregivers reported on adolescent ethnicity at T1. When possible, data from earlier cycles were carried forward by Statistics Canada to replace missing data on this variable. Depressive symptoms in the primary caregiver were assessed at T1 using the Depression Rating Scale, a shortened 12-item version of the CES-D. For the present study, caregiver depression was operationalized as a score in the top 10% on the depression scale. Family poverty was assessed using the ratio of income to the corresponding low-income cut-off (LICO). LICO is defined as the income below which a family would have difficulty making ends meet, and is based on family size and geographic area. For the present analyses, this ratio was dichotomized at 1. Family composition was reported by the primary caregiver at T1. For the present analysis, we dichotomized this variable to consider adolescents living with two biological parents v. those living in other family structures.
Primary caregivers reported on their levels of social support using the 8-item social support scale. Caregivers rated their agreement with each item on a 4-point scale, from 'strongly agree' to 'strongly disagree'. For these analyses, social support was dichotomized at the bottom quartile. Finally, perceived neighbourhood safety was assessed in the NLSCY using a 3-item scale. Caregivers rated their agreement with each statement on a 4-point scale, from 'strongly agree' to 'strongly disagree'. Scores on neighbourhood safety were trichotomized to reflect low, average, and high safety. --- Analysis Separate multivariable logistic regression models were estimated for each mental health outcome. First, we established whether SLE exposure was associated with each outcome by fitting a model including baseline mental illness symptoms, stressful life events, neighbourhood social cohesion, ethnicity, sex, caregiver depression and family poverty. p < 0.05 was considered to be statistically significant. To test the modifying effect of neighbourhood social cohesion on the association between SLE exposure and each outcome, we fitted an interaction term between SLE exposure and social cohesion, and tested whether this improved model fit via Score χ2 tests. In the presence of effect modification, we reported stratified effects of SLEs on each outcome, in low and higher social cohesion neighbourhoods, as defined above. Normalized survey weights based on derived weights generated by Statistics Canada were used to take into account the complex survey design. Cases with missing data on the exposure or effect modifier were listwise deleted. All analyses were conducted using SAS software. --- Results Adolescents who had experienced SLEs in the past 2 years were more likely to be white, have a depressed primary caregiver, have a family income below the corresponding low-income cutoff, and were less likely to be living with two biological parents.
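As a sketch of the stratified-effects step described in the Analysis subsection, an unadjusted odds ratio can be computed within each cohesion stratum from a 2x2 table of SLE exposure by outcome. All counts below are hypothetical and are not taken from the NLSCY:

```python
def odds_ratio(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """Odds ratio from a 2x2 table: odds of the outcome among
    SLE-exposed adolescents divided by odds among the unexposed."""
    return (exp_cases / exp_noncases) / (unexp_cases / unexp_noncases)

# Hypothetical counts (outcome: top-decile symptom score) in each
# neighbourhood cohesion stratum.
or_low_cohesion = odds_ratio(60, 140, 20, 180)   # SLE effect clearly present
or_high_cohesion = odds_ratio(30, 170, 28, 172)  # SLE effect near the null
```

Stratum-specific odds ratios that diverge like this are what motivate the formal interaction test: in the fitted logistic models the authors add an SLE-by-cohesion product term and compare model fit, rather than relying on unadjusted tables.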
--- Predicting adolescent mental health The available sample size with complete data varied depending on the outcome investigated, from 3629 for hyperactivity to 3776 for suicidality. Those missing data on T1 variables were more likely to be male, non-white, living with a depressed caregiver, living in poverty, living in low-safety neighbourhoods, and to have a caregiver with low social support, and were less likely to be living with 2 biological parents. Those who dropped out between T1 and T2 were more likely to be male, white, and to live in neighbourhoods characterized by low cohesion and low safety. --- Depression/anxiety In our main model, living in a cohesive neighbourhood was protective against adolescent depression/anxiety. There was a significant interaction between SLEs and neighbourhood cohesion. In low-cohesion neighbourhoods, SLEs were significantly and positively associated with adolescent depression/anxiety, but no effect was observed in higher cohesion neighbourhoods. --- Suicidal ideation Adolescents who had experienced SLEs in the past two years were more likely to report suicidal ideation than those who had not. However, this main effect was superseded by a significant interaction between SLEs and neighbourhood cohesion; thus, the effect of SLEs on adolescent suicidal ideation was substantially greater in low social cohesion neighbourhoods than in higher cohesion neighbourhoods. --- Suicide attempt Similarly to our findings for suicidal ideation, adolescents who had experienced SLEs in the past two years were also more likely to have attempted suicide than those who had not. The interaction between SLEs and neighbourhood social cohesion was not statistically significant; however, inspection of the stratified results suggested that SLEs had a significant effect on suicide attempt among adolescents in low cohesion neighbourhoods but not higher cohesion neighbourhoods.
Aggression/conduct disorder Adolescents who had experienced SLEs had higher odds of elevated aggressive/conduct symptoms. The interaction between SLEs and neighbourhood cohesion was also statistically significant, and as for most other outcomes, the effect of SLEs was stronger in low-cohesion neighbourhoods, but absent in higher cohesion neighbourhoods. --- Property offence Similarly, the experience of SLEs was associated with increased risk of property offence, with strong evidence of an interaction between SLEs and neighbourhood cohesion suggesting that this association was significantly stronger in low social cohesion neighbourhoods than in higher social cohesion neighbourhoods. --- Hyperactivity Neither stressful life events, neighbourhood social cohesion, nor their interaction significantly predicted adolescent hyperactivity. --- Sensitivity analysis We conducted a post hoc sensitivity analysis exploring the effects of exposure to SLEs across three levels of neighbourhood social cohesion. Results suggested that the moderating effects of social cohesion were largely constrained to the lowest quartile: SLEs were significantly associated with mental health and behavioural outcomes at the lowest levels of social cohesion, but not in moderate or high cohesion neighbourhoods. --- Discussion In this longitudinal cohort study, the association between SLEs and four out of six major mental health or behavioural outcomes in young adolescents was stronger amongst those living in low social cohesion neighbourhoods, measured two years earlier, than amongst those in higher social cohesion neighbourhoods. A trend to this effect was found for a fifth outcome, suicide attempts, although no discernible effects were apparent for our final outcome, hyperactivity. Associations between exposure to SLEs and psychiatric symptoms were attenuated to the null for adolescents living in neighbourhoods with higher levels of social cohesion.
These results could not be explained by differences in income, sex, ethnicity, family structure, social support, neighbourhood safety, mental health at baseline, or depression in the primary caregiver. The consistency of our results suggests that neighbourhood social cohesion may effectively buffer children and adolescents from the otherwise potentially deleterious effects that SLEs can have on future mental health and behavioural problems. If causal, our findings suggest that efforts to improve neighbourhood social cohesion, specifically among teenagers, would have positive effects on future mental health. --- Neighbourhood social cohesion and mental health Social cohesion was, on its own, associated with only one of the adolescent mental health outcomes assessed: symptoms of depression and anxiety. Across the majority of outcomes, higher social cohesion appeared to buffer the effects of exposure to SLEs. Among adolescents residing in neighbourhoods characterized by low social cohesion, the recent experience of SLEs was associated with increased risk of depression/anxiety, suicidal ideation and attempt, aggression/conduct disorder, and property offence. In higher cohesion neighbourhoods, the effects of SLEs on these psychiatric symptoms were attenuated to the null. Exposure to SLEs in childhood and adolescence has been consistently linked to later psychiatric illness, including internalizing and externalizing problems, and suicidal behaviour. Few studies have previously investigated whether neighbourhood social cohesion buffers such stressors, although one study showed that higher perceived neighbourhood cohesion attenuated the effects of maternal hostility on child externalizing behaviours, including symptoms of conduct disorder and property offences.
Two further studies from the same sample have shown that greater neighbourhood social cohesion moderates the effects of childhood poly-victimization on early and late adolescent psychotic symptoms. Our results suggest similar mechanisms may be at play with regards to neighbourhood social cohesion. At the individual level, social support has long been hypothesized to buffer against the effects of stress on mental health. Evidence suggests this may operate in at least two ways. First, the perceived availability of social support can lead to more benign cognitive appraisals of stressors as they are encountered, and second, the experience of social support during a time of stress can lead to a dampening of the behavioural and even physiological responses to stressors. Similar mechanisms may apply to adolescents living in socially cohesive neighbourhoods following exposure to an SLE. Notably, our results were not explained by caregiver social support at the individual level, suggesting that social processes operating at the wider neighbourhood environment may be at play. It is also possible that those living in more socially cohesive neighbourhoods benefit from social learning via increased exposure to multiple adult role models, or more generally from positive emotional and instrumental support between neighbours. Our findings suggest further longitudinal research is warranted to tease out potential pathways between SLEs, social cohesion and adolescent mental health. --- Alternate interpretations Results of sensitivity analysis suggested that the moderating effects of social cohesion were most pronounced for children living in neighborhoods with the lowest levels of social cohesion; that is, there may be a threshold of social cohesion above which additional incremental improvements have little effect on resilience.
Alternatively, these results can be viewed as evidence for a 'double disadvantage' effect, whereby the deleterious effects of life stressors on mental health are only evident among adolescents additionally exposed to suboptimal neighbourhood conditions. Beyond social cohesion, other neighbourhood factors have also been reported to moderate the associations between acute SLEs and psychiatric outcomes, but only below certain thresholds; for example, strong associations between SLEs and increased aggression appear to be restricted to children living in the most economically disadvantaged neighbourhoods. Importantly, our findings were robust to adjustment for neighbourhood safety, lending credence to the possibility that neighbourhood social cohesion had moderating effects on various mental health outcomes following exposure to SLEs, over and above the influence of neighbourhood structural disadvantage. Whether these results are interpreted as a buffering effect of higher levels of social cohesion, or an amplification of the negative effects of SLEs by low social cohesion, they nonetheless suggest that improving low social cohesion may have beneficial consequences for youth exposed to life stress. --- Strengths and limitations We acknowledge some limitations to the present study. Caregiver report of neighbourhood social cohesion may reflect certain aspects of their personality and behaviour, introducing bias. For example, parents who are actively involved in the community may also be more likely to promote adaptive behaviour in their children. We sought to minimise this by adjusting for caregiver depression and individual-level social support; however, we were unable to control for other aspects of the primary caregiver's mental health and behaviour, including parenting practices. Adolescent psychiatric symptoms were assessed using self-report scales, and as such may not reflect psychiatric diagnoses.
Although these scales were designed to correspond to DSM-III criteria, they are not intended as diagnostic instruments. SLEs were assessed via retrospective caregiver report between T1 and T2. While such data are potentially subject to recall bias, it has been suggested that self-reports can be reliable for relatively rare and important events. The short timeframe for recall in the present study also increases confidence in the reliability of the reports. For certain events, caregivers may have withheld information out of fear of recrimination or social desirability. Future studies may need to use a multi-informant approach to assess exposure to stressful life events in a more objective way. Finally, we did not differentiate between the 13 different types of stressors assessed. However, studies examining multiple types of adverse childhood experiences have reported largely non-specific effects on mental health. These limitations were balanced by notable strengths. Our study leveraged data from a large population-based prospective sample of adolescents. Additionally, the use of prospectively collected data and adjustment for baseline symptoms allowed for clarity in the temporality of relationships between neighbourhood cohesion and mental health. --- Public health implications The consistency of our findings strengthens the possibility that neighbourhood cohesion in early adolescence may mitigate mental health problems for teenagers exposed to stressful life events in childhood. Given that adolescence is a key period for the emergence of mental health disorders, which often predict worse adult physical, mental and social outcomes, identifying modifiable prevention targets is a central public mental health concern.
We suggest that selected intervention strategies to promote social integration amongst youth who have recently experienced SLEs could be warranted, particularly given that over 1 in 5 adolescents in our sample had experienced at least one SLE in the two years before assessment. These could include helping such individuals develop and maintain peer relationships, known to be of central importance to adolescent health and well-being, or establishing or enhancing school- or community-based initiatives which promote conditions for greater prosocial behaviours and social cohesion. Supplementary material. The supplementary material for this article can be found at https://doi.org/10.1017/S0033291719001235. --- Conflict of interest. None. Ethical standards. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.
Background. Exposure to stressful life events is an established risk factor for the development of adolescent mental disorder. Growing evidence also suggests that neighbourhood social environments, including strong social cohesion, could have a protective effect on mental health. However, little is known about how neighbourhood social cohesion may buffer against the effects of stressful life events on adolescent mental health. Our aim was to assess whether neighbourhood social cohesion modifies the association between stressful life events and adolescent mental health outcomes. Methods. Data were drawn from a nationally representative prospective sample of Canadian adolescents, including 5183 adolescents aged 12/13 years at T1 and 14/15 years at T2. Caregivers reported neighbourhood social cohesion at T1, and exposure to stressful life events between T1 and T2. Symptoms of mental health and behaviour problems were self-reported by adolescents at T1 and T2. Multivariable logistic regression was used to determine whether the relationship between stressful life events and outcomes was modified by neighbourhood social cohesion. Results. Associations between stressful life events and adolescent outcomes were statistically significantly lower in neighbourhoods with greater social cohesion for: depression/anxiety (high cohesion OR = 0.98 v. low cohesion OR = 3.11), suicidal ideation (OR high = 1.30 v. OR low = 5.25), aggression/conduct disorder (OR high = 1.09 v. OR low = 4.27), and property offence (OR high = 1.21 v. OR low = 4.21). Conclusions. Greater neighbourhood social cohesion appeared to buffer the effects of stressful life events on several domains of adolescent mental health. This potentially presents a target for public health intervention to improve adolescent mental health and behavioural outcomes.
Stressful life events [SLEs] in childhood and adolescence are well-established risk factors for the development of later psychiatric problems (Kessler et al., 1997). Exposure to both acute and chronic stressors early in life, ranging from parental separation to exposure to violence or abuse, can have adverse, long-term impacts on mental health. For example, SLEs, including maltreatment, have been linked to increased depressive and anxiety symptoms (Michl et al., 2013), as well as antisocial behaviour (Lansford et al., 2002), conduct disorder (Jaffee et al., 2005), hyperactivity (De Sanctis et al., 2012), and suicidal ideation (Afifi et al., 2008). These early mental health problems can persist into adulthood (Naicker et al., 2013), carrying additional risks of experiencing substance use problems, lower educational attainment, difficulty maintaining stable employment, and difficulty developing healthy and meaningful interpersonal relationships (Colman et al., 2009). Given their potential long-term impacts on mental health and psychosocial development, early life SLEs present a major public health issue for which we need to identify potential factors which may improve the long-term outlook for children exposed to these early-life stressors. Social cohesion, defined as the level of connectedness between individuals living in close geographical proximity (Sampson, 2003), may present such a factor, and has been linked with better physical and mental health (Araya et al., 2006;Echeverría et al., 2008). One mechanism through which this may operate is via the strength of community ties, which may create environments where health-promoting behaviours are reinforced, and negative behaviours (e.g. vandalism, drinking in public spaces) are discouraged (Kawachi and Berkman, 2001;
Background This study explores the experience of hospital-based healthcare for people who are vulnerably housed or homeless. Literature suggests that the healthcare system is either inaccessible to certain groups or fails to meet their needs. Data outline barriers to care for Indigenous Canadians, members of the LGBTQ* community, persons experiencing ongoing or historical trauma, persons using substances, and those experiencing homelessness or who are vulnerably housed [1][2][3][4][5][6][7][8][9][10]. Thirty-five thousand Canadians are homeless on any given night, and 235,000 Canadians experience homelessness in a year [11]. Average life expectancy for homeless persons is estimated at between 42 and 52 years [12,13]. Between 44 and 60% of people who experience homelessness will use illicit substances in their lifetime [11,14,15]. The primary objective of the Canada Health Act, the foundational legislation of Canada's universal healthcare system, is "to protect, promote and restore the physical and mental well-being of residents of Canada and to facilitate reasonable access to health services without financial or other barriers" [16]. This implies that health services must be tailored to eliminate avoidable barriers to access, and should actively seek to protect, promote and restore the health of all Canadians, including the most marginalized. Data in this study derive from a mixed-methods study funded by the South East Local Health Integration Network (SELHIN) exploring palliative care services for the homeless and vulnerably housed. In this study, "homelessness" or "vulnerably housed" includes those who are living out-of-doors, in substandard conditions not fit for human habitation, in temporary or unstable accommodations, in shelters, and those who are at risk of losing their existing housing [17]. --- Methods --- Study design A survey was used to obtain data from health and social services providers, and interviews were conducted with key informants from this group. 
A survey along with focus groups and in-depth interviews collected data from participants with lived experience of homelessness. See Fig. 1 for an outline of all data collection and Additional file 1 for survey tools and interview guides. Ethics approval was obtained through Queen's University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board. --- Participants and sampling. Health and social service providers A survey was distributed widely by email to organizations throughout the SELHIN that work with people experiencing homelessness. The survey included questions about the participant's organization, scope of practice, and thoughts and opinions on the provision of care to people experiencing homelessness, with an explicit emphasis on end-of-life care. All questions were multiple choice; however, all had free-text spaces in which participants could include comments and other considerations. In total, 136 health and social service providers (HSSPs) responded to the survey. Following survey collection, community agencies identified key informants (KIs) who had been employed by the organization for at least 1 year and had provided front line services. KIs served a mix of urban, rural, and semi-rural settings. Research assistants conducted ten in-person or telephone interviews using a semi-structured interview guide exploring themes that were developed following review of the survey findings: challenges in accessing palliative care services by people experiencing homelessness; the impact of substance use on service acceptability and access; education and resources required to support people experiencing homelessness and/or substance use; and recommendations for system changes. Interviews lasted approximately 60 min and were conducted by research assistants or by the principal investigator. --- Persons with lived experience Persons with experience of homelessness were recruited from agencies mandated to provide services to those who are vulnerably housed. 
Snowball sampling was used to recruit further participants. The inclusion criterion was a past or present history of homelessness. Participants completed a survey collecting basic demographic variables, information on education, housing, service use, self-rated mental and physical health, and substance use. Support was provided when literacy was a concern. Six focus groups were held with 2 to 7 participants, along with 4 in-depth interviews, all led by research assistants or by the principal investigator, most of whom had experience working with people with lived experience of poverty and substance use. The intent was to run focus groups exclusively, but due to scheduling challenges, several participants were interviewed individually. Both interviews and focus groups used the same semi-structured interview guide exploring themes related to how homelessness affects access to palliative care, the impact of substance use, thoughts on death and dying, and recommendations to improve end-of-life care for vulnerably housed people. Theme saturation was reached before the end of the data collection period, in that no new themes were appearing in the interviews and focus groups; however, due to the importance of including the community in the research process, the desire to have a large number of participants with lived experience, and the fact that other participants had already been recruited and expected to participate, two final focus groups were completed. In total, 31 people participated; the sessions lasted between 2 and 3 h and were held at Street Health Centre in Kingston. Participants were given a stipend of $50 for their participation. --- Data Analysis. KI interviews and focus groups were audio-recorded and transcribed. Free-text responses from the HSSP surveys provided sufficiently rich written content to include with the transcripts. 
While initially reviewed for findings relevant to palliative care, transcripts were reviewed a second time by both researchers seeking themes related to access to and experience of healthcare services in general. Transcripts and survey responses were reviewed, analyzed and coded as one data set by 2 independent researchers, both of whom have experience with vulnerable populations as well as research and clinical experience in trauma- and violence-informed and equity-oriented health care (EOHC) approaches. Themes were then reviewed collaboratively by the 2 researchers to ensure consensus. The analysis was informed by directed content analysis [18,19], which can be used when there is existing theory about a phenomenon, but this theory is incomplete. It is notable that very few of the persons with lived experience (PWLE) had had first-hand personal experience with end-of-life care, and much of their discussion related to the provision and receipt of healthcare services more broadly. Data were sufficiently rich that there were extensive findings related to healthcare experiences in general, and theme saturation was felt to be reached for these themes despite this not having been the original focus of the project. --- Results Quantitative data will be reported in detail elsewhere [20,21]. Sociodemographic data relevant to the discussion is included in Table 1. --- Qualitative findings Participants' experience of either accessing services themselves, or of assisting clients to access services, was predominantly negative. Four themes were highlighted by participants: experiences and consequences of stigma and shame when accessing healthcare; lack of accountability of the healthcare system towards equity-seeking populations; inflexibility of the healthcare system; and positive experiences that warrant discussion for what they teach us about potential improvements. --- Experience of stigma and shame when interfacing with the healthcare system The experiences of stigma among PWLE were overwhelming. 
In some cases, the stigma was so painful that it superseded any health complaints, previous trauma, or other concerns a patient might have. Stigma was by far most pronounced in the context of current or documented history of substance use, even if substance use was remote, compounding the sense of shame and stereotyping. Box 1 Participant Quote I'll share a story that was shared with me. [This woman] had suffered a brutal rape. Horrific. Absolutely horrific. She was a woman in her late 40s. Lived on the street throughout her whole life, back and forth. She was telling me her story. She needed to share it […] and she didn't cry. Not one tear when she talked about the abuse that she endured [..]. She wept when she talked about how she got treated at the hospital because she was bleeding so profusely and she flinched at a needle and the comment from the nurse was made to her: "Well look at your arms, as if you have a problem with needles". That weighed so heavily and this is when the woman broke down. The abuse was horrific but she had almost been marinating in that level of violence and abuse all her life. The devastating part of it all was the shame she felt from the hospital […] because of her I.V. drug use. Stigma, or anticipated stigma, had important consequences for health. As reported by others [22], PWLE avoid care due to past negative experiences. They might leave in the middle of a care session, even removing intravenous lines in order to extricate themselves from intolerably stigmatizing situations. PWLE were often isolated from support networks when in care because their support networks, coming from the same social contexts, were equally stigmatized and occasionally overtly excluded by healthcare providers. PWLE lacked trust towards healthcare providers due to past experiences, which had significant impacts on their care-seeking behavior and likelihood of following through on provider recommendations. 
Finally, PWLE and HSSPs had many examples in which they felt that complaints were not taken seriously, often due to a history of substance use, which caused them to fear that they would be unable to obtain appropriate care. Box 2 Participant Quote It actually got so bad that I actually unhooked my IV and left the hospital and didn't go back. […] I just couldn't believe it. It was scary actually because when I unhooked that IV, I thought to myself: 'What am I doing here?' That's how scared I was that they actually set it off in me that I started to think 'Oh god, now they are going to do this to me and now they're not gonna take proper care of me.' The presence of an advocate from outside of their social network had a significant impact on the care patients received. While this was more likely to enable them to receive care in a respectful and appropriate manner, it further highlighted the stigma they experienced when their advocate was not present. --- Lack of accountability of the healthcare system towards equity-seeking populations Participants felt that the healthcare system was not accountable to the people it served. Participants articulated the responsibility of healthcare providers to provide excellent, empathic care to everyone who presents, regardless of their socioeconomic status, substance use history, or life circumstances. Healthcare providers were felt to have a lack of understanding of the impact of social determinants of health, ongoing trauma and past adversity on people's health and healthcare presentations. Examples were given of clients asked to leave the hospital because of the way they dressed or smelled. Participants felt that healthcare providers lacked knowledge around harm reduction and the root causes of substance use and adversity, and that they appeared to lack empathic or compassionate curiosity towards patients and the difficulties they encountered. 
"You know it's all those kids we think about when we hear these horrific news stories of abuse. They went into the foster care system and then we don't think of them again but that little kid ends up being the 30-year-old with a criminal record and that little kid ends up being a woman who's prostituted for the last 10 years." Respondents wanted to see medical practitioners whose priority was their patients rather than status, job security, or finances. They also felt that having peers with lived experience of substance use, homelessness, or other equity-related challenges operating within the healthcare system would help make care more accountable and acceptable for them and others. --- Inflexibility of the system HSSPs described a healthcare system that was not tailored to meet the needs of their clients. The system was described as designed by middle-class people for middle-class clients, expecting conformity to the system rather than tailoring the system to the differing needs, desires and challenges of patients. Examples included the requirement that housing be obtained before treatment could be initiated when housing was not an option; a lack of flexibility for patients who might show up late or miss appointments; and a lack of openness to a harm reduction approach that might allow patients to receive a tailored form of treatment in the context of substance use rather than being dismissed out of hand. --- Positive experiences While the majority of the discussion, both from HSSPs and PWLE, focused on negative experiences of care, there were also some positive encounters related to healthcare experiences where providers upheld dignity, autonomy and choice for patients, and where they provided flexible, non-judgemental services in spaces where clients felt welcomed. Participants used terms such as "trust" and "compassionate" to describe these positive experiences of care. Box 5 Participant Quotes "She's a nurse here, yes. I adore her. I adore her. 
I respect her and I trust her and she's the sweetest girl that I've ever had, the sweetest medical care person I've ever had take care of me. She's just amazing […] Yeah, like she's very very thorough and she's very compassionate. I just, oh my heart's with her, I love her. Yeah." "They are really like, hey we like the atmosphere of this place. We like that people here treat us really nice and we're people. We feel loved. There are paramedics here who are, you know, assisting us. Um we really feel safe in this space and like there's no judgement and we want to keep coming back here." --- Discussion Our findings echo the negative experiences and resulting impacts on health and healthcare access of equity-seeking populations described in other studies, including the homeless and vulnerably housed [1-8, 10, 23]. These include care avoidance, stigma, inflexibility of the current system, unmet healthcare needs and a lack of harm reduction philosophies integrated into the delivery of care. While listening to the voices of our participants is key to understanding the inadequacies of our system, listening to these voices also presents an opportunity for change. There is a small but increasing body of literature on Equity-Oriented Health Care (EOHC) and trauma- and violence-informed care in healthcare settings, but these theories are rarely applied to hospital-based medicine and do not address hospital-based medicine for the homeless or vulnerably housed. We believe that the articulation of EOHC [24][25][26] as an approach may present us with a road map and tools to respond to the concerns of homeless and vulnerably housed clients, particularly with respect to their concerns about discrimination, stigma, and inflexibility of the system as articulated in our study and others [23,27]. EOHC rests on 3 components. 
The first is trauma- and violence-informed care (TVIC), which recognizes the prevalence of past and ongoing trauma in people's lives and acknowledges the way in which trauma affects people's physical and emotional health, interpersonal relationships, and ability to access care. TVIC rests on 5 principles [28]: [1] Trauma awareness and acknowledgement; [2] Safety and trustworthiness; [3] Choice, control and collaboration; [4] Strengths based and skills building; and [5] Cultural, historical, and gender issues. The principles of TVIC are echoed in participants' narratives. Participants shared the great burden of past and ongoing trauma that people facing homelessness and substance use have experienced. The need for safety and trust was explicitly articulated, as well as the challenges in developing that trust. Choice, control and collaboration are the antithesis of the stigmatizing and dismissive care that participants too often received in healthcare encounters, which is neither strengths based nor skills building. Finally, much literature supports the ongoing impact of gender, ethnicity, indigeneity and history on access to care [4,6,8]. The second component of EOHC is harm reduction. Most of the literature examining PWLE of homelessness identifies substance use, and the healthcare system's response to substance use, as significant concerns [2,14,15,23]. Harm reduction encompasses programs, practices, policies and philosophies that aim to reduce the harms of substance use, viewing substance use as a health issue rather than a moral failure [26]. Participants felt that healthcare providers viewed their substance use as making them less worthy of dignified care and less valuable as human beings. A harm reduction approach requires a fundamental shift in how the healthcare system interacts with people who use substances. 
In addition to formal policies and programs, such an approach requires us to see the people behind the substance use, to recognize their dignity, experiences, trajectories, and challenges. Cultural safety is the third component of EOHC. Culturally safe care is particularly important in the Canadian context, where Indigenous people continue to experience the negative effects of current and past colonization [6,29], but would be relevant in any context of human diversity. Culturally safe care explicitly addresses inequitable power relations, racism, discrimination, and effects of historical and current inequities within health care encounters [29]. More than just an attitude, culturally safe care requires knowledge of history and of the root causes and consequences of inequity on the part of healthcare providers. Finally, EOHC requires that an approach to and delivery of care be developed with input from all stakeholders, including people with lived experience, but also all members of the healthcare team from physicians and nurses to janitors and receptionists. A recent study found that cross-sector collaboration providing integrated health care reduced barriers to access and also enabled self-managed care [30]. These changes require leaders to engage not only with providers who are already advocates for equity-seeking populations, but also with those who are not. EOHC presents a unique opportunity to build partnerships among professional and patient groups that rarely mix outside of clinical care and allows a system to be responsive to the local needs of its population. Communities with higher rates of substance use, higher percentages of Indigenous clients, or recent losses of employment with increases in precarious housing could meet the challenges and opportunities presented by EOHC differently. 
--- Limitations The HSSPs in our study were almost all involved in providing care to homeless and vulnerably housed individuals and were generally self-described advocates for this group. Our study might have benefitted from integrating the voices of HSSPs who are not specifically committed to working with equity-seeking populations. Additionally, our data were originally collected in the context of work on palliative care. Further questions specifically targeting other healthcare experiences might have yielded additional information. Nevertheless, our findings are amply supported by the extensive verbal and written discussions around healthcare services from all study participants and align with findings in the literature as well [1-3, 10, 23]. --- Conclusions There are two key messages in our findings: The first is that the care we are providing to our most vulnerable clients is not adequate and does not meet the professional standards of accessibility, universality, and patient-centeredness. An often-quoted line by Dr. Edward Trudeau from the 1800s proposes that the physician's role is "to cure sometimes, to relieve often, to comfort always". Our findings demonstrate that for certain groups we may be failing on all three counts. Our second message is that we believe there is a way to raise our healthcare system to this standard, and that EOHC, developed locally and tailored to place, provides a road map from which we can begin. EOHC requires a cultural shift within our profession, away from the standardized one-size-fits-all care we have become used to and back, perhaps, to a more versatile, creative way of delivering care that many of us aspire to. It will require teamwork in hospitals and clinics, changes to curriculum in medical and nursing schools and continuing professional development. It will require those who hope to be leaders in this field to have compassion and understanding for colleagues for whom this is more difficult. 
Finally, it will require us to not only listen to, but to hear and to see the patients before us in all their strength, complexity and occasional despair, to consider the trajectory and meaning of their lives within our broader society, as well as our own privileged place therein. --- Authors' contributions MM was the principal investigator for this study. She conceptualized the study, developed the research tools, and coordinated the survey, focus groups and interviews. She reviewed and coded the qualitative and survey data, and contributed to the drafting and editing of the manuscript. EP was the co-investigator. She provided consultation during the research process, reviewed and coded the qualitative and survey data, and led the writing of the manuscript. Both authors read and approved the final manuscript. --- Competing interests The authors declare that they have no competing interests. ---
Background: People experiencing homelessness are often marginalized and are known to face barriers to accessing acceptable and respectful healthcare services. This study examines the experience of accessing hospital-based services of persons experiencing homelessness or vulnerable housing in southeastern Ontario and considers the potential of Equity-Oriented Health Care (EOHC) as an approach to improving care. Methods: Focus groups and in-depth interviews with people with lived experience of homelessness (n=31), as well as in-depth interviews of health and social service provider key informants (n=10) were combined with qualitative data from a survey of health and social service providers (n=136). Interview transcripts and written survey responses were analyzed using directed content analysis to examine experiences of people with lived experience of homelessness within the healthcare system. Results: Healthcare services were experienced as stigmatizing and shaming particularly for patients with concurrent substance use. These negative experiences could lead to avoidance or abandonment of care. Despite supposed universality, participants felt that the healthcare system was not accountable to them or to other equity-seeking populations. Participants identified a system that was inflexible, designed for a perceived middle-class population, and that failed to take into account the needs and realities of equity-seeking groups. Finally, participants did identify positive healthcare interactions, highlighting the importance of care delivered with dignity, trust, and compassion. Conclusions: The experiences of healthcare services among the homeless and vulnerably housed do not meet the standards of universally accessible patient-centered care. 
EOHC could provide a framework for changes to the healthcare system, creating a system that is more trauma-informed, equity-enhancing, and accessible to people experiencing homelessness, thus limiting identified barriers and negative experiences of care.
Background The World Health Organization defines suicidal behaviour as "a range of behaviours that include thinking about suicide (suicidal ideation), planning for suicide, attempting suicide and suicide itself" [1]. Suicidal ideation, suicidal planning, and attempted suicide represent important risk factors for suicide mortality in the general population [1][2][3]. Globally, 703,000 suicidal deaths are recorded annually, representing more than one in every 100 deaths in 2019 [4]. Among young persons aged 15-19 years, suicide was the third leading cause of death among girls and the fourth leading cause of death in boys, after tuberculosis, as at the end of 2019 [4]. About 79% of the world's suicides are recorded in low- and middle-income countries (LMICs), with the Africa region recording the highest rate (11.2 per 100,000 people) [4,5]. However, national-level representative data on suicidal ideation, planning, and attempts, and their associated factors, remain insufficient to support research-informed intervention and prevention efforts and programmes, particularly in LMICs, including those in Africa [1]. Evidence from a recent global systematic review and meta-analysis suggests varying 12-month prevalence estimates of suicidal behaviour among adolescents: suicide ideation = 14.2%, suicidal planning = 7.5%, and suicide attempt = 4.5% [6]. Comparatively, pooled regional rates drawing on published data from the World Health Organization Global School-based Student Health Survey (WHO-GSHS) indicate that the 12-month prevalence estimates of adolescent suicidal behaviour in LMICs are higher in the African region: ideation = 21%, suicidal plan = 23.7%, and attempt = 16.3% [7][8][9]. Notably, the 12-month median prevalence estimate of adolescent self-harm in Southern sub-Saharan Africa (where Namibia is located) is comparable to the overall estimate across the sub-Saharan region (16.9%, IQR = 11.5-25.5%) [10]. 
Factors associated with suicidal behaviour among adolescents in LMICs, including those in Africa, have been found to exist at the individual level [7,8,10,11], family level [8,10,12], interpersonal level [8,10,12], school context [8,12], and the broader community level, e.g., community violence or war, and poverty [10,[12][13][14]. The multi-layered and multi-contextual nature of the factors associated with adolescent suicidal behaviour can be understood within the socio-ecological model. The socio-ecological model provides a helpful framework for understanding and preventing suicidal behaviour in that the model considers an integration of population-specific and general risk and protective factors [1,15,16]. Within Southern sub-Saharan Africa, South Africa remains the only country with a relatively large body of data on adolescent self-harm and suicidal behaviour [10,[17][18][19]. Hence, to contribute evidence towards addressing the dearth of research on adolescent suicidal behaviour in Southern sub-Saharan African countries, several scholars have explored and published evidence from the WHO-GSHS data accessed from some countries in the sub-region: Botswana [20], Eswatini [14], Malawi [21], Mauritius [22], Mozambique [23,24], Zambia [25], and Zimbabwe [26]. Thus far, only the published study by Peltzer and Pengpid, drawing on the Namibian WHO-GSHS data, reports evidence on suicide attempt [27]. In other words, no peer-reviewed publication is available on the secondary analysis of the country-specific prevalence and correlates of suicidal ideation, planning, and suicide attempt in the nationally representative school-going adolescent sample of the Namibian WHO-GSHS data. It must be noted that the 2013 Namibian WHO-GSHS data is the latest available dataset from the country's participation in the survey. Namibia has a youthful population, as persons aged 17 years and younger constitute 43% of the general population [28]. 
The mean years of schooling in Namibia is 7.2 years [29]. Primary to middle school education, which stretches from grades 1 through 7, is compulsory for all children between the ages of 6 and 16 years, while secondary education remains optional [30]. Namibia is an English-speaking Southern sub-Saharan African country classified as an upper-middle-income country [31], with a Medium Human Development Index rank of 139 [29]. In 2019, the country's age-standardised, all-ages suicide rate was 13.5 per 100,000 people, higher than both the Africa rate and the global average [4]. Thus, the present study analyses the 2013 Namibian WHO-GSHS data on non-fatal suicidal behaviours to estimate the prevalence of suicidal ideation, planning, and attempt among school-going adolescents and to examine their associated factors. --- Methods --- Context and data source This study drew on data from the 2013 Namibia WHO-GSHS. The survey was conducted by WHO and the Centers for Disease Control and Prevention of the United States in collaboration with the Government of Namibia [32]. The data is publicly available and has been accessed freely for the current study from the WHO website [32]. --- Study design and sampling The WHO-GSHS is a nationally representative cross-sectional survey conducted to assess behavioural health factors amongst school-going adolescents in participating WHO member countries [33]. Data was collected using a validated self-administered questionnaire consisting of items assessing a wide range of personal lifestyle, family relationship, and school environment variables [33]. Prior to the data collection, ethical approval was sought from relevant authorities, as well as consent from parents/guardians of adolescents. The 2013 Namibia GSHS targeted students in grades 7-12, which are typically attended by students aged 13-17 years. A two-stage sampling approach was used for data collection. In the first stage, a cluster of schools was randomly selected from a list of all schools in Namibia using a probability-proportionate-to-enrollment-size method. 
This process resulted in a list of eligible and nationally representative schools. The second stage involved the random sampling of classrooms from these eligible schools, making all students in the selected classes eligible to participate. Students who volunteered to participate were then handed an anonymised, computer-scannable questionnaire to complete. A total of 4531 students aged 13-18 years participated in the Namibia GSHS. The school response rate was 100%, that of students was 89%, and the overall response rate was 89%. The reporting of the current study was guided by the recommendations of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [34]. --- Measures --- Outcome variables Three domains of suicidal behaviour, namely suicide ideation, planning and attempt, were considered as the outcome variables for this study. The three domains were each assessed using a single-item question. Suicide ideation was measured with, "during the past 12 months, did you ever seriously consider attempting suicide?", suicide planning with, "during the past 12 months, did you make a plan about how you would attempt suicide?", and attempt with, "during the past 12 months, how many times did you actually attempt suicide?". The responses for suicide ideation and plan were "Yes = 1" and "No = 0", while that of attempt required students to indicate the number of times with "0", "1", "2 or 3", "4 or 5", and "6 or more times". Guided by the GSHS recoding procedures [33], the suicide attempt variable was treated as a binary variable, assigning "0" to no attempt and "1" to one or more attempts. For the purposes of examining factors responsible for repeated suicide attempts, the attempt variable was subsequently reclassified into three categories, namely no attempt, one-time attempt, and repeated attempts. 
Therefore, responses of adolescents who chose "1" for suicide attempt were recoded into the one-time attempt category and those of adolescents who chose "2 and above" into the repeated attempts category. Coding of variables included in this study is presented as supplementary material. --- Exposure variables Participants' demographic characteristics, mental health and lifestyle factors, interpersonal factors, school-level factors, as well as family-level factors were selected as exposure variables in this study. Besides performing bivariable analyses to assess the relationships between the exposure and outcome variables, the selection of these exposure variables was based on evidence from previous studies within sub-Saharan Africa drawing data from the WHO-GSHS [14,27,[35][36][37]. Examples of the specific factors include age, school grade, gender, cannabis use, loneliness, anxiety, and alcohol use. Complete details of the variables, their groupings, survey questions and coding can be found in supplementary material. --- Statistical analyses Reporting of the statistical analyses plan in this study is informed by the Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines [38]. We limited the analysis to participants aged 12 to 17 years for two reasons: first, most of the participants were within this age range, and second, data on the precise ages below and above this age range were not available. Age and gender distribution across the analytical sample is presented as supplementary material. Statistical analyses involving univariate, bivariate and multivariate testing were conducted in Stata 14.0 statistical software. Prior to conducting these tests, the clusters, stratification, and sample weights characteristic of data collected with complex designs were adjusted for, to account for possible analytical errors and make appropriate inferences [39]. 
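The attempt-variable recoding described in the Measures section can be sketched in a few lines; the response labels follow the questionnaire wording quoted above, while the function names and category labels are ours.

```python
# GSHS raw response options for "how many times did you actually attempt suicide?"
RAW_ATTEMPT_RESPONSES = ["0", "1", "2 or 3", "4 or 5", "6 or more times"]

def attempt_binary(resp):
    """Binary recode: 0 = no attempt, 1 = one or more attempts."""
    return 0 if resp == "0" else 1

def attempt_three_level(resp):
    """Three-level recode: no attempt, one-time attempt, or repeated attempts."""
    if resp == "0":
        return "no attempt"
    if resp == "1":
        return "one-time attempt"
    return "repeated attempts"  # "2 or 3", "4 or 5", "6 or more times"

print([attempt_binary(r) for r in RAW_ATTEMPT_RESPONSES])
# [0, 1, 1, 1, 1]
print([attempt_three_level(r) for r in RAW_ATTEMPT_RESPONSES])
# ['no attempt', 'one-time attempt', 'repeated attempts', 'repeated attempts', 'repeated attempts']
```

In practice these maps would be applied column-wise to the survey data frame before modelling; the sketch only shows the category logic.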
Following this, univariate analysis computing frequencies, proportions, and relevant 95% confidence intervals (CIs) of all study variables was conducted. Chi-square tests of independence were performed to examine the associations between the exposure and outcome variables. Multivariate analyses with logistic regression and multinomial logistic regression were then conducted in two steps. Step 1: logistic regression was performed to assess the sociodemographic factors and exposure variables associated with suicidal ideation, planning, and attempt. Step 2: we performed a multinomial logistic model assessing the factors associated with repeated suicide attempt. Given the presence of sparse data, only exposure variables that reached statistical significance in the binary logistic regression models were included in the multinomial logistic model. Age, gender, and school grade were included as covariates in the multivariable analyses. Statistical significance was set at an alpha of 0.05. Adjusted odds ratios are reported for each logistic model and adjusted relative risk ratios for the multinomial logistic model. Given the arbitrary nature of the alpha, the significance of each analysis was also determined based on CIs and their clinical importance [40,41]. Missing responses or incomplete data on key variables were excluded from the final analysis. --- Results --- Participant characteristics A total of 3152 in-school adolescents comprising 1380 males and 1772 females were included as the analytical sample in the secondary analysis for this study. The average age of these adolescents was 15.1 years, and most of them were in grades 10-12. Regarding mental health and lifestyle factors, approximately 14% and 13% of adolescents had experienced loneliness and anxiety, respectively, and about 36% had spent 3 or more hours a day engaged in leisure-time or sedentary behaviour. Additionally, about 29% and 5% had used alcohol and cannabis in the past month, respectively.
--- Prevalence estimates --- Bivariate associations As shown in Table 2, most of the exposure variables were significantly related to suicidal behaviour. Of the two demographic factors included in the study, only school grade was significantly related to suicidal behaviour. A high proportion of students in grades 6-9 reported suicide ideation, planning, and attempt during the previous 12 months. Among the mental health and lifestyle factors, loneliness, anxiety, alcohol use, and cannabis use were all significantly related to suicidal behaviour. Leisure-time sedentary behaviour was significantly related to suicidal ideation only. Among the interpersonal factors, being sexually active and involvement in a physical fight were related to suicidal behaviour. Having a close friend was significantly related to suicide planning. Concerning school-level factors, a significant proportion of students who were physically attacked, were truant, or were victims of bullying reported experiences of suicidal behaviour. Some significant associations were also found between the family-related factors and suicidal behaviour. Notably, food insecurity and --- Multivariate associations Tables 3 and 4 show the findings of the adjusted logistic regression models and multinomial models, respectively. --- Logistic regression As presented in Table 3, the most important exposure factors contributing to increased odds across all domains of suicidal behaviour included grade in school, loneliness, physical attack, and parental intrusion of privacy. Other variables, including age, anxiety, leisure-time sedentary behaviour, cannabis use, physical fight, bullying victimisation, and parental monitoring, were related to at least one form of suicidal behaviour. For instance, sedentary behaviour was associated with increased odds of suicidal ideation only, while parental monitoring was related to reduced odds of attempted suicide.
Interestingly, although gender, alcohol use, being sexually active, number of close friends, truancy, peer support, parental understanding, and food insecurity were associated with each of the suicidal behaviour outcomes, none of these associations reached the desired threshold of statistical significance. --- Multinomial logistic regression As shown in --- Discussion This cross-sectional study sought to describe the 12-month prevalence estimates of suicidal behaviour and associated factors among school-going adolescents aged 12-17 years in Namibia, drawing on the 2013 Namibian WHO-GSHS. In summary, the current study has two key findings. First, comparable estimates of suicidal behaviour were found between boys and girls, with about 2 in 10 adolescents reporting suicidal ideation, planning, or attempt in the previous 12 months. Approximately 1 in 20 students reported repeated suicide attempts during the previous 12 months. Second, physical attack victimisation, bullying victimisation, loneliness, and parental intrusion of privacy were associated with increased likelihood of suicidal ideation, planning, one-time suicide attempt, and repeated attempted suicide. Adolescents in higher grades, compared to those in lower grades, had reduced relative risk of reporting one-time or repeated suicide attempts. Cannabis use was associated with increased relative risk of repeated attempted suicide. The 12-month prevalence estimates of suicidal behaviour found in the current study are comparable to those found generally among school-going adolescents within the African region, where estimates of suicidal ideation, planning, and attempt range from 20.1 to 29% [7][8][9].
Beyond the similarity of the estimates of this study to those reported earlier from other Southern African countries (e.g., Eswatini, Malawi, Mozambique, and South Africa) [14,19,21,23,24], evidence of comparable estimates between boys and girls has also been reported from other Southern African countries and sub-Saharan Africa in general [10,21]. The cross-national similarity of the estimates is to be expected, considering that the data are drawn from the WHO-GSHS conducted in the respective countries using the same measures and definitions of included variables. The lack of significant gender differences in the estimates of suicidal behaviour could point to the possibility that the factors presenting as risks are comparably difficult for both school-going boys and girls in Namibia. This finding could also support the emerging evidence that self-harm and suicidal behaviour are not neatly differentiated between boys and girls within sub-Saharan Africa [10]. Taken together, this evidence underscores the importance of universal intervention and prevention efforts focused on suicidal behaviour in both school-going adolescent boys and girls in Namibia. We found physical attack victimisation, bullying victimisation, loneliness, and parental intrusion of privacy to be significantly associated with increased likelihood of suicidal ideation, planning, one-time suicide attempt, and repeated attempted suicide. Cannabis use was associated with increased relative risk of attempted suicide. Global and regional systematic reviews and meta-analyses [7,8,10,12,42] and recent primary studies drawing data mainly from the WHO-GSHS [14,21,23,36,37,43,44] have also identified these factors as critical in school-going adolescent suicidal tendencies and behaviours.
Further, through the lens of the socio-ecological model, our findings support the multi-factorial, multi-layered, and multi-contextual nature of the factors associated with suicidal behaviour among adolescents [15,16]. In broader terms, the key factors associated with suicidal behaviour identified among adolescents in the current study support evidence in the literature that exposure to interpersonal social adversities and relational difficulties occurring in the family, school, or peer relationships contributes to suicidal behaviour among young people [10,45,46]. Besides being a common phenomenon, bullying victimisation has a strong positive association with involvement in physical fighting and other interpersonal adversities among school-going adolescents in Namibia [47,48]. Considered an antecedent of suicidal thoughts and behaviour, social adversities result in internalising problems, often leading to lowered self-esteem, self-blame, and self-dislike, which in turn heighten vulnerability to self-harming thoughts and behaviours in adolescents [49]. Recent literature is replete with evidence that in both high-income countries and LMICs, lifestyle factors, particularly health risk behaviours and (often untreated) mental health problems, have a strong association with suicidal behaviour among adolescents [11,42,44,[50][51][52]. Although cannabis possession and use are illegal in Namibia, national-level estimates suggest that 6.6% of school-going adolescent boys and 4% of girls use cannabis. Our finding that school-going adolescents in higher grades, compared to those in lower grades, have reduced relative risk of reporting one-time or repeated suicide attempts supports earlier evidence from South Africa [59], but is inconsistent with a recent finding from Ghana, where no significant association was observed between school grade and suicidal behaviour [60].
Whereas explanations for this school-grade difference are not readily clear from the Southern African context, in Namibia, curriculum-based functional mental health literacy could perhaps be suggested. Among other aims, the Namibian school curriculum seeks "to foster the highest moral, ethical and spiritual values such as integrity, responsibility, equality, and reverence for life" [30]. Perhaps the value of 'reverence for life' (which essentially proscribes and eschews suicidal thoughts and actions [61]) has been more firmly consolidated in students in upper school grades than in those in lower grades. Beyond this speculation, further studies are needed to understand this differentiation of suicidal thoughts and behaviours by school grade, in Namibia but also within the general sub-Saharan African context. While the factors showing significant associations with suicidal behaviour have been identified, it is also imperative to comment on the variables reported to be important in the literature but which showed no significant associations with the outcomes in the current study. Interestingly, although alcohol use, being sexually active, number of close friends, truancy, peer support, parental understanding, and food insecurity were associated with each of the suicidal behaviour outcomes, none of these associations reached the desired threshold of statistical significance. While we suspect sparse data to account for this lack of statistical significance, we believe future studies involving relatively large numbers of responses to each of these data items will contribute to clarifying the statistical and clinical significance of their associations with school-going adolescent suicidal behaviour in Namibia. This study has identified the key factors associated with adolescent suicidal behaviours as existing mainly at the individual level and within the school, family, and community contexts.
The implication of the multi-ecological nature of the key evidence for practice is that intervention and prevention efforts must be designed with a multi-sectoral and multi-layered orientation. For example, while the initiation of anti-alcohol, anti-substance-use, and anti-bullying policies is needed to enhance the school social climate, community-level training in supportive parenting can be designed for families with adolescents. Similarly, while parents and significant others living with adolescents need to be observant regarding the identification of warning signs of potential adolescent suicidal behaviours, the Namibian Ministry of Education could consider including mental health literacy and help-seeking lessons in the high school curriculum; this would contribute to improving help-seeking and self-care behaviours among school-going adolescents at risk of self-harm, suicidal tendencies, and other emotional crises, including loneliness, anxiety, and depression. --- Strengths and Limitations A critical significance of this study is that it contributes to addressing the dearth of research on suicidal behaviour among adolescents in Namibia. Data on adolescent mental health are still insufficient to inform policy, intervention efforts, and prevention programmes in Namibia [62]. In particular, data on self-harm and suicidal behaviours among adolescents in Namibia remain scarce [10,63]. Additionally, this study contributes broadly to advancing our knowledge and understanding of the prevalence and associated factors of suicidal behaviour among in-school adolescents in Namibia, in that it draws on a relatively large dataset from a nationally representative sample. Beyond these strengths, there are noteworthy limitations. The findings of the study should be generalised with caution, as the key evidence may not necessarily apply to out-of-school adolescents.
While the study sample excludes students who were absent on the day of the survey, evidence suggests that the average annual dropout rate in Namibia ranges between 3 and 10.4% [64]. The one-time cross-sectional survey design used for the WHO-GSHS implies that the outcome and exposure variables were measured at the same time point, making it impossible to identify sequence, temporal links, or causal relationships between the exposure and outcome variables. Thus, consistent with recommendations by recent studies from other countries within the continent [10,13,65], future studies using more robust approaches, including longitudinal designs and carefully designed qualitative studies, are needed for a contextually nuanced understanding of self-harm and suicidal behaviours among adolescents in Namibia and Africa generally. Notably, in the current study, it is not clear why the estimates of suicide planning and attempt are higher than those of suicidal ideation. Similar evidence has been reported from other sub-Saharan African countries drawing on the WHO-GSHS data (e.g., Benin, Ghana, Liberia, Malawi, and Sierra Leone) [21,37,[66][67][68]. However, this evidence is unconventional and counterintuitive, and is not in keeping with the suicide process or pathway model [69,70], which suggests that, typically, estimates of ideation should be highest, followed by estimates of planning, then estimates of attempt. Perhaps the use of single-item measures in the WHO-GSHS to assess these outcomes could account for this unconventional finding. As cautioned elsewhere, findings based on single-item measures of suicidal ideation, planning, and attempt must be interpreted cautiously, as such measures typically result in misclassification of suicidal behaviours and inflated estimates [71]. Specifically, we also suspect that the lower estimate of suicidal ideation, relative to suicide attempt, in the current study may be due to impulsivity.
There is evidence from high-income countries to suggest that impulsivity may result in the onset of suicide attempts among adolescents, even in the absence of prior suicidal ideation [72,73]. --- Conclusion The evidence of this study adds to the literature on non-fatal suicidal behaviour among school-going adolescents in Africa and underscores the marked estimates of suicide ideation, planning, and attempt among school-going adolescents in Namibia. The relatively high prevalence estimates and multi-layered correlates contribute to our understanding of adolescent suicide in Namibia. The evidence highlights the importance of paying more attention to addressing the mental health needs of school-going adolescents in Namibia. While the current study suggests that further research is warranted to explicate the pathways to adolescent suicide in Namibia, identifying and understanding the correlates of adolescent suicidal ideation and non-fatal suicidal behaviours are useful for intervention and prevention programmes. --- Data Availability The datasets used and/or analysed during the current study are freely available from the WHO website: https://extranet.who.int/ncdsmicrodata/index.php/catalog/478. The 2013 Namibia GSHS questionnaire is also available freely on the WHO's website: https://extranet.who.int/ncdsmicrodata/index.php/catalog/478#metadata-questionnaires. --- Abbreviations --- Supplementary Material 1 Authors' contributions ENBQ, NEYD, and KOA conceived, designed, and organised the study. ENBQ and NEYD curated the data and performed the statistical analysis; ENBQ and KOA contributed to the interpretation of the data. ENBQ and NEYD drafted the manuscript, and KOA critiqued the manuscript for important intellectual content. All authors read and approved the final version of the manuscript. ENBQ serves as guarantor for the contents of this paper.
--- Funding The authors received no financial support or specific grant from any funding agency in the public, commercial, or not-for-profit sectors for the research, authorship, and/or publication of this article.
Background While adolescent suicidal behaviour (ideation, planning, and attempt) remains a global public health concern, available country-specific evidence on the phenomenon from African countries is relatively scarce. The present study was conducted to estimate the 12-month prevalence and describe some of the associated factors of suicidal behaviour among school-going adolescents aged 12-17 years in Namibia. Methods Participants (n = 4531) answered a self-administered anonymous questionnaire developed and validated for the nationally representative Namibia World Health Organization Global School-based Student Health Survey conducted in 2013. We applied univariate, bivariable, and multivariable statistical approaches to the data. Of the 3152 adolescents in the analytical sample, 20.2% (95% confidence interval [CI]: 18.3-22.2%) reported suicidal ideation, 25.2% (95% CI: 22.3-28.4%) engaged in suicide planning, and 24.5% (95% CI: 20.9-28.6%) attempted suicide during the previous 12 months. Of those who attempted suicide, 14.6% (95% CI: 12.5-16.9%) reported a one-time suicide attempt, and 9.9% (95% CI: 8.1-12.1%) attempted suicide at least twice in the previous 12 months. The final adjusted multivariable models showed physical attack victimisation, bullying victimisation, loneliness, and parental intrusion of privacy as key factors associated with increased likelihood of suicidal ideation, planning, one-time suicide attempt, and repeated attempted suicide. Cannabis use showed the strongest association with increased relative risk of repeated attempted suicide. The evidence highlights the importance of paying more attention to addressing the mental health needs (including those related to psychological and social wellness) of school-going adolescents in Namibia.
While the current study suggests that further research is warranted to explicate the pathways to adolescent suicide in Namibia, identifying and understanding the correlates (at the individual, family, interpersonal, school, and broader community levels) of adolescent suicidal ideation and non-fatal suicidal behaviours are useful for intervention and prevention programmes.
Background Preventive behaviors are critical to managing epidemic infectious disease, yet these behaviors have been far from universal in response to COVID-19. Factors that affect preventive behavior have been widely studied [1][2][3][4][5][6][7][8], including with regard to mental health [9][10][11] and perceived susceptibility [12]. For example, beliefs about the consequences of preventive behaviors such as social distancing and face mask wearing were significant predictors of engaging in such activities [3]. However, the effects of social and cultural environments on individuals' susceptibility to COVID-19 have seldom been investigated. In fact, social support could play a significant role in shaping individual perceptions of susceptibility. One important feature of the COVID-19 pandemic has been its impact on the social environment. Public policies seeking to limit the spread of COVID-19, including social distancing and quarantines, have increased stress from social isolation. This has highlighted the importance of social support [1,9], which relates to help in caregiving and situational coping. However, how different circumstances shape the adoption of preventive behaviors against COVID-19 remains to be assessed. In this paper, we compare the adoption of preventive behaviors in two countries whose COVID-19 experiences have been very different: Italy and South Korea. Both are similarly sized; Italy has 61 million inhabitants and South Korea 51 million [13]. However, as of June 7, 2021, Italy had recorded 4,232,428 confirmed cases and 126,523 deaths, while South Korea had recorded 144,637 cases with 1974 deaths [14]. How prevalence and prevention are linked remains an open question. Preventive behaviors are key to reducing the spread and impact of COVID-19. Since the World Health Organization declared a pandemic on March 11, 2020, confirmed cases and deaths have remained high, with new variants such as Omicron and BA.5.
To encourage the public to keep practicing preventive measures, it is important to note that people respond to health threats according to their conception of how susceptible they are [15,16] and how severe the damage would be [17,18]. Susceptibility and severity are two key factors in preventive behaviors, according to the Health Belief Model [17,18] and the Theory of Planned Behavior [19]. Comparing Italy and South Korea provides an opportunity to examine the role of susceptibility in countries with widely differing severity, as well as allowing study of susceptibility in relation to the social and cultural environment. Italy demonstrates high COVID-19 severity, in contrast to South Korea with its small number of cases and deaths. As to susceptibility, individuals' perceived susceptibility is a key determinant of engagement in preventive behaviors. For instance, US residents who underestimated the risk of contracting COVID-19 showed fewer preventive behaviors [20]. Another important feature of the COVID-19 pandemic has been the importance of social support for coping [21]. Studies on the effects of social support suggest that it could play a role in responding to a health threat [22][23][24]. Social support could have differing effects, and this study seeks to examine the varying effects of social support on individual COVID-19 judgements. Paradoxically, government-imposed lockdowns, quarantines, limits on gatherings, and closures of public places hindered the efficacy of social support when it was needed most [25]. In the present study, a formative measure incorporating four aspects of social support, taken from the perspective of the prospective recipient, was developed, tapping emotional support, relational support, private support, and instrumental support [22,[26][27][28].
Social support from significant others is associated with better health [23], with enhanced psychological well-being in both Western and Asian societies [29], and with improved physical activity and quality of life in Korea [22,30,31]. Thus, in the present research, a positive main effect of social support on the adoption of preventive behaviors for COVID-19 is expected. --- Comparison of Italy and South Korea We chose to study Italy and South Korea, two countries with cultures exhibiting strong social bonds. In Italy, attachment to the family is a key value, more constant than any other cultural value [32]. Family gatherings are frequent and provide both emotional and economic support [33]. The value of familism is the cherished jewel of Italian identity. For critics, however, familism resembles egotism, in that it is only extended to one's next of kin. This can often prevent identification with larger entities in society, such as one's community, region, class, or nation, and can prevent or inhibit organized social action on these levels [32]. Family solidarity and relationships are also an essential part of the culture in South Korea, although family culture in the country has changed in response to rapid modernization [34]. Social support from family and friends is manifest in the long-term commitment to the in-group, fostering strong relationships and loyalty [35]. In Korea, however, high levels of collectivism, social bonds, and belongingness beyond the family play key cultural roles [36][37][38]. Individuals from different cultures show different levels of willingness to seek social support, as well as different perceived benefits from social support from those close to them [34,39]. Individuals in Asian cultures are generally less willing to seek explicit social support for coping with their stress than those in European cultures [40], and they are less aided by social support [39].
Given the importance of social ties in Italy and South Korea, social support should impact individual responses to the COVID-19 pandemic. Previous studies have found that social support has acted as a coping mechanism in crisis situations as well as a factor in resilience following disaster [41,42]. For example, higher levels of social support are correlated with improved health behaviors [43] and sleep quality under quarantine during the COVID-19 pandemic [44]. Moreover, social support is negatively associated with individuals' negative emotions, such as anxiety, depression, and loneliness during COVID-19 [11,41,43,44]. While expectations for providing social support are more broadly based in South Korea, people in that society are less willing to seek such support; so no expectation is advanced regarding the relative impact of social support on preventive behavior in the two countries. --- Risk perception regarding COVID-19 Severity [45,46] and susceptibility affect perceived risk [47], and they can be criteria for decisions on preventive measures [15,16,48]. Generally, according to the Health Belief Model, perceived susceptibility and perceived severity are the primary factors that affect proactive health behavior. Demographic and psychosocial factors such as age, social support, personality, self-efficacy, knowledge, and education are also predictive of health behaviors [17,49]. The appraisal of severity is a collective judgement [50], but the appraisal of susceptibility compares the ego with all relevant others, so many people will consider themselves not to conform to the general trend. The perception of severity leads to assessments of humankind and its inclination to yield to a particular virus or any other disease agent. The perception of susceptibility involves the same assessment, as well as a number of self-assessments that are likely to differ from the assessments that others have formed of the subject [50].
These self-assessments tend to be both idiosyncratic and better protected in the subject's mind, and are therefore not as open to social influence as other factors. Moreover, the consideration of susceptibility can be subject to the optimistic bias effect [51]. People who think that they may easily be infected and fear the consequences will show more preventive behaviors than those who think they are safe or that the suffering would be tolerable. This should be tested in relation to the COVID-19 outbreak. More importantly, given the stark differences in confirmed cases and mortality rates between Italy and South Korea, we expect different levels of perceived social support and for it to play different roles in the two societies. The devastation of Italy and its extremely high death toll lead us to expect high levels of perceived severity as a collective judgment. However, individual assessments of the likelihood of getting infected oneself vary. Because severity is much higher in Italy, it may be more salient and have a greater impact in that country. We investigate the pattern of susceptibility and its impact on preventive behaviors in Italy versus South Korea. Of particular interest is whether perceived social support moderates the effect of individual susceptibility on the adoption of preventive behaviors toward COVID-19. --- Methods We conducted cross-sectional surveys in South Korea and Italy in late fall 2020. The sampling frame in both countries was adults aged 50 years or older. As risk perception is closely associated with demographic factors, especially age [52,53], we decided to focus on the older population, who are objectively at greater risk of serious infection from COVID-19. In this way, differences in the patterns and logistics of social support [22] can be better controlled as well.
In South Korea, participants were contacted in November 2020 by a leading survey firm with access to a representative, country-wide panel using random sampling. In Italy, participants were recruited through snowball sampling from November to December 2020. The ethics committee at Ewha Womans University in Seoul confirmed the study was outside the committee's jurisdiction. The participants were 50 to 89 years old in Korea and 50 to 83 years old in Italy. In both countries, female respondents outnumbered males. --- Key measurements Social support, as noted earlier, has multiple definitions leading to varied operationalizations [26]. Overall, social support can be classified into emotional and instrumental support. Emotional support describes the impact a person has on his/her relatives and friends by speaking with them and listening to them. This relates to advice seeking, and it occurs in relation to health and in many social environments [22,54]. Instrumental support focuses on the tangible, particularly physical assistance for those who are bed-ridden and may need help eating, maintaining hygiene, and receiving medication, as well as services such as transportation [27]. Finally, support can also tap close social ties. An individual would usually not prefer to have his/her health status discussed in public [55][56][57]. Health is inherently a private matter, and the inclination to keep health matters private limits the available social influence and social support to close ties such as one's family [58]. In this study we used an existing six-item social support scale [27] in the context of the COVID-19 crisis. The scale used the format "How often is each of the following kinds of support available to you if you need it?" with a 5-point response scale.
The six items were: "Someone to help you if you were confined to bed;" "Someone to take you to the doctor if you needed it;" "Someone to share your most private worries and fears with;" "Someone to turn to for suggestions about how to deal with a personal problem;" "Someone to do something enjoyable with;" and "Someone to love and make you feel wanted." Preventive behaviors with regard to COVID-19 were measured using 16 items rated on a 5-point Likert scale, modified from extant studies [58,59]: "Avoided travel to novel coronavirus-infected areas," "Washed hands with soap, hand sterilizer, and water," "Used disinfectants," "Avoided touching your eyes, nose, and mouth with unwashed hands," "Avoided eating outside of the home," "Stayed home when you were sick," "Covered your cough or sneeze with a tissue, then threw the tissue in the trash," "Avoided close contact with people who are sick," "Wore a face mask," "Avoided public transport," "Avoided social events," "Avoided going out in general," "Avoided going to hospital or other healthcare settings," "Avoided crowded places," "Avoided contact with people who have a fever or respiratory symptoms," and "Intend to comply with the government's recommended actions." Perceived susceptibility was measured with four items rated on the same 5-point Likert scale as preventive behaviors. The items were: "The novel coronavirus will spread widely in South Korea/Italy," "I am more likely to get the novel coronavirus than other people," "I believe I can protect myself against the novel coronavirus," and "I believe I can protect myself against the novel coronavirus better than other people." The perceived susceptibility scale was modified from those used in previous studies [60][61][62]. Perceived severity was measured using two items rated on the same 5-point Likert scale as preventive behaviors.
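Scale scores in survey studies of this kind are typically computed as item means, with internal consistency summarized by Cronbach's alpha. A minimal sketch for the six social-support items, on simulated responses; whether any items were reverse-coded in the actual study is not stated here, so that step is omitted:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of responses."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
# Hypothetical responses to the six social-support items (1-5 Likert scale)
support = rng.integers(1, 6, size=(200, 6)).astype(float)

support_score = support.mean(axis=1)  # per-respondent scale score
alpha = cronbach_alpha(support)
```

Because the simulated items are independent, this alpha will be near zero; real scale items correlate, which is what pushes alpha toward 1.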
The items were: "My health will be severely damaged if I contract the novel coronavirus," and "I think the novel coronavirus is more severe than the flu." The perceived severity scale was modified from those used in previous studies [60][61][62]. --- Analysis plan We used a hierarchical regression approach. The first block included demographic and health factors. The second block added severity and susceptibility. The third block then included social support, followed by the country factor in the fourth block. The fifth and sixth blocks included two-way interaction terms and a three-way interaction term, respectively. That is, age, gender, education level, economic status, health condition, diagnosis of family members, and diagnosis of friends were used as covariates. Diagnosis of family members was checked with the question, "Has anyone in your household been diagnosed with COVID-19?" and diagnosis of friends was measured with the question, "Has anyone else you know been diagnosed with COVID-19?" --- Results Table 1 presents the descriptive statistics for the control variables used in the analyses. Koreans reported higher levels of economic status and health condition than Italians. Table 2 shows bivariate correlations among key variables. As predicted, positive relationships were found between perceived susceptibility and preventive behaviors and between perceived severity and preventive behaviors. Those with high levels of perceived susceptibility and severity regarding COVID-19 were more likely to engage in preventive behaviors. Social support was positively associated with preventive behaviors. Overall, Italians showed higher levels of perceived susceptibility (t = -11.65, p < .001). Koreans showed higher levels of perceived severity than Italians (t = 6.60, p < .001).
It is noteworthy that, compared to the similar range of standard deviations for susceptibility, the variation in severity appeared larger for Italy, indicating a wider range of scores. --- Perceived social support during the COVID-19 pandemic: Italy vs. South Korea We examined whether Italians and Koreans differed in terms of perceived social support. Overall, perceived social support was significantly different between Italians and South Koreans (t = 10.33, p < .001, effect size d = 0.54). South Koreans felt significantly higher levels of perceived social support than their Italian counterparts. This held true for all six items: "Someone to help you if you were confined to bed" (t = 10.78, p < .001, d = 0.57); "Someone to take you to the doctor if you needed it" (t = 9.41, p < .001, d = 0.50); "Someone to share your most private worries and fears with" (t = 5.85, p < .001, d = 0.31); "Someone to turn to for suggestions about how to deal with a personal problem" (t = 3.78, p < .001, d = 0.20); "Someone to do something enjoyable with" (t = 11.91, p < .001, d = 0.63); and "Someone to love and make you feel wanted" (t = 10.56, p < .001, d = 0.56). --- Social support on preventive behaviors in relation to susceptibility and country To examine the moderating role of social support, hierarchical regressions were run. In the first block, age, gender, education level, economic status, health condition, diagnosis of family, and diagnosis of friends were included, which explained about 17% of the total variance. When the second block added severity and susceptibility, R² increased from .174 to .280. The third block added social support, which accounted for 28.8% of the total variance. The addition of country in block 4 increased R² from .288 to .350. Two-way interaction terms accounted for 35.4% of the total variance.
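The group comparisons above are independent-samples t-tests reported with Cohen's d effect sizes. A minimal, self-contained sketch of both statistics follows; the scores are made up for illustration (they are not the study's data), the t-test uses the Welch correction, and d uses a pooled standard deviation, which are common but assumed choices.

```python
import math

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Hypothetical social-support scores for two small groups (illustrative only)
korea = [4.1, 3.9, 4.4, 4.0, 4.2, 3.8, 4.3, 4.1]
italy = [3.4, 3.6, 3.2, 3.7, 3.5, 3.3, 3.6, 3.4]
t, df = welch_t(korea, italy)
d = cohens_d(korea, italy)
print(f"t = {t:.2f}, df = {df:.1f}, d = {d:.2f}")
```

By Cohen's rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which is how effect sizes such as d = 0.54 in the text are usually read.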
The last block included the three-way interaction term of susceptibility, social support, and country, which together explained 35.6% of the total variance. The regression model showed no multicollinearity, with variance inflation factors ranging from 1.11 to 2.38, and no correlation problem was seen among the residuals. As the final hierarchical regression in Table 3 shows, severity, susceptibility, social support, country, a two-way interaction term between susceptibility and country, and a three-way interaction term among susceptibility, social support, and country statistically significantly predicted preventive behaviors. Among the covariates, age, gender, economic status, health condition, and diagnosis of friends were significantly associated with preventive behaviors. That is, females, older people, those of low economic status, those in better health condition, and those with friends with COVID-19 diagnoses were more likely to engage in preventive behaviors. More importantly, the results showed a significant three-way interaction effect. That is, the interaction between susceptibility and social support differed between the two countries. Figure 1 shows the two patterns in Italy and South Korea. In Italy, the positive effect of social support was stronger for those with high susceptibility, who demonstrated a significantly increased association between preventive behaviors and higher levels of social support. However, the association did not hold for persons with low susceptibility; for them, high social support did not go along with preventive behaviors: 4.47 vs. 4.49. By contrast, in South Korea, the positive effect of social support was stronger for those with low susceptibility. When individuals' perceived social support was low, no significant difference was seen in preventive behaviors between those with higher susceptibility and those with lower susceptibility.
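The block structure described in the analysis plan can be sketched as a sequence of nested OLS models whose R² is compared block by block. The example below simulates data and deliberately simplifies the design (a single covariate, dummy-coded country, and no separate two-way blocks); the variable names and coefficients are illustrative assumptions, not the study's values.

```python
import numpy as np

def r_squared(X, y):
    """OLS R^2 after prepending an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 500
age = rng.normal(60, 8, n)
susceptibility = rng.normal(3, 1, n)
support = rng.normal(3.5, 0.8, n)
country = rng.integers(0, 2, n)            # dummy coding, e.g. 0 = Italy, 1 = Korea
y = (0.02 * age + 0.3 * susceptibility + 0.2 * support
     + 0.15 * susceptibility * support * country + rng.normal(0, 1, n))

# Each block adds predictors on top of the previous one (nested models)
blocks = [
    np.column_stack([age]),                                      # block 1: covariate(s)
    np.column_stack([age, susceptibility]),                      # block 2: + risk perception
    np.column_stack([age, susceptibility, support]),             # block 3: + social support
    np.column_stack([age, susceptibility, support, country]),    # block 4: + country
    np.column_stack([age, susceptibility, support, country,
                     susceptibility * support * country]),       # block 5: + three-way term
]
r2 = [r_squared(X, y) for X in blocks]
delta = [r2[0]] + [b - a for a, b in zip(r2, r2[1:])]
for i, (r, dr) in enumerate(zip(r2, delta), 1):
    print(f"block {i}: R2 = {r:.3f}  delta R2 = {dr:.3f}")
```

Because the models are nested, R² can only grow from block to block; what matters substantively is whether each increment (delta R²) is meaningful, which mirrors how the text reports .174 → .280 → and so on. A full analysis would also enter the lower-order two-way terms before the three-way term.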
The beneficial association of social support appeared stronger for those with low susceptibility in South Korea, increasing from 4.04 to 4.23. --- Discussion This study compares Italy and South Korea and demonstrates that perceived risk is a key factor in predicting individuals' preventive behavioral decisions. Social support stood at different levels, and produced different effects, in the two countries. In Italy, a country with an individualistic culture, the positive effect of social support was stronger for those with high susceptibility, while the boosting effect of social support was greater for those with low susceptibility in South Korea, a country with a collectivistic culture. First, women, older people, those with low economic status, those in better health condition, and those with friends with a COVID-19 diagnosis were more likely to engage in preventive behaviors. This result is somewhat consistent with previous studies of COVID-19 and H1N1 preventive behaviors. Females and older people were likely to have high risk perception, and this factor was associated with preventive behaviors [62,63]. Second, we found that perceived susceptibility and severity of COVID-19 were positively associated with preventive behaviors, which is in line with previous studies [64,65]. In the COVID-19 situation, those expecting to be easily infected and those fearing serious harm were more likely to engage in preventive behaviors. Despite the unprecedented and universal impact of COVID-19 across borders, the results point to the importance of individuals' beliefs. When a health threat approaches, we ask ourselves how likely we are to catch the disease and how much we might suffer from it. South Koreans reported higher levels of perceived social support than their Italian counterparts. Italy was the epicenter of the COVID-19 outbreak in Europe, recording extremely high mortality. COVID-19 was called Italy's largest crisis since World War II [66].
Stark differences were noted between the Italian and South Korean responses to the crisis and outcomes [67]. Although both countries have a national healthcare system including universal healthcare coverage, emotional and instrumental support during the pandemic were perceived quite differently. For Italians, the lowest social support was observed for the item: "Someone to help you if you were confined to bed". Similarly, the item "Someone to take you to the doctor if you needed it" also had quite low ratings compared to the Korean data.
[Fig. 1. Three-way interaction effect among susceptibility, social support, and country]
The items for emotional support were rated slightly higher, such as "Someone to turn to for suggestions about how to deal with a personal problem" and "Someone to share your most private worries and fears with". Our numbers correspond to the dramatic reports from the situation in Lombardy in Northern Italy. The different levels of perceived social support between Italy and South Korea highlight the importance of social support in a crisis. Individuals can cope with and reduce the perceived severity of upsetting events when they have social support, bolstering their mental health [28,68]. Moreover, social support is negatively associated with anxiety and plays a role in protecting against negative emotions [28]. Thus, social support from significant others, friends, and family can improve individuals' preventive behaviors toward COVID-19, as well as support good emotional regulation [23,28,69]. However, we find that in the crisis that took place in Italy, the availability of social support was limited, which in turn reduces crisis management and deters timely response. Although individuals desperately need social support to cope with the crisis, limited access to social support only aggravates the difficult situation.
These results are in line with previous studies that found social support to be positively associated with health behaviors and mental health. More importantly, this study found a significant three-way interaction effect among perceived susceptibility, social support, and country. That is, the effect of social support in relation to susceptibility differed across the two countries. For Italians, a person who feels him/herself highly susceptible will increase preventive behaviors if there is a lot of social support. However, if there is a low level of susceptibility, additional social support does nothing, as shown in Fig. 1. On the other hand, for South Koreans, those with a low level of susceptibility perform more preventive measures than people with a high level of susceptibility if there is a lot of social support. With low levels of social support, there was little difference between those with high and low susceptibility. The effect of social support was stronger for those with low susceptibility. It is noteworthy that the boosting effect of social support was greater for those with low susceptibility in South Korea, while the beneficial effect of social support was stronger for those with high susceptibility in Italy. That is, in the case of South Korea, social support appeared to help those with low susceptibility to take more preventive behaviors in response to COVID-19. A recent study revealed that those who were unsure of their risk for COVID-19 infection were not concerned about community spread, and they did not understand the disease enough to be fearful about its effect [65]. Those who underestimate their susceptibility are the ones who need additional help, or a trigger, to adopt appropriate preventive behaviors.
Given the importance of motivating those who are less concerned about their susceptibility, the results in South Korea indicate the necessity of social support, which can take the form of both instrumental and emotional support in crises. By contrast, in Italy, social support was more meaningful only for those with high susceptibility. Those with high susceptibility are already equipped to perform preventive behaviors due to their pre-existing beliefs. Figure 1 implies possible health disparities during the crisis in Italy. Even with the additional help of social support, those with low susceptibility were not motivated to take up preventive behaviors. These results are in line with previous studies showing a widening of health disparities in crisis situations [70][71][72]. For example, during the COVID-19 pandemic, health disparities occurred by race/ethnicity, gender, and socioeconomic status, resulting in differences in screening, occurrence, treatment, and mortality [70]. Moreover, existing health disparities were widened by regional inequalities in healthcare resources and disease incidence during the COVID-19 pandemic in China [71]. Italy and South Korea differ in cultural values, which may explain the different roles of social support in the two countries. The different roles of social support between Italy and South Korea accord somewhat well with a recent study comparing Taiwan and the US [1]. A comparison of the effects of early government communication during the COVID-19 pandemic showed effects of perceived susceptibility on preventive behaviors, moderated by perceived empowerment. That is, Taiwanese respondents had higher perceived government empowerment than Americans, and perceived government empowerment increased preventive behaviors through intrapersonal empowerment. The results show that during a public health crisis, individuals depend on broader information, such as government communications, to achieve empowerment.
Thanks to relatively successful and effective government communication in Taiwan, Taiwanese respondents who had high government empowerment showed more appropriate behavioral actions. However, no such link was found for Americans, who felt less empowered. These results have theoretical and practical implications. First, extant research has largely focused on the role of cognitive factors. The current study points to the effects of social and cultural environments in relation to susceptibility to health threats. Second, the results highlight the boosting effect of social support, suggesting that increasing perceived social support can promote the preventive behaviors of people who perceive less susceptibility, at least in South Korea. However, the psychological mechanisms or cultural differences that can explain why the relationship between perceived susceptibility and social support differed between Italy and South Korea remain to be examined. The limitations of this study should be mentioned. First, it was based on a self-report survey using cross-sectional data. Sampling procedures in Italy cannot claim to have been representative of the country's population. As the COVID-19 pandemic is an ongoing phenomenon with continual growth in cases and deaths, this could affect the perceived susceptibility and severity of COVID-19 at each specific point in time. Second, the social support scale has not been validated, nor was any of the measures validated for intercultural comparison. Third, although this study adopted and modified the perceived susceptibility and perceived severity scales from previous studies [60][61][62], their levels of reliability were somewhat lower in this study. Thus, further examination of these scales is needed. Also, considering the negative impacts of some preventive measures, such as social distancing, further studies are needed to clarify the differential effects of preventive behaviors.
Finally, further study is warranted to understand the psychological mechanisms or cultural differences that can explain why the relationship between perceived susceptibility and social support differed between Italy and South Korea. --- Conclusions This study provides insights into how health communication practitioners can take account of cognitive factors, such as susceptibility and severity, as well as social and environmental factors, when developing health messages and campaigns. The results also underscore the role of the media and government in informing the public of real risks, together with behavioral guidelines, to promote preventive behaviors. Most of all, given the critical role of social support as a coping mechanism in crisis situations, societies should mull over ways to increase emotional and instrumental support. --- Competing interests No competing interests. ---
Background The COVID-19 pandemic hit Italy much harder than South Korea. As a way of explaining the different impact in the two countries, this study examines the moderating role of social support on the relationship between perceived susceptibility and preventive behaviors in the two countries. Methods Surveys were conducted in South Korea (n = 1396) and Italy (n = 487) of participants aged 50 to 89 years. Results South Koreans felt higher levels of perceived social support than their Italian counterparts. As would be expected, greater perceived susceptibility was associated with increased preventive behavior. Furthermore, a significant three-way interaction effect was found for perceived susceptibility, social support, and country. For Italians, a person who feels him/herself highly susceptible will increase preventive behaviors if there is a lot of social support. On the other hand, for South Koreans, those with a low level of susceptibility perform more preventive measures than people with a high level of susceptibility if there is a lot of social support. Conclusions This study provides insights into how cognitive factors, such as susceptibility and severity, as well as social and environmental factors can be taken into account, and how the public can be told the real risk and given behavioral guidelines when a pandemic is approaching. Given the critical role of social support as a coping mechanism in crisis situations, societies should mull over ways to increase emotional and instrumental support.
Introduction Obesity is one of the leading risk factors for global mortality, and with the exponential increase in prevalence, the number of children and adolescents with obesity has been projected to reach 91 million by 2025 [1]. Published literature indicates that adolescents with obesity have higher chances of progressing to persistent obesity in adulthood [2]. Wadman et al. [3] reported that in the USA, 77% of patients hospitalized due to COVID-19 complications had conditions attributable to overweight and obesity. This implies that obesity is both a major risk factor that may be linked to other physical health conditions and a potential effect of lockdown [4]. In Australia, one in every four adolescents is overweight [2], making Australia one of the top 10 countries with the highest proportion of adolescents with obesity [5]. The prevalence of obesity may be even higher in recent times due to inactivity among adolescents worldwide resulting from COVID-19 restrictions [6], a condition now referred to as 'Covibesity' [7]. It is therefore paramount to focus on actions that could possibly prevent and reduce the prevalence of obesity in adolescents [8][9][10]. Obesity is associated with increased energy intake and decreased energy expenditure [10]. It is a multifaceted chronic condition with several contributing factors, including medical illnesses, biological risk factors, genetic disorders, eating disorders [10,11], health literacy, cultural background, socio-economic status (SES) and numerous environmental influences [12,13]. Adolescent obesity increases the risk of chronic disease development into and throughout adulthood [14]. Obesity in adolescents impacts all major organ systems and often contributes to morbidity [15,16]. Obesity prevalence in adolescents is also exacerbated by differences in ethnic and genetic backgrounds, which affect body composition and fat distribution [17], and by cultural body image standards [18].
The current social climate, particularly within high-income countries, with its emphasis on health messaging and weight loss, creates a stressful state for adolescents with obesity due to weight stigma [19,20]. Adolescents who experience stress related to social ostracization are more likely to rely on food-related coping mechanisms [21]. For example, adolescents with obesity may experience teasing and bullying, which can lead to isolation and an inability to make friends [22]. Isolation can negatively affect the mental well-being of adolescents by depriving them of the social interaction needed at this stage of development [23]. The interplay between obesity and psychosocial health may lead to increased levels of stress, depressive symptoms and reduced resilience [11,23]. This is further confounded with SES, which is one of the most potent indicators of overall health [13,24]. Youths from low socio-economic backgrounds tend to have higher rates of obesity compared to other groups [13]. The school is perceived to be at a vantage point in the prevention of adolescent obesity because it provides longer contact hours during school days for successful implementation of interventions [25][26][27]. It is therefore important to consider maximizing the use of school environments in preventing adolescent obesity. The school environment can promote physical activity and healthy eating [10] and has been described as the perfect nexus for teachers, parents and other stakeholders to modify and implement lifestyle, behavioural and nutritional interventions to impede the progress of childhood and adolescent obesity [11,28]. Lambrinou et al. [28] provided a detailed review of effective strategies for obesity prevention via school-based, family-involved interventions and the advantages of the school environment.
Nonetheless, research findings indicate that despite the advantages associated with the school environment, the benefits of school-based interventions are questionable and generalized recommendations are difficult to extract and extrapolate [29][30][31]. Studies have mostly focused on stakeholder views on the primary school role in preventing obesity [26,32]. Within the Australian context, school-based prevention programs are not widely implemented in high schools [33]. A recent review reported weak evidence for the efficacy of the interventions and programs identified in Australian high schools, particularly because of the weak link between teacher involvement and modification of the food environment during the interventions [34]. Additionally, most of the Australian adolescent obesity research emanated from the State of New South Wales and no studies were nationwide [34]. It is therefore important to reassess the role of the school in the prevention of adolescent obesity. More importantly, due to the many health implications associated with obesity, it is expedient to target preventive measures rather than a cure. Exploration of school stakeholders' opinions about current priorities, barriers and enabling factors would be beneficial in the global effort to prevent adolescent obesity. Additionally, recommendations from school stakeholders are key to future planning and implementation of effective policies and intervention programs [29]. This study therefore sought to investigate Queensland, Australia school stakeholders' beliefs and perceptions of the barriers and enablers currently experienced by schools and their recommendations for preventing adolescent obesity. The study also aimed to develop a reliable adolescent obesity prevention model based on the findings. --- Materials and Methods Ethics approval for this study was obtained from James Cook University's Human Research Ethics Committee.
--- Study Design A sequential explanatory mixed methods study design was utilised. This design employs a methodical integration of quantitative and qualitative research approaches within a single study to offer a detailed explanation of results [35]. Findings from the quantitative phase, comprising online surveys, aided the development of the interview questions for the qualitative phase [36]. The inherent weaknesses due to bias in both quantitative and qualitative approaches were addressed by triangulating findings from both phases to uncover the best possible explanations for the observed phenomenon [35]. --- Quantitative Phase For the quantitative phase, data were obtained from responses to online survey questions on the perceptions and beliefs of school stakeholders from Queensland Education towards the prevention of adolescent obesity. --- Data Collection Instrument A questionnaire comprising closed and open-ended questions was used. The survey questions were categorized into six sections that examined participants' background information, their beliefs, attitudes and perceptions about available anti-obesity policies, as well as the barriers and enablers of school-based prevention programs. The survey questions were adapted from two previous studies [38,39]. The questions on beliefs, attitudes and perceptions about obesity were adapted from the study by Price et al. [39], and the questions on anti-obesity policies and school-based prevention programs were adapted from the study by Kennedy et al. [38]. The survey instrument was pilot tested and there was no need to revise any of the questions. The last question in the survey was used to identify those interested in participating in the follow-up individual interviews. The quantitative findings and the participants' demographic details were then utilised in purposively selecting the interview participants to ensure involvement of all participant groups until data saturation was reached [40].
--- Data Analysis Quantitative data were analysed using SPSS version 27. Descriptive statistics in the form of frequencies and percentages were used to identify the most frequently occurring perceptions, beliefs and attitudes, barriers and enablers, and types of policies and prevention programs used in schools. --- Qualitative Phase --- Data Collection Responses from the quantitative phase were used to guide the development of semi-structured open-ended interview questions for the qualitative phase. The interviews were conducted using Zoom cloud meetings between December 2020 and February 2021. Each interview session lasted approximately 30-60 min. Interviews were recorded and transcribed for textual analysis. This phase of the study was intended to foster an in-depth understanding of the participants' perceptions of the main enablers and barriers to the prevention of adolescent obesity, and their recommendations on what they perceived would work best within their context. The semi-structured interview guide used for this phase of the study is provided in Appendix A. --- Data Analysis The qualitative data were analysed using NVivo 12 Plus, guided by an inductive thematic analysis approach [40]. Coding and analysis of interview data were performed at two levels: within each case and across the cases [41]. Analysis of the interview transcripts included multiple readings to aid the identification of emerging themes [42]. Transcripts were explored for meaning in participants' words and language. During the iterative coding stage, transcripts were independently assessed by two researchers with different professional backgrounds to widen perspectives. Identified themes as well as patterns of similarities and divergence were discussed and confirmed, with discrepancies resolved in a consensus meeting. The trustworthiness and credibility of findings were established through member checking, and cross-matching of emerging themes by the researchers [40].
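The descriptive analysis for the quantitative phase amounts to tabulating frequencies and percentages per response category. A trivial sketch follows; the counts are hypothetical, chosen only to mirror the reported ordering of perceived causes, not the actual survey data.

```python
from collections import Counter

# Hypothetical responses on the main perceived cause of adolescent obesity
responses = (["poor eating behaviour"] * 41
             + ["sedentary lifestyle"] * 12
             + ["excessive calorie consumption"] * 5
             + ["peer pressure"] * 2)

counts = Counter(responses)
total = sum(counts.values())
for cause, n in counts.most_common():       # sorted from most to least frequent
    print(f"{cause}: n={n} ({n / total:.0%})")
```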
Verbatim quotations are presented to illustrate the emerging themes. --- Results --- Quantitative Phase Overall, 90 school stakeholders consented to participate in this phase of the study. However, only 60 of them completed the online survey, a 67% response rate. Table 1 presents the profile of the survey respondents. There were 60 participants, and 47 of them were females. Respondents' roles were heads of department, senior teachers and subject teachers. Most respondents were from public schools, worked full-time and had a Bachelor's degree, while 32% had a Master's degree or higher. Table 2 portrays participants' top three agreement responses and the least agreed response in relation to their perceptions and beliefs about the causes, enabling and hindering factors of adolescent obesity, and the health priorities considered in their schools. Table 2 depicts that the majority of the participants indicated poor eating behaviour as the major cause of adolescent obesity, followed by sedentary lifestyle and excessive calorie consumption. Peer pressure was rated as the least likely possible cause of adolescent obesity. --- Participants' Beliefs Most participants believed that adolescent obesity is becoming more prevalent and that having a healthy weight is very important for adolescents. About two-thirds of the participants believed that adolescent obesity is a significant cause of peer rejection, and only 3% believed that only youths who are likely to succeed in a weight loss program should be part of a treatment plan. --- Perceived Enabling Factors in Adolescent Obesity Prevention When asked what they thought could be enabling factors for schools in supporting the prevention of adolescent obesity, 86% of the participants indicated parental support, regular evaluation of available intervention programs and elimination of 'junk' food and beverage machines from schools.
Availability of low-calorie healthy lunches during lunch hour was also considered a key factor for preventing adolescent obesity by 86% of the participants. Supportive government policies and ease of access to fitness equipment during recess were also perceived by 78% of the respondents to be enabling factors. Two-thirds of the participants felt that community involvement could also help in enabling the school to prevent adolescent obesity. --- Perceived Barriers to Adolescent Obesity Prevention The majority of the participants felt that a busy school timetable was the main barrier to adolescent obesity prevention, followed by a shortage of trained staff and insufficient funding, the absence of thoroughly implemented intervention plans, and insufficient motivation of learners to participate in obesity prevention programs. Only 32% perceived a short school day as a barrier. --- Health Priorities in Schools The study findings in Table 2 show that emotional and mental health were the main health priorities of the school, as indicated by 90% of the participants, followed by relational and social skills. Two-fifths of the participants reported physical fitness as a priority, and only one participant indicated that tobacco use was the main health focus at their school. Table 3 presents the interventions and strategies currently used by schools for physical health and wellness. The predominant strategy used was health and physical education (HPE); 45% of the participants indicated that there were other forms of physical activity programs offered besides HPE. Nutrition education and promotion was reported by 35% of the participants as a strategy used by their school. Nutrition standards for meals and BMI tracking/reporting seemed to be unpopular strategies in schools. --- Qualitative Phase As shown in Table 4, 14 school stakeholders, who were predominantly females and between the ages of 25 and 60 years, participated in the interviews.
Eight of the participants were teachers in public state high schools, while six were from independent high schools. The participants had predominantly teaching roles; all of them had at least a first degree and between 3 and 50 years of teaching experience. Thematic analysis of the interview data presented three emergent themes: barriers schools encounter in the prevention of obesity, the need for stakeholder collaborations, and enabling strategies to improve outcomes. These themes are presented with verbatim illustrative quotes, and each quote is attributed using the participant's name and school type (PS = public school; IS = independent school). The study findings were used to develop a reliable model for the prevention of adolescent obesity. Table 5 depicts how the quantitative results were clarified and confirmed by the participants' responses in the qualitative phase of the study. 'Some schools have banned certain items like chips, candies or soft drinks. The number of healthy items they sell at the school tuck shop is quite low in comparison to the number of unhealthy items.', Angelica, PS 'Another way is to make deliberate effort to provide healthy options at the canteen', Jesse, IS A good nutrition strategy for improvement includes targeted efforts to improve the tuck shop menu. Boundaries need to be set on what can or cannot be sold. --- Government Barriers Insufficient funding for intervention programs was thought to be a barrier by 78% of the participants. 'I think schools are trying to do the best we can with very limited funding', Janelle, IS. 'Lack of funding for those activities that could prevent adolescent obesity is a challenge', Jesse, IS This implies that though schools could develop health promotion activities to tackle adolescent obesity, effective implementation of such activities is mostly hindered by lack of funding. Excessive calorie consumption was indicated as a barrier by 93% of the participants.
'Students habitually walk across the road to purchase big serves of junk food like fries and soft drinks at a fast-food joint near the school', Sonia, IS This shows that some schools are located near fast food outlets, which increases the amount of 'junk' food students would normally consume. The presence of 'junk' food machines in schools was indicated by 86% of participants as a major inhibiting factor. 'I think there's also a drink machine, where students can get soft drinks and I think that's run to raise money for a program of some kind, but it doesn't seem necessary to have the machine.' Angelica, PS Fundraising using junk food and unlimited purchases from vending machines inhibit progress in preventing adolescent obesity. The lack of thorough implementation of intervention programs, reported by 75% of participants, reduces the amount of support available to enhance the efficacy of the programs. 'It is not clearly stated in their roles as teachers, they do not necessarily feel the burden to deal with adolescent obesity', Jesse, IS 'I think there has been a lot done within Australia to create awareness of where students can go to get help for depression and if they're struggling, stigmatized or being bullied, I think, more than ever, over the last 10 years. But I don't know if we're doing enough in the same respect for adolescent obesity and physical exercise', Samantha, PS From the participants' perspective, mental health has thrived because government-supported programs are put in place while adolescent obesity is not seen as a burden; hence, the lack of unified efforts in dealing with it as a national health concern. --- Barriers Schools Encounter in the Prevention of Obesity Participants indicated that the prevention of adolescent obesity is usually hindered by four major stakeholder groups, namely parents, students, the school and the government.
--- Barriers Associated with Parents Participants felt some parents encouraged their children and wards to engage in unhealthy eating habits and an unhealthy lifestyle. Parents were considered too busy to prepare and pack healthy lunches for their teenage children. Participants also reported that adolescents indulge in excessive screen time at home, where they are usually unmonitored by parents. 'If parents habitually indulge in junk food, children are going to mimic that, and they are going to think it's fine and the pattern will be hard to remedy in later life.' Ruth, IS 'Ideally parents should have a great contribution because they are the ones who buy the food for the household, which the students are eating two-thirds of the time, with only one main meal a day at school. But a couple of things come into play, a lot of parents are time poor. I think it's much more a thing that parents prepare lunchboxes for little ones, but once they [kids] get to high school, it stops.' Jessica, PS 'I think that unmonitored screen time mostly is even more rampant at home than at school, because you know these kids sit down and play video games and go on social media or whatever it is [called].' Jesse, IS --- Barriers Associated with Students Students' unwillingness and lack of motivation to be active, especially if they are overweight, were seen as major obstacles. Students were also reported to spend an enormous amount of time on electronic devices during recess instead of engaging in physical activities.
'Students shy away from activities like swimming because they feel ashamed of being seen in swimmers if they are overweight or obese', Sonia, IS 'During the breaks you find every child looking at a screen, whether sitting in a group or alone [when] they could spend more time talking and engaging in other activities', Raphael, PS 'There's not much that would make the children think it's worth their while to participate unless they really enjoyed healthy eating or they enjoyed physical activity, before they came to the school', Janelle, IS. The participants also felt that adolescents often had part-time jobs that gave them access to pocket money, providing the 'buying power' to make unhealthy food choices, as observed in the foods they ordered from the canteen or the lunches from home. 'In the school canteen, students pass through a line and there's a section of cold food, [another] section of hot food, and they get given whatever they want, so students could choose six pieces of pizza, if they wanted to and get handed that with no problem [monitoring]' Angelica, PS 'Quite often the kids have part time jobs, or they've got pocket money, and they can buy whatever they choose to.' Jessica, PS 'We can't really control the food that they bring in, or what they choose to eat, or the choices they make once they leave the school grounds, either.' Janelle, IS --- Barriers Associated with the School A major barrier identified at the school level was the busy or 'crowded curriculum', which makes it challenging to offer physical activities regularly. 'Doing those kinds of activities with students outside the curriculum is a challenge because the curriculum is too saturated already.' Sonia, PS 'The push in the curriculum to do this and this and this and this, in addition to the core subjects. And so, those blocks of time that used to be for physical education or Wednesday afternoon sports for 70 min have been eroded.'
Jessica, PS Schools' close proximity to fast food outlets was another identified barrier. An unhealthy school tuck shop menu, where serves and portions are not controlled for students, was also seen as a major drawback, particularly in public schools. To help students make better food choices, participants felt subjects like Home Economics and HPE could be made compulsory. 'Even if the school only provides healthy options, some of them [students] will sneak down to the service station and get soft drink and stuff like that or bring it from home.' Jessica, PS 'The number of healthy items they sell at the school tuck shop is quite low in comparison to the number of unhealthy items.' Angelica, PS 'I know there's government policies that they've implemented as to what tuck shop sells but I'm not aware of the specific rules.' Samantha, PS 'Home Economics, HPE and food and nutrition subjects should be made compulsory subjects in school from grade seven to grade 12.' Sage, PS A lack of trained staff to facilitate physical activities was also identified as a barrier, especially in public schools. Interestingly, most of the participants indicated that there were inconsistencies across Queensland schools, and that they were unaware of available national policies on prevention of adolescent obesity in schools. 'I took my Year 7s down to the oval for HPE and that was really awful. I said I wouldn't take them again because I had kids rolling around the hill and then somebody got kicked and started crying, and then somebody else got pushed over and started crying. And I didn't really have the experience to manage that situation very well so we're not going down to the oval again.' Rebecca, PS 'In Queensland it's up to the individual school, except if there is a system I am not aware of.
I wouldn't say that if we're following clear intervention policies that I have been made aware of them', Janelle, IS --- Barriers Associated with the Government The major barrier identified at this level was laxity on the part of the government in implementing policies that help curb adolescent obesity. 'We still see a lot of food that are being sold that are not very healthy in the tuck shops, so we need stricter guidelines and policies from the government and making sure that all schools, not just some schools, all schools [adhere] to the policies where they don't sell unhealthy foods in the tuck shop. They [government] also make home economics policy where all students learn about nutrition and about healthy eating habits.' Sonia, IS Participants felt that the lack of funding and resources, particularly in public schools, makes it challenging to run activities for the prevention of adolescent obesity compared to other health issues like mental health. 'I think particularly public schools, which is where I've had most of my experience, are trying to do the best we can with very limited funding. I think we're trying but there's definitely work still to be done and there's only so much you can do without additional resources to help.' Janelle, IS --- Stakeholder Collaboration Participants indicated that collaboration between three major stakeholders (parents, the school and the government) is needed. They stressed that it is not the sole responsibility of the school to facilitate prevention of adolescent obesity. --- Parental Support Participants indicated that parents, as a stakeholder group, could encourage active transport to school and sign up their children for sporting activities at school. Responses also pointed to limiting screen time at home, and checking and monitoring packed meals for school lunches.
Parents educating and modelling healthy habits was emphasized as a way of ensuring that children do not mimic unhealthy habits. 'Parents can also show support by making sure their children play school sports or signing them up to a sporting club as well.' Vanya, PS 'I think parental support is very important, because formative years of a child begins at home; particularly limiting screen time, nutrition, what they eat.' Raphael, PS 'Parental role in educating and modelling good eating habit is vital in encouraging a healthy lifestyle for their children, and other aspects such as their diet, to support prevention of adolescent obesity.' Jesse, IS --- School Role Participants indicated that the school could play a more active role in this collaboration by ensuring that there is whole-of-organization commitment to health and nutrition policies and programs, and participation in strategic planning and decision-making to advance the prevention of adolescent obesity. Participants expressed the need for the school to educate and empower parents to make better lifestyle choices for themselves and their children. 'But what we really need to be doing as a school is giving education and empowering parents to make better choices and backing them up on those choices.' Angelica, PS 'I think educating parents can be very helpful. You know, like we get news items going up from school and things of concern. It goes to all parents, so they don't feel like they are being targeted or victimized or in any way ostracized.' Raphael, PS They also emphasized the importance of educating students and promoting compulsory sporting/physical activities in all schools. 'In terms of promoting physical activity, some schools I've been at in the past have made sports compulsory. At my current school, the kids didn't do sports for the past year 10. So, I had to do PE [with the students] at lunchtime.' Sage, PS 'Wherever possible in our syllabus and curriculum if you can talk about that.
That's helpful as well. I mean I had the opportunity to do that when I taught HPE. So, I used that as a platform to talk about the importance of a good diet.' Raphael, PS 'Participation is not that optional. Everyone is supposed to participate in extracurricular activities.' Hebron, IS --- Government Role Participants indicated that the government could enact policies that ensure uniformity in educational programs and policies across schools and implement a phone usage policy to regulate recreational screen time in schools. 'The government is the one that streamlines the curriculum, and provides funding for schools, obviously, the government can play a good role to support schools', Jesse, IS. 'The government should come up with strict policy on what the Tuck Shop can sell and what they can't.' Sonia, IS 'To address screen time, I think there should be more firm rules around that. My opinion is that the phones should be banned for the entire day at school unless emergencies, but I know there's a lot of debate in the education community about that.' Janelle, IS Participants pointed out that the government should reintroduce programs like 'Smart Moves', which actually helped to promote and enforce physical activity programs, as well as acceptable food policies that clearly state what types of food and drinks are allowed or prohibited in schools. Participants also felt that the government could promote media advertising of healthy food instead of junk food and subsidize or give free vouchers for sporting activities to make them affordable for young adults. 'The government could reintroduce Smart Moves, which actually helped to promote physical activities like doing sports and exercise.' Sage, PS 'I think a lot of that is dictated by government regulations. I remember a few years ago they categorized food into red and yellow and green. I think those colour codes meant red was the junk food and you could have the occasional red day, and green was healthy foods.
Yellow was not as healthy as green but not as bad as the red, and I think you're allowed to sell a certain amount of red, but not a lot. And you could have the occasional red day where there would be more junk food available. I'm not sure if it's followed in this school but I'm pretty sure that there has to be regulations they follow as to what they sell at the school, as well as for prevention of obesity.' Samantha, PS 'The government can pay the media to advertise healthy food instead of junk food.' Ruth, IS --- Enabling Strategies to Improve Health Outcomes Participants in this study recommended standardized implementation of enabling strategies. Major areas of focus included nutrition, physical education activities and overall health and well-being. --- Nutrition Strategies The nutrition strategies related mainly to the tuck shop menu. Participants suggested a move towards national implementation of a food and beverage policy within schools to monitor the types of food and drinks students can/cannot bring to school. Other nutrition-related strategies to improve outcomes included the provision of free fruit and breakfast to students, compulsory nutrition subjects and constant communication of dietary messages within schools. 'The school administration should look into what food options are healthy for the tuck shop.' Hebron, IS --- Physical Education/Activity Strategies Participants recommended social, enjoyable and non-competitive sporting activities to improve outcomes in adolescent obesity prevention. Government funding and the provision of accessible fitness equipment/facilities would also improve outcomes. 'I think social sport, that's compulsory and all it's there for all levels, not necessarily competitive sport but more like social sport would be great.'
Chantelle, IS 'Students have to choose across the four terms at least something to do with physical activity, but that's not the system at the moment as students just get to choose whatever they want to', Angelica, PS --- Overall Health and Well-Being Strategies Participants pointed out that formal health talks with health experts like dieticians and nutritionists can be organized to educate students and their parents about a healthy lifestyle. Another strategy identified was targeted government-sponsored advertisements on healthy eating. 'I think definitely speaking about these things and holding sessions where health experts like dieticians and nutritionists can come in and educate the students,' Ruth, IS 'I think government plays a part in making sure that advertising continues to happen regarding what a healthy diet looks like, and how much physical activity people should be getting.' Vanya, PS Commitment to the prevention of adolescent obesity requires tripartite collaboration between the school, parents and government to adequately tackle the identified barriers and enhance health outcomes. --- Discussion This study investigated school stakeholders' beliefs and perceptions about the barriers and enablers currently experienced by schools and their recommendations for preventing adolescent obesity. Our findings are summarized into a model for the prevention of adolescent obesity in schools. Barriers currently experienced by schools in preventing adolescent obesity were explored and classified into four stakeholder levels: students, parents, the school and the government. Our findings show that stakeholder collaboration is the missing link essential to dealing with adolescent obesity. Recommendations of what parents, the school and the government can do in their respective roles to help prevent adolescent obesity were identified.
The participants emphasized the adoption of strategies that have the potential to increase physical activity, reduce screen time and promote healthy eating habits among adolescents. Overall, the findings show that the school cannot deal with the burden of adolescent obesity alone, and that complementary collaborative efforts from all three major stakeholder groups are required to combat adolescent obesity. The school has been identified as an ideal place for adopting and implementing strategies to tackle adolescent obesity, because the majority of adolescents attend school, providing an opportunity for longer contact periods during school days [25][26][27][28]. However, it is quite evident in this study that obesity prevention efforts will be futile if the adolescents themselves are not motivated. The participants made a general observation that adolescents are often unwilling and uninterested in taking part in physical activities. This finding confirms a recent report that emphasizes adolescents' indifference to physical activity if it competes with other interests such as hanging out with friends or doing screen-based activities [12,43,44]. It is also evident in this study that teachers lack skills in facilitating outdoor physical activities and are not aware of what policies to follow for such activities. If teachers are unaware of relevant policies and lack the skills needed to facilitate outdoor activities, they will not feel confident assisting with activities that aid the prevention of adolescent obesity. Teachers need to be aware of current policies and actively engage in strategic planning and decision-making processes within the school for better implementation of policies [45]. School administrators are encouraged to involve teachers in the development of such policies and to give policy reminder talks at least once per semester to foster awareness and facilitate implementation of the strategies set out in the policies.
Our findings that school tuck shops seem to be encouraging unhealthy eating habits among adolescents are consistent with the report by Ronto and colleagues [46] that an unhealthy tuck shop menu promotes bad eating habits among students. The majority of participants in this study indicated that students generally purchase lunches and snacks from the tuck shop, and that whatever is available on the menu largely influences the students' dietary choices. A New Zealand study [47] indicated that close to two-thirds of students purchased lunches from the tuck shop, which was worrisome because of the high-calorie, fat-dense and high-sugar food options. Another study [48] assessed the compliance of Australian schools, by state and territory, with set policies guiding the tuck shop menu; the findings indicated that Western Australia was the most compliant, with 62% of menu items qualifying as healthy choices, while Queensland was in the bottom three. In the same study, high schools offered more unhealthy food, and at lower cost, than healthy salads. Even though most tuck shops are run by private conveners and are predominantly profit oriented [46], healthy eating policies should be adhered to. The onus therefore lies with the schools to ensure that whatever is sold in the tuck shops is healthy. It is also imperative that the government ensures consistency in adherence to regulatory guidelines across schools in Australia for healthier tuck shop options. The proximity of schools to fast food outlets and unhealthy packed lunches brought from home by students were also identified as barriers in this study. Grier and Davis [49] established that proximity to fast food outlets has a negative impact on the weight of adolescents, particularly those in urban areas and those from low socio-economic backgrounds.
When healthy food items are available at an affordable price, students are more likely to buy them, but their final decision is likely to be influenced by many other factors besides price. Studies have shown that early-life experiences in family systems that reinforce good dietary habits have a role in promoting healthy eating in later life, and this is considered one of the fundamental ways of addressing adolescent obesity [50]. It is evident in the current study that unhealthy food is competitively cheaper than healthy food, encouraging students to fall into the trap of buying unhealthy food due to its affordability. Government funding is needed for sourcing healthy food items and making them available locally, particularly to disadvantaged populations. Constant review of policies governing health promotion in schools is required. This can be made possible through government-regulated food policies, as rightly highlighted by participants in this study. With regard to physical activity, there has been a growing number of students who attend schools far from their catchment areas [51]. This study has indicated that this makes active transport impossible, as parents must drop off their children at school and pick them up afterwards. This finding resonates with other studies and indicates that parents may be concerned about the safety of their children and therefore do not encourage active transport to school [44,52]. This implies that schools will need to maximize their physical activity programs to engage students as much as possible to meet the daily requirements for physical activity. The projected increase in obesity in this age group [1] may be even higher given the prevailing rate of COVID-19, which has recently worsened sedentary lifestyles and increased screen time among adolescents [6].
This calls for the government to provide more fitness equipment in schools as well as more bikeways and safe footpaths to encourage students to cycle or walk to school. Queensland weather can be hot; hence the recommendation in this study that the government could provide funding for facilities such as change rooms and showers, so that students who cycle to school can freshen up and change into fresh clothes on such days. Obviously, students' access to mobile phones has increased over time, and this contributes to students' worrisome tendency to replace physical activity with games or activities on their electronic devices [6]. It has been established that peers influence each other in being physically active or inactive [53], highlighting the need to consider the impact of peer support in developing future interventions. It was highlighted in this study that strict rules on screen time during school hours and the inclusion of more enjoyable non-competitive physical activities could help curb excessive screen time. This result corroborates findings from previous studies and confirms that, to get adolescents engaged, activities must be enjoyable and pitched at an appropriate skill level [54]. Interestingly, the participants noted that an unhealthy school tuck shop menu, limited funding and a lack of trained staff to facilitate physical activities were more prevalent barriers in public schools than in independent schools. There are significant differences in funding opportunities between these two entities, as public schools are government-funded institutions while independent schools are privately funded, and this may be a possible reason for the observed differences. The finding also portrays possible differences in the two school environments, likely with different social classes of students and parents. Environment and SES are certainly major contributors to the prevalence of obesity.
Higher SES is usually associated with healthy lifestyle behaviour, while low SES is associated with less leisure-time physical activity and the consumption of nutrient-poor, energy-dense diets [11,13,19]. This result indicates that the government needs to do more to better support public schools in combating adolescent obesity. It is important to note that, despite promising initiatives raised by schools, if funding and resources are not available, this can only lead to failure and disappointment, because such initiatives cannot be sufficiently implemented or sustained. The novel addition of this study to the existing literature is the development of a reliable model that proposes a multi-pronged stakeholder collaborative approach to developing targeted strategies that foster a supportive ecosystem for combating adolescent obesity and enhancing the achievement of more generalizable health outcomes. This is where the tripartite collaboration between the government, parents and school is needed. The government's major contribution would be to promote the development of adequate legislation and ensure its enforcement in order to protect adolescents from the marketing and sale of unhealthy foods. School administrators would need to ensure that appropriate school staff are trained to facilitate fun and engaging physical activities, while parents support their children by role modelling healthy lifestyle choices and monitoring screen time at home. Student mental health and well-being have been reported to be a major priority in schools, with persuasive campaigns to normalize asking for help to deal with mental health issues showing promising results in reducing depression among students [55]. As expressed by participants in this study, the same effort can be exerted towards the prevention of adolescent obesity. With its increasing prevalence, adolescent obesity cannot be continuously relegated to the background [5].
The current social climate of 'fatphobia' and the use of shame-based communications, particularly within high-income countries, emphasizes the need to implement better support strategies for adolescents with obesity [19]. Adolescents affected by this problem should feel comfortable getting help without feeling any prejudice or discrimination. --- Strengths and Limitations The major strength of this study is the utilization of the views of school stakeholders in a mixed-methods study to develop a model for the prevention of adolescent obesity. However, the findings should be interpreted with caution, as the study focuses only on a Queensland, Australia, context and may not be applicable to other settings. Additionally, the development of the model for the prevention of adolescent obesity was primarily based on school stakeholders' perspectives; the perspectives of students, parents and the government were not explored. Furthermore, the collection of data during COVID-19 restrictions limited the response rate and could have caused sampling bias, as participants were only reachable via online platforms. --- Implications for Practice and Recommendations for Future Research Lack of motivation on the part of students and the importance of health education for teachers, students and parents were raised as major areas for consideration in this study. Therefore, the model developed from this study can be used as a guide to support the development of policies and interventions, such as inclusive physical activities for adolescents with obesity and effective strategies for training private tuck shop conveners in the development of a healthy tuck shop menu. Furthermore, school administrators, parents and the government can leverage participants' suggestions on better ways of incorporating nutritional programs, physical education and overall health strategies that promote the effective prevention of adolescent obesity.
Future research from diverse settings and involving the views of students and parents is warranted to substantiate the findings from this study. Further research is also necessary to aid the development of effective educational interventions. --- Conclusions Despite being a prevailing public health concern that needs to be addressed, adolescent obesity seems to be overlooked compared to other health problems such as mental health. There are many factors at play in dealing with adolescent obesity. The barriers encountered at different stakeholder levels need to be specifically addressed. A tripartite collaboration between all stakeholders is key to effectively addressing adolescent obesity. Practical strategies focusing on nutrition, physical activity and overall health can be employed to improve health outcomes for adolescents. Collaborative stakeholder engagement could include parental education on health, formal health talks in schools by health professionals and better-targeted government funding of advertisements encouraging healthy lifestyle choices. These strategies are instrumental to complementing efforts already made by the school despite its current challenges, which include grappling with a crowded curriculum and limited funding for health promotion interventions against adolescent obesity. --- Data Availability Statement: The dataset supporting the findings in this study is included within the article. --- Institutional Review Board Statement: JCU HREC ethical approval was granted for this study. Subsequently, electronic informed and verbal consent for both the quantitative and qualitative phases was sought from each participant prior to commencing the study. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Pseudonyms were used to protect the identity of the participants. --- Appendix A. Interview Questions --- 1.
Can you start by briefly elaborating on the role your school plays in the prevention of adolescent obesity? 2. What are some of the general barriers experienced by your school in preventing adolescent obesity? 3. Explain briefly your perception of factors that enable schools in promoting the prevention of adolescent obesity. 4. How can you as a school stakeholder be involved in preventing adolescent obesity? 5. Do you think schools are doing enough to prevent adolescent obesity? Explain further. 6. What are the adolescent obesity prevention interventions and policies developed by your school? 7. How did your school develop these policies and interventions? 8. How effective are the interventions and policies in your school? Elaborate further. 9. Are all student year levels taking part in interventions or is participation optional? Can you discuss ways you think participation can be maximized? 10. On a scale of 1-10, what is the implementation rate of these policies/interventions at your school? Please justify your rating. 11. Whose responsibility is it to ensure that the interventions and policies in place are implemented and why? 12. Are there policies governing the school tuck shop menu? Please elaborate. 13. What are your recommendations for preventing adolescent obesity?
Adolescent obesity is a complex multifactorial disease with a combination of environmental, behavioral, psychosocial, biological, cultural and genetic determinants. It remains a global public health issue that presents a major challenge to chronic disease prevention and health into adulthood. Schools have a rich opportunity to improve youth health and tackle obesity, yet they face barriers in fulfilling this function. This study investigated school stakeholders' beliefs and perceptions of the barriers and enablers currently experienced by schools, as well as their recommendations towards preventing adolescent obesity. A sequential explanatory mixed-methods study design was utilised with surveys administered for the quantitative phase and individual interviews for the qualitative phase. Descriptive statistics and inductive thematic analyses were utilised for the survey and interview data, respectively. Triangulation of findings from the quantitative and qualitative phases aided in the better understanding and integration of the overall results. In total, 60 school stakeholders (52 subject teachers, 3 senior teachers and 5 heads of department) from both independent and public high schools in Queensland, Australia responded to the survey, while 14 respondents participated in the interviews. The main perceived causes of obesity were poor eating habits and sedentary lifestyle. Highlighted barriers were busy timetables, shortage of trained staff and funding, lack of robustness in the introduction and implementation of school interventions and insufficient motivation of learners to participate in obesity prevention programs. Enabling factors included parental support, easy access to fitness equipment during recess, supportive government policies, provision of healthier school tuck shop menu options and elimination of sugary drinks from vending machines. A model for the prevention of adolescent obesity was developed based on participants' perceptions. 
Tripartite collaboration between the school, government and parents was perceived as fundamental to preventing adolescent obesity. Strategies targeting nutrition, physical activity and overall health, including parental education on health, formal health talks in schools by health professionals and better-targeted advertisement encouraging healthy lifestyle choices, were identified as essential for improved adolescent health outcomes.
Background Hip fractures have a major impact on the health care system in the USA, with an estimated incidence of 340,000 fractures annually [1]. The annual economic burden of managing hip fractures was estimated at $17-20 billion in 2010 [1,2]. As people are expected to live longer, hip fractures will become more common: it is estimated that by the year 2050 there will be 6.3 million hip fractures worldwide [3]. A hip fracture is a serious and life-changing event for an older person. Often after an initial hip fracture, a person cannot continue living independently and must undergo drastic lifestyle changes [4,5]. In addition, there is an association between hip fractures and increased mortality; the one-year mortality rate after a hip fracture is estimated at between 17 and 27 % [6][7][8][9]. The study has two primary hypotheses: first, that there are population factor variations in hip fracture incidence and, second, that there are systematic variations in mortality after hip fracture within the California population. A secondary hypothesis is that there has been a change in incidence trends over time from 2000 to 2011. Several factors may be associated with hip fracture incidence, such as age, gender, and race/ethnicity [1,10]. California is a diverse state and provides an opportunity to examine these factors in relation to the incidence of hip fracture and mortality following a hip fracture. The goal was to explore the effects of gender, age, and race/ethnicity on the incidence of hip fracture and on 30-, 90-, and 365-day mortality rates in California from 2000 to 2009. In the recent era of increasing health care quality measurement, these population data and analyses will be helpful in the interpretation of patient hip fracture incidence and mortality outcomes [11].
--- Methods This study was a population-based epidemiological review of all California Office of Statewide Health and Planning and Development (OSHPD) non-federal hospital admissions for hip fractures from 2000 to 2011. Mortality data were extracted from the California State Death Statistical Master File (DSMF) records. Participants were assigned a linkage number, similar to a de-identified social security number, and data from initial hospital admission OSHPD records were linked to DSMF records, if applicable. This linking method has previously been explained in detail [12]. Participants were any patients 55 years and older admitted with a primary International Classification of Disease, 9th Revision (ICD-9) procedure code for the treatment of hip fractures. These include partial hip arthroplasty, internal fixation of bone without fracture reduction, closed reduction of fracture with internal fixation, open reduction of fracture with internal fixation (femur), and open reduction of separated epiphysis; see the comments in the study limitations section with regard to the last of these codes. There were a total of 317,677 hospital admissions for hip fractures over the 12-year span and 24,899 deaths following hip fractures. All participants without linkage numbers were excluded from mortality rate calculations. To evaluate differences in the incidence of hip fracture, the outcome variable was the incidence rate. To evaluate differences in mortality, the outcome variables were 30-, 90-, and 365-day mortality. Hip fracture incidence rates were calculated based on hospital admissions; a patient with multiple hip fractures could be counted twice toward hip fracture incidence rates. Both incidence of hip fracture and mortality were evaluated by gender, age, race/ethnicity, and geographical area. Gender had two levels, male and female. Females were used as the reference group. Age was categorized in seven levels: 55-59, 60-64, 65-69, 70-74, 75-79, 80-84, and 85+.
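The record-linkage and mortality-window logic described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the record fields, linkage numbers and dates are invented, and only the 30-, 90- and 365-day flags and the exclusion of unlinked records mirror the Methods.

```python
from datetime import date

# Hypothetical minimal records: hospital admissions keyed by a de-identified
# linkage number, and a death master file mapping linkage number -> death date.
admissions = [
    {"linkage_id": "A001", "admit_date": date(2005, 3, 10), "age": 82, "sex": "F"},
    {"linkage_id": "A002", "admit_date": date(2005, 6, 1), "age": 68, "sex": "M"},
    {"linkage_id": None, "admit_date": date(2005, 7, 4), "age": 75, "sex": "F"},
]
deaths = {"A001": date(2005, 4, 2)}

def mortality_flags(admission, deaths, windows=(30, 90, 365)):
    """Return {window: died_within_window} for a linked admission,
    or None when the record has no linkage number (excluded from
    mortality rate calculations, as in the Methods)."""
    lid = admission["linkage_id"]
    if lid is None:
        return None
    death_date = deaths.get(lid)
    flags = {}
    for w in windows:
        flags[w] = death_date is not None and (death_date - admission["admit_date"]).days <= w
    return flags

for a in admissions:
    print(a["linkage_id"], mortality_flags(a, deaths))
```

Note that, as in the study, a linked record with no matching death entry simply contributes `False` to every window, while an unlinked record is dropped entirely rather than treated as a survivor.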
The 65-69 age group was used as the reference group because it is the first group eligible for Medicare. Race/ethnicity had six levels: Caucasian, African American, Hispanic, Asian, Native American, and other. Caucasians were used as the reference group because they were the largest racial/ethnic subgroup. Rates were calculated by first counting hip fracture admissions in each relevant category. All analyses were weighted based on California population statistics, as described by the 2000 and 2010 US Census reports [13,14]. For the years between 2000 and 2010, census distributions were interpolated. In addition, all age rates were weighted by the proportion of each relevant age group in the population. This prevented rates from being inflated by high rates in groups with low absolute counts. Differences in the incidence and mortality rates of hip fractures, based on the various subgroups, were evaluated using Poisson regression models. Odds ratios are provided with 95 % confidence intervals. All analyses were conducted in SPSS version 22, Microsoft Excel, and R. Missing data were minimal for all analyses; cases with missing data were excluded from the relevant analyses. Less than 3 % of the sample were missing a social security number and had to be excluded from death rate calculations. All participants without gender information were excluded, as all analyses were split by gender. All participants with gender information also had age information. In addition, 1.6 % of people had missing race/ethnicity data and were excluded from those analyses. --- Results --- Hip fracture incidence All results were weighted using population rates from the US Census Bureau [13,14] and are presented in Table 2. Hip fracture rates decreased over time. Males were found to have a lower incidence of hip fractures than females. As the gender differences were dramatic, all subsequent analyses were performed on males and females separately.
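The rate calculations described above can be illustrated with a simplified sketch. The study fitted Poisson regression models in SPSS and R; the pure-Python version below computes only the equivalent crude rate ratio between two age groups, with the usual large-sample confidence interval on the log scale. All counts and census denominators are invented for illustration.

```python
import math

# Illustrative fracture counts and census population denominators
# (not the study's actual data).
fractures = {"65-69": 1200, "85+": 4800}
population = {"65-69": 900000, "85+": 200000}

def rate_per_100k(group):
    """Crude incidence rate: admissions divided by census population."""
    return fractures[group] / population[group] * 1e5

def rate_ratio_ci(group, ref, z=1.96):
    """Rate ratio of `group` vs the reference group, with a 95% CI
    built on the log scale from the Poisson large-sample approximation:
    se(log RR) = sqrt(1/count_group + 1/count_ref)."""
    rr = rate_per_100k(group) / rate_per_100k(ref)
    se = math.sqrt(1 / fractures[group] + 1 / fractures[ref])
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi

rr, lo, hi = rate_ratio_ci("85+", "65-69")
print(f"85+ vs 65-69: RR = {rr:.1f} (95% CI {lo:.1f}, {hi:.1f})")
```

A Poisson regression with group indicators and a log population offset would recover the same ratio; the closed form above is just the two-group special case.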
As people age, they are more likely to sustain a hip fracture; females 85 years and above were 18.73 times more likely to sustain a hip fracture than those aged 65-69. The relationship in males was even more dramatic: those 85 and above were 32.79 times more likely to sustain a hip fracture than the reference group. Caucasians had the highest incidence of hip fracture across all race/ethnicity groups; Native Americans had the lowest rates relative to Caucasians. --- Discussion --- Discussion of hip fracture incidence There are multiple individual risk factors for the occurrence of a hip fracture, including but not limited to osteoporosis, smoking, general health status, medical co-morbidities, exercise, and socio-economic status [1,15]. In this study, we found that Caucasian females aged 85+ were at the highest risk for a hip fracture. Figure 3 graphically illustrates the significant difference in hip fracture incidence, with men having an incidence about half that of women. This study reports results consistent with similar studies done with Medicare data, European population studies, and previous studies on California data [1,4,16,17]. When evaluating programs designed to reduce the incidence of hip fractures, the decreasing incidence must be factored into any program evaluation [16]. Figures 4 and 5 graphically illustrate that hip fracture incidence for men and women over the age of 70 declined over the study time period. These declines are an extension of the trend noted by Brauer et al. [1]. They suggest an improved trend for the overall California population, in line with the Swiss population [18]. This improved trend may be partially explained by widespread prescription of bisphosphonate medications, but improved population health, a decreasing incidence of tobacco use, and public health promotion of increased activity and healthy lifestyles are also possible contributors to this trend [9,19,20]. Kannus et al.
[21], in agreement with this statement, speculate that the biological basis for this declining rate is multifactorial. Figures 6 and 7 graphically illustrate that white men and women have a significantly higher incidence of hip fractures compared to the remainder of the population. The decline in incidence rate over the study time period has been more significant for white men and women. Figure 6 is an interesting and consistent extension of Fig. 1a published in Zingmond et al. [8] for the period 1983-1998. The ethnic disparities are consistent with the data of Silverman and Madison from 1983-1984 [5]. When compared to Kanis et al., the white population has an incidence rate similar to Sweden, Norway, Austria, and Ireland [22]. The remainder of the population has an incidence rate consistent with Spain, Mexico, and Chile, similar to a review done by Cheng et al. [23].
Fig. 5 Male hip fracture incidence rates over time, by age group
Fig. 6 Female hip fracture incidence rates over time, by race/ethnicity
Wright et al., in a Medicare sample population over a similar time period, found that the incidence of hip fractures among Hispanics had not declined, similar to our study [2]. Investigation of the race/ethnicity factor for hip fractures has been further complicated by the changing demographics of the US population. --- Discussion of hip fracture mortality There are multiple individual risk factors for mortality following a hip fracture, including but not limited to pre-operative functional status, pre-operative cognitive status, congestive heart failure, general health status, diabetes, other medical comorbidities, and the occurrence of post-operative patient complications [24,25]. In this study, we found that Caucasian males aged 85+ were the profile at highest risk for mortality after a hip fracture.
That males were more likely than their female counterparts to die following a hip fracture was particularly interesting because females had the highest hip fracture incidence rates. The increase in mortality risk associated with increasing age and male gender has been widely noted in other studies of this topic [7,[26][27][28]. Figure 8 graphically illustrates that all mortality rates decreased over the study time period. This is a resumption of a trend to decreased mortality that Brauer et al. noted had stalled in about 1998 [1].
Fig. 8 30-, 90-, and 365-day mortality rates over time
Figures 9 and 10 graphically illustrate that the incidence of 30-day mortality varies by decade of life for both men and women. These gender- and age-specific baseline mortality rates will be helpful in risk adjusting the incidence of mortality after hip fracture care. Mortality rates are being introduced in the USA and elsewhere as a quality measure for hip fracture care [11]. There has been an increase in the use of systematic interventions, including co-operative care between surgeons and medical practitioners, attention to pain management, delirium prevention, early surgery, and aggressive mobilization [29][30][31][32].
Fig. 9 Female 30-day mortality rates over time, by age
Penrod et al., in a study of approximately 3000 patients from 1997-1999, found that white patients enjoyed a mortality risk advantage compared to the rest of the study population [25]. Our study was based on a much larger and more comprehensive study population. Sterling documented a gap in the literature with regard to racial and ethnic differences in the survival of US hip fracture patients [33]. A more current literature search has failed to find recent studies on this topic. One older citation was based on a very different racial/ethnic population compared to 2009 [3]. --- Limitations One limitation was that we selected our dataset based on the principal procedure code only. Other procedure codes are recorded, and so there is a possibility that we missed some patients who had a hip fracture that was not coded as their principal procedure. This is ICD-9 data, and laterality is not a data element. Although the patient data record contains information regarding the hospital where the surgery occurred, there are no data with regard to transfers of patients for admission, subsequent re-operations at a second hospital for the same fracture, nor data with regard to the individual attending surgeon. These factors would all be useful information for the analysis [34,35]. Another limitation is the inadvertent inclusion of a small group of patients who had a principal procedure code of 79.55 and were older than age 55. Since this is a pediatric orthopedic procedure, this combination of principal procedure code and age is likely the result of incorrect coding at the hospital level. However, it is unknown whether the incorrect coding was in the age of the patient or in the principal procedure code. The number of patients is small, data analysis was not re-done, and inclusion of this subset should not significantly affect the results. The last important limitation of using the OSHPD dataset was the lack of important clinical risk data [1,15]. Due to the nature of the dataset, there was no way to analyze any of these factors, and therefore we must accept this limitation when applying these results to health effectiveness research. --- Conclusion This California state-wide population-based study of a large and diverse population shows a significant reduction in hip fracture incidence over the study period of 2000-2011 and a corresponding reduction in mortality over the study period of 2000-2009. There are significant gender, age, and race/ethnicity disparities for both hip fracture incidence and mortality in subpopulations that will allow for targeted population interventions and opportunities for further research.
Further, these data will provide baseline information to assess and risk stratify outcomes and interventions. --- Ethics approval IRB approval was received from Community Medical Center and the State of California's Committee for the Protection of Human Subjects. --- Competing interests The authors declare that they have no competing interests. --- Authors' contributions KS provided oversight on all statistical analysis and the results write-up and is the lead author of the manuscript. LH coordinated the project, literature review, manuscript preparation, and submission. MA assisted with data cleaning and statistical analysis. WTB provided oversight on the entire project, literature review, and manuscript preparation. All authors read and approved the final manuscript. --- Funding No external funding was received in support of this investigation.
Background: Hip fractures result in both health and cost burdens from a public health perspective and have a major impact on the health care system in the USA. The purpose was to examine whether there were systematic differences in hip fracture incidence and in 30-, 90-, and 365-day mortality after hip fracture in the California population as a function of age, gender, and race/ethnicity from 2000 to 2011. Methods: This was a population-based study from 2000 to 2011 using data from the California Office of Statewide Health and Planning and Development (OSHPD, N = 317,677), California State Death Statistical Master File records (N = 224,899), and the US Census 2000 and 2010. There were a total of 317,677 hospital admissions for hip fractures over the 12-year span and 24,899 deaths following hip fractures. All participants without linkage numbers (de-identified substitutes for social security numbers) were excluded from mortality rate calculations. Variation in incidence and mortality rates across time, gender, race/ethnicity, and age was assessed using Poisson regression models. Odds ratios and 95 % confidence intervals are provided. Results: The incidence rate of hip fractures decreased between 2000 and 2011 (odds ratio (OR) = 0.98, 95 % confidence interval (CI) 0.98, 0.98). Mortality rates also decreased over time. There were gender, race/ethnicity, and age group differences in both incidence and mortality rates. Conclusions: Males were half as likely to sustain a hip fracture, but their mortality within a year of the fracture was almost twice that of women. As age increased, the prevalence of hip fracture increased dramatically, but mortality did not increase as steeply. Caucasians were more likely to sustain a hip fracture and to die within 1 year after a hip fracture. The disparities in subpopulations will allow for targeted population interventions and opportunities for further research.
I. Introduction Women and men should have the same power to shape society as well as their own lives. Indian women's exposure to educational opportunities is substantially higher than it was some decades ago, especially in urban settings. It has opened new vistas, increased awareness and raised aspirations of personal growth. This, along with economic pressure, has been instrumental in influencing women's decision to enter the work force. Indian women belonging to all classes have entered into paid occupations. At present, India is a power in the global economy, in part because of its talented, educated women. Women have started recognizing their innate talents and skills and working to achieve excellence in their areas of interest. In the existing set-up, where the primary responsibility of women is to maintain household activities, women are overburdened, and this creates turmoil in their work and life roles. Balancing the work and non-work life of women professionals is a must for sustainable organisational development. Historically, women's employment participation has been concentrated in the service sector. Even females with high levels of academic qualification find it difficult to balance professional life and private life. It is important for every organization to take the necessary steps to maintain a healthy balance between work and employees' private lives so that both employees and organisations can benefit in the long term. Work-life balance is a challenging issue for leaders and managers, and has also attracted the attention of researchers. Work-life balance, in its broadest sense, is defined as a satisfactory level of involvement or 'fit' between the multiple roles in a person's life. In this climate, managing the boundary between home and work is becoming more challenging.
Organizations not providing real opportunities for employees' work-life balance are opening themselves up to increasing numbers of dissatisfied and unproductive employees, and hence increased attrition rates. Further, there is a need for employers and employees alike to find flexible and innovative solutions that maximize productivity without damaging employees' well-being, their family relationships and other aspects of life. --- II. Review of Literature Work-life balance has important consequences for employee attitudes towards their organisations as well as for the lives of employees (Guest). Nevertheless, we need to understand the definitions underlying work-life balance concepts. Defining the concept of WLB is a complex task, as it can be viewed from the meanings of 'work', 'life' and 'balance'. Dundas argues that work-life balance is about effectively managing the juggling act between paid work and all other activities that are important to people, such as family, community activities, voluntary work, personal development, and leisure and recreation. Greenhaus, Collins and Shaw define work-life balance as the extent to which an individual is equally engaged in, and equally satisfied with, his or her work role and family role. Thus, employees who experience high work-life balance are those who exhibit similar investments of time and commitment in work and non-work domains. Previously, the female workforce in India was mainly employed in non-managerial, subordinate or low-profile positions. Now, women occupy almost all categories of positions in the workplace. These changes in work culture have added to women's duties and responsibilities to their families as well as to society. Despite this newfound work culture, and even though more and more women are joining the workforce, women in managerial roles remain limited in number. The probable reason for this phenomenon is the conflict between competing work demands and personal and family needs.
Bilal, Zia-ur-Rehman and Raza say that work-life conflict has a damaging effect on job satisfaction, organizational commitment, productivity, turnover and absenteeism. On an individual level, work-life conflict is associated with employee burnout, mental health issues, substance abuse, and diminished family functioning. Higher education institutions would appear to offer certain positives for combining career and family life. However, results of a recent survey found that academic staff reported feeling highly stressed because of increased teaching loads and staff/student ratios, pressure to attract external funds, and lack of recognition and reward. As evidenced by the current study's findings, the stress of balancing work and life among staff in higher education institutions would in turn affect their occupational attitudes, such as job satisfaction, commitment and intention to leave, and furthermore lead to actual turnover. The demands of economic globalization, escalating competition and reduced government funding have affected the higher education sector and have led many higher education institutions to adopt market-driven principles in relation to their workplace practices and policies (Mohd Noor & Amat; Mohd Noor, Stanton and Young). For academic staff, this has meant elevated workloads, higher expectations concerning research and increased administrative tasks, while general staff have struggled with diminished resources and changing work processes. --- III. Need for the Study Over the past two decades, the issues of work-family and work-life balance have received significant attention from employers, workers, politicians, academia and the media. The move towards global competition has increased pressure on organizations and individual employees alike to be more flexible and responsive to change.
An employee who is able to maintain a balance between private and professional life can contribute more to the success of the organization. However, it is in the context of current skill shortages and the prospect of an ageing workforce that it is now imperative for organizations to embrace work-life balance practices to attract and retain talent, not only from traditional sources but also from untapped and diverse social groups. For future commercial sustainability, organizations need to ensure they not just encourage but mandate a practical and workable work-life balance policy, benefiting and meeting the needs of both the organization and its employees. Importantly, organizations not providing real opportunities for employee work-life balance are opening themselves up to increasing numbers of dissatisfied and unproductive employees, and hence increased attrition rates. In this climate, managing the boundary between home and work is becoming more challenging. Keeping all these developments in view, the researchers have taken up research on 'Work Life Balance of Women Working in Higher Educational Institutes in Guntur District of Andhra Pradesh'. Guntur district, the fourth largest district in Andhra Pradesh, contains about 85 higher educational institutions, including minority institutions. About 2,100 women employees work in these institutions, rendering educational services. --- IV. Objectives of the Study 1. To study the socio-economic profile of the respondents; 2. To examine the factors affecting the work-life balance of women employees in higher educational institutions; 3. To analyze the effect of work-life balance on women's performance and work attitude; 4. To put forth certain conclusions based on the findings that have been arrived at. --- V.
Research Methodology To fulfil the aforesaid objectives, data were collected from both primary and secondary sources. The secondary data were collected from various journals, books, periodicals and the web. The primary data were collected with the support of a well-designed structured questionnaire. The sampling technique used in this study is convenience sampling, and the sample size is limited to 80. --- VI. Data Analysis and Discussion Table I depicts that 41% of respondents belong to the age group of 46-55 years, 60% have a professional qualification, 81% are married, 36% are in Associate Professor positions, 34% draw a salary of more than Rs. 45,000 per month, nearly 46% work in urban areas, 50% have served 5 years or more in their current position, and nearly 30% have 16-20 years of total experience in their fields. From the above Table it is clear that most of the variables of work interference with personal life, tested against the dependent variable of age, are highly significant at the 0.05 level. The calculated value of "F" is above the table value for the variables 'My personal life suffers because of work', 'My job makes personal life difficult', 'I neglect personal needs because of work', 'I put personal life on hold for work', and 'I am happy with the amount of time for non-work activities', which are therefore significant. Only one variable, 'I struggle to juggle work and non-work', is not significant in the study. From the above table it is also clear that most of the variables of work interference with personal life, tested against the dependent variable of total work experience, are highly significant at the 0.05 level.
The calculated value of "F" is above the table value for the variables 'I neglect personal needs because of work', 'I struggle to juggle work and non-work' and 'I am happy with the amount of time for non-work activities', which are therefore significant; the remaining variables are not significant. The mean scores computed in the above Table are based on the weighted average method. Among all variables, 'My personal life suffers because of work' has the highest mean value of 4.43, with a standard deviation of 1.111. Many respondents strongly feel that they are missing their personal life because of their work schedule. A significant and strong correlation was found between 'My job makes personal life difficult' and 'My personal life suffers because of work', and between 'I struggle to juggle work and non-work' and 'I neglect personal needs because of work'; that is, most of the respondents are not satisfied with their work life and personal life. From the above Table it is clear that most of the variables of factors impacting work-life balance, tested against the dependent variable of age, are highly significant at the 0.05 level. The calculated value of "F" is above the table value for the variables 'I feel exhausted at the end of a day's work', 'My family supports me in my professional life' and 'My organization recognizes the importance of my personal life', which are significant. The other variables, i.e. 'Lack of work-life balance has had an adverse impact on my career' and 'My colleagues have resigned or taken a career break because of work-life balance issues in the last one year', are not significant. From the above table it is clear that most of the variables of factors impacting work-life balance, tested against the dependent variable of total work experience, are highly significant at the 0.05 level.
The calculated value of "F" is above the table value for the variables 'My family supports me in my professional life', 'My organization recognizes the importance of my personal life' and 'Lack of work-life balance has an adverse impact on my career', which are significant; the remaining variables are not. The mean scores computed in the above Table are based on the weighted average method. Among all variables, 'My colleagues have resigned or taken a career break because of work-life balance issues in the last one year' has the highest mean value of 3.54, with a standard deviation of 0.967. A significant and strong correlation was found between 'Lack of work-life balance has an adverse impact on my career' and 'My family supports me in my professional life'; that is, most respondents opine that, to get better results and to balance their professional and personal lives, they need the support of both their organisation and their family. From the above Table it is clear that most of the variables of work-related factors interfering with personal life, tested against the dependent variable of age, are highly significant at the 0.05 level. The calculated value of "F" is above the table value for the variables 'Work on your days off', 'Carry a cell phone or pager for work so you can be reached after normal working hours', 'Stay at work after normal working hours', 'Travel whenever the organization asks you to, even though technically you don't have to', 'Rearrange, alter or cancel personal plans because of work', and 'Participate in community activities', which are significant. The other variables, i.e. 'Take work-related phone calls at home', 'Work during vacations', and 'Check back with the office even when you are on vacation', are not significant. From the above table it is clear that most of the variables of work-related factors interfering with personal life, tested against the dependent variable of total work experience, are highly significant at the 0.01 level.
The calculated value of "F" is above the table value for the variables 'Work during vacations', 'Rearrange, alter or cancel personal plans because of work', and 'Participate in community activities', which are significant; the remaining variables are not. The mean scores computed in the above Table are based on the weighted average method. Among all variables, 'Participate in community activities' has the highest mean value of 3.69, with a standard deviation of 1.176. A significant and strong correlation was found between 'Participate in community activities' and 'Check back with the office even when you are on vacation'; that is, most respondents opine that they participate in community activities to maintain good relations with others and to get better results in their work, and they are satisfied with the provision of checking back with the office even when on vacation. A significant and strong correlation was also found between 'Participate in community activities' and 'Work during vacations'. From the above Table it is clear that most of the variables of the level of risk inherent in each program on an employee's career, tested against the dependent variable of age, are highly significant at the 0.05 level. The calculated value of "F" is above the table value for the variables 'Flexible work schedules', 'Part-time work', 'Brief months paid sabbatical', and 'Paid maternity leave', which are significant. The other variables, i.e. 'Job sharing', 'Paid leave for sick family members', and 'Career breaks', are not significant. From the above table it is clear that most of the variables of the level of risk inherent in each program on an employee's career, tested against the dependent variable of total work experience, are highly significant at the 0.05 level. The calculated value of "F" is above the table value for the variables 'Part-time work', 'Job sharing', and 'Career breaks', which are significant; the remaining variables are not.
The mean scores computed in the above Table are based on the weighted average method. Among all variables, 'Paid leave for sick family members' has the highest mean value of 2.10, with a standard deviation of 0.518. A significant and strong correlation was found between 'Career breaks' and 'Brief months paid sabbatical'. There is also a strong correlation between 'Job sharing' and 'Part-time work', and between 'Paid maternity leave' and 'Brief months paid sabbatical'. The respondents are satisfied with the months of paid sabbatical during career breaks and maternity leave. Most of the respondents are satisfied with job sharing and part-time work. --- VII. Findings of the Study 1. Most respondents opine that their personal life suffers because of work, that their job makes personal life difficult, and that they are neglecting their personal life; 2. Respondents are happier with the amount of time they have for non-work activities; 3. Many respondents say that their families support them in their professional life and that their organisations also recognize the importance of their personal life; 9. The organisations provide a 1-2 month paid sabbatical for those who have gone on maternity leave and for career breaks, i.e., for higher education. --- VIII. Conclusion The service sector is one of the fastest growing sectors and provides employment opportunities for a large number of women. The secret to work-life balance depends on field of work, family structure and financial position. Personal life and professional work are two sides of a coin that are difficult to separate and form a source of conflict. Organisations must strive to develop a special bond with their people, so that they will put more into their jobs and contribute positively. Professionals have to make tough choices even when their work and personal life are nowhere close to equilibrium.
Achieving "work-life balance" is not as simple as it sounds, especially for working women. However, organisations' efforts combined with family support can help women balance the personal front with professional work. Organisations need to create congenial conditions in which employees can balance work with their personal needs and desires. Successfully achieving work-life balance depends not only on organisations; similar efforts from the family are also desirable. Emotional intelligence is required to accomplish the day-to-day objectives of life, which is a challenge for everyone. It is the key to achieving the desired balance between work and life, which ultimately leads to success in professional as well as personal life. --- References
Indian families are undergoing rapid changes due to the increased pace of urbanization, modernization and a changing sociocultural environment. Indian women belonging to all classes have entered paid occupations. Demographic and societal changes, globalization and advances in technology are forcing businesses to transform the way they operate. This has opened new vistas, increased awareness and raised aspirations of personal growth, and, along with economic pressure, has been instrumental in influencing women's decision to enter the work force. Apart from that, growing needs such as caring for children and aging family members, the demand for a dual-income household, and increased healthcare costs also influence women to enter the workforce. As a result, within the existing family and societal setup, working women are overburdened and find it increasingly difficult to balance their work and life roles. At present, Indian women's exposure to educational opportunities is substantially higher than it was some decades ago, especially in urban settings, which provides opportunities to enter work and share family burdens. The work-life balance of women is a challenging issue for leaders and managers and has also attracted the attention of researchers. Work-life balance, in its broadest sense, is defined as a satisfactory level of involvement or 'fit' between the multiple roles in a person's life. The study of work-life balance involves examining people's ability to manage simultaneously the multi-faceted demands of life. Keeping this in view, we selected a study to analyze the work-life balance of women working in higher educational institutes in Guntur district of Andhra Pradesh.
Discovering Economic History in Footnotes: The Story of the Tong Taisheng Merchant Archive The recent Great Divergence debate, spurred by the provocative claim that living standards in 18th-century China - at least in the advanced region of the Lower Yangzi - may have been comparable to those of Northwestern Europe as late as the 18th century, has prompted a flurry of new research re-examining China's price and wage history in comparative perspective. 1 The debate, however, has also brought to the fore serious deficiencies in surviving Chinese historical statistics. Reviewing the existing evidence, Allen et al. point out that the claims of a higher living standard in 18th-century China "relied on indirect comparisons based on scattered output, consumption, or demographic data"; in contrast, "our knowledge of real incomes in Europe is broad and deep because since the mid-nineteenth century scholars have been compiling databases of wages and prices for European cities from the late middle ages into the nineteenth century when official statistics begin". 2 The nature of Chinese historical statistics itself raises a critical question germane to the core of the debate: could the paucity of statistics itself be a result of poor record keeping in historical China - which itself may be a reflection of the nature of her economy and society - or more a reflection of the poor state of academic scholarship and archival collection in China's subsequent tumultuous modern history? Can one surmise that the richly endowed Western historical statistics preserved from former times are themselves testimonials to the high level of economic development or even rationality in the West historically? 3 Given the critical importance of historical statistics in the current Great Divergence debate, it is surprising that the historiographical dimension of data issues has so far received scant attention.
In this article, we illustrate this thesis through our unique encounter - during the last seven years of our research - with the merchant account books of Tong Taisheng 统泰升 and our rediscovery of the original owner, or donor. The TTS archive - consisting of over 400 volumes for a single store - contains detailed records of actual market transactions not just in grain but mostly in non-grain commodities, and also includes local copper cash/silver exchange rates, from a largely unknown North China village township in Ningjin 宁津 county of Shandong province in 1800-1850 - a period before China's forced opening to the West. The original TTS record was used once by a group of eminent Chinese economic historians in the 1950s but has lain largely incognito since. This article represents the first of our series of systematic efforts to reconstruct, through both statistics and historical narrative, the history of the TTS archive, the TTS firm, Ningjin county, and the larger North China economy on the eve of the Opium War. The focus of our current work is on the archive itself and the people connected with it, from the initial donation, through preservation, to our rediscovery. As you will see, the history of the TTS archive and the story of the individuals involved is itself a miniature history of modern China, of tradition-bound elites and a new generation of modern intellectuals getting caught up in, and muddling through, one and a half centuries of ideological and political vicissitudes. It raises some important epistemological questions on the nature of historical evidence and statistical records in Chinese economic history. --- I.
The TTS Archive In a widely used statistical manual for Chinese economic history compiled in 1955 by Professor Yan Zhongping and ten other eminent economic historians, two tables and a figure provide relatively continuous annual series of copper cash/silver exchange rates and two price indices, for agricultural and handicraft goods respectively, for the period 1798-1850. These three pages of highly condensed statistical series stand out as a glaring anomaly in the dark alley of Chinese historical statistics. Despite the brevity of the explanation, they have not escaped the attention of researchers: the Ningjin series appeared frequently in some of the most influential works on China's pre-modern monetary sector and often served as the key systematic data series for evaluating China's balance of payments crisis caused by silver outflow, leading eventually to the fateful Opium War of 1842 - a watershed event in modern Chinese history. Embedded in the footnotes to these two tables are brief explanations of the statistical methodology used to construct the exchange rate series and the number of items included in the construction of the price indices. They also indicate that the original data were extracted from a grocery store called Tong Taisheng, located in the town of Daliu 大柳镇 in Ningjin county in the northern part of Zhili province. The footnotes mentioned that the original TTS archives were housed mostly in the National Library, with a small segment in the library of the Institute of Economic Research of the Chinese Academy of Social Sciences in Beijing. In 2005, we keyed in - just on the off chance - the TTS merchant accounts in the online catalogue of the National Library in Beijing and, to our complete disbelief, the title popped up on the screen.
Eventually, our archival compilation in both the National Library and the Institute library of the Chinese Academy of Social Sciences turned up 437 volumes of these account books for the period 1798-1850. Like all traditional merchant account books, the books are physically light, with paper bindings, approximately 20 cm square and 3-4 cm thick. They are string-bound and handwritten with a classical brush pen. Pages are not numbered or indexed. The number of pages and records varies across account books. In the Appendix, we present two photo pages from the account books. The first is an image of the cover of an account book; the second is an image of an account page with actual records. Table 1 provides a breakdown of all the volumes by decade. There are also detailed prices of about 40 or 50 types of commodities with similar degrees of detail. We are confident that careful research based on a systematic exploitation of this rich and high-quality data set could offer new insights on critical debates in Chinese economic history and global history. For example, the relatively complete and integrated nature of the TTS accounts allows an in-depth, primary-source-based study of the pre-modern Chinese accounting system. The consistent and high-quality time series of copper-silver exchange rates that can be reconstructed from the TTS can offer important clues to our understanding of the traditional Chinese monetary system and the impact of the opium trade and the silver outflow on the Chinese economy during this period. Finally, the systematic information on the volumes and frequencies of transactions at annual, monthly and daily frequencies can for the first time quantify the landmark study of traditional Chinese marketing structure by William Skinner. Transcribing and interpreting the account book material requires specialized learning and expertise on the part of researchers.
6 On the other hand, the challenge of deciphering these files also affords unusual insights into the internal logic and mechanisms underlying the pre-modern Chinese market, business organization, monetary system, accounting methods and even social customs. 7 Where does this archive come from? Who was its owner? Why was so little information divulged about it? Why was this record preserved in such exceptionally good and well-ordered condition? How did a pile of archives mentioned in some footnotes in 1955 survive through decades of political turmoil in Mao's era? --- II. The Re-discovery In April 2008, we visited Ningjin county and the towns of Daliu, Changwan and Chaihu. Daliu, where the TTS firm was located, was a small market town in Ningjin county, currently a county of Dezhou prefecture in Shandong province. It was about 240 kilometers south of Beijing, close to the border of Hebei province, east of the historical Grand Canal. With the massive building of rural highway infrastructure during the past two decades, commercial activities in these towns have largely shifted out of the traditional town center, called the "old street" 老街, in Daliu towards a scattering of stores and restaurants along a rural highway - modern, dusty and homogenous. What remained alongside the original "old street" were clusters of residences interspersed with a few shops, post offices and governmental buildings built or rebuilt largely during the Mao era. For the few locals with whom we conversed, the "old street" evoked tales of the 1950s rather than the 1850s. Our visit to the Ningjin county archival office turned up nothing on TTS. We located the Ningjin county gazetteer, which dated back to the reign of Kangxi. 8 Writing in 1935, Wan offers the following critical passage on the source of this archive: "While the Peiping library had long intended to collect account book materials, it was prevented from doing so due to its busy engagement in other priorities.
Last winter, the Library suddenly received a letter from Mr. Rong Mengyuan of Daliu town, Ningjin county. Mr. Rong indicated his willingness to offer his collection of old account books to the Library, which we very much welcomed. Mr. Rong noted that these account books contained information on the rural economy and commodity prices. He did not ask for any remuneration except for the shipping cost from Ningjin to Peiping. We are of course grateful for such a hearty donation. … It is reported that the account books arrived in a rather messy condition in two boxes. After a rough compilation by Mr. Zhao Jinghe, we arrive at a total of 145 volumes for the Jiaqing reign, and the final volume goes to the 30th year of Daoguang, covering a span of more than fifty years. Dating back more than 130 years from now, these account books are indeed a rare find." While most descriptions in both Wei and Wan matched what we have been able to find independently in the extant TTS account books, Wan's tally of all the volumes added up to a total of 468 volumes, more than the 437 volumes we have been able to locate so far. 10 The key man mentioned above, Rong Mengyuan, as it turned out, was no ordinary donor. In the PRC era, Rong was an eminent historian of modern China and an authority on the historical archives of the Qing and Republican periods. --- III. The Man behind the TTS archive The Rong genealogy was last printed in 1903, compiled as the culmination of six previous editions. It traced the lineage back as many as 16 generations over a span of 491 years, with editions updated in 1894, 1880, 1813, 1771, 1756, 1745, 1717 and 1719. The Rong family first migrated from Zhu Cheng 诸城 to Daliu in Ningjin county in 1404, during the early Ming dynasty. Starting as farmers, the lineage amassed a certain amount of wealth through diligence and thrift and by the third generation had begun to engage in a money-lending business as well as some charity activities in the local town.
By its sixth generation, the Rongs claimed to have accumulated over 300 mu of land. After some setbacks in family wealth, partly owing to a series of bitter legal disputes over financial matters with another lineage, the seventh and eighth generations made a comeback through commerce. 12 Like generations of successful merchant lineages in traditional China, the Rongs turned to investing in the education of their offspring to enter the highly competitive ranks of the civil service examination system, a critical step up the ladder of the Chinese political and social hierarchy. The effort seemed to pay off, as the genealogy reported steady progress, with members attaining the low-level degree of shengyuan and, from the ninth generation on, making successive entries in the ranks of the official examination system. Meanwhile, family wealth and business clearly stabilized with the rise in social and political status secured by these examination achievements. Moving into the 19th century - the period recorded in the TTS accounts - the Rong lineage's wealth may have peaked, as the 12th and 13th generations added newly purchased land of 800 and 300 mu to the family wealth. The Rongs were clearly the elite of the town, as a member of the 13th generation was the trusted person in town who would be called upon to mediate and resolve village disputes. The prosperity of the Rongs continued beyond the mid-19th century, the period in which the extant TTS archive ends. 13 As we were informed by Rong Weimu, the Rongs in the early 20th century allegedly owned nearly half of the houses in Daliu town. Besides the retail business, they operated a few cottage workshops in flour milling, vinegar processing, and textile handicrafts, and also managed some agricultural cultivation largely based on the use of long- and short-term laborers.
Like elites in traditional China, the Rongs' route to wealth and power was secured through generations of mercantile thrift and land acquisition and legitimized through their entries into the national civil service examination. 14 17 Given his political mishap, Rong's intellectual focus on historical archives, which is presumably more factual or "objective" than ideological, seemed like a viable career strategy. But how did one of China's most eminent archivists remain completely unassociated with the set of his family accounts that he himself had earnestly offered up in the 1930s? Rong Mengyuan died in 1985, leaving no trace or mention of the TTS archive in his own voluminous works on historical archives. Neither was his family, according to Rong Weimu, aware of the TTS archive. As none of the eleven authors of the Yan statistics volume currently survives, we cannot determine for sure whether Yan and his colleagues' reticence on the Rong origin of the TTS accounts was due simply to sheer neglect or to something else. 18 We believe, however, that some light can be shed on this mystery by looking at the change in the political climate and its effect on scholarship between the 1930s and the 1960s. --- IV. The Rise and Fall of an Archive Although the initial introduction of the Marxist framework of modes of production and stages of social development into Chinese historiography in the early 20th century was a relatively open and free intellectual endeavour, the framework itself quickly hardened into political dogma following the founding of the PRC in 1949. 19 The so-called relations of production and universal stages of social development in the Marxist framework of historical materialism turned into an ideology of class warfare that pitted the so-called oppressed against the oppressors, the exploited against the exploiters - with the former represented by the proletariat, the workers and peasants, and the latter by capitalists, merchants and landlords.
As is well known, by classifying people according to "birth origin", the scheme underpinned massive political persecution such as the anti-Rightist campaign of the 1950s and the Cultural Revolution of 1966-1976. 20 Clearly, Rong's privileged mercantile and landlord "birth origin" would do him no good in this scheme. He and his family were officially labelled "landlords". 21 Ironically, as if to extricate himself from his inglorious birth origin, Rong published an article in 1955 attacking the "birth origin" of Hu Shi, China's best-known liberal intellectual, who left the Chinese mainland after the CCP's victory: "How much land did Hu Shi's family own? He himself did not explain, but he did say that every autumn he would follow his grandmother to the field to supervise harvesting by tenants. Hence, his family indeed is that of a landlord…" "Hu Shi's family has three stores … Judging from his snobbish attitude of late, mercantile ideas must have had a large influence on him… Hu Shi clearly inherited the tradition of a bureaucratic-landlord-merchant family." Only two years later, Rong himself fell victim to the 1957 anti-Rightist campaign. An article published in the People's Daily, the Chinese Communist Party's official mouthpiece, denounced the then disgraced "Rightist", Rong Mengyuan: "Rong Mengyuan's anti-Party activities have been consistent throughout. Born into a landlord family, he joined the revolutionary cause in 1932, only to betray it at a critical juncture… By concealing his personal counter-revolutionary history, he sneaked back into the Party… He continued with his anti-Party activities in Yan'an in 1941 … only to be expelled… By the end of 1953, the Party criticized his factionalist anti-Party activities within the Research Institute… But in the end, it was to no avail, as Mr. Rong remained an inveterate anti-revolutionary and should be condemned as an imposter in the history profession."
It is striking to see that Rong Mengyuan's brief respite back in his hometown of Ningjin in 1932-35 - during which he donated the TTS archive to Beijing - was now trumped up as his "betrayal" of the Party "at a critical juncture." It is clear that by then Rong's one-time strategy of seeking safe haven in the "relative neutrality" of archival material was no longer enough to spare him from the political storm. Similarly, the political tension between the identity of the owner or producer of statistics and the nature of the statistics was foremost in the minds of Yan Zhongping and his team when compiling their statistics volume, published in 1955. Recounting in 1956 their experience of compiling this volume, Yan remarked: "Among our comrades, a minority believed that since foreign-language material was produced by imperialists, it cannot be reliable and should not be accepted, as these imperialists were speaking from the stance of aggressors. It should not be used even when no comparable Chinese records existed. This view, however, is narrow-minded. While duly recognizing the aggressive nature of the imperialists, they may still inadvertently divulge their criminal deeds." 22 Yan's seemingly comic defence of the use of non-Chinese-language sources was actually no laughing matter then. It was a flicker of sanity on the eve of China's descent into the abyss of the Great Leap Forward, when statistics could simply be concocted or fabricated. More importantly, whatever the truth may be, Rong's anonymity and Yan's reticence on the "birth origin" of the TTS archive turned out to be a blessing in disguise. While the TTS archive languished in dust for the next three decades, Rong himself - despite being labeled an outright "Rightist" - and his family, according to Rong Weimu, managed to lie low and underwent only relatively mild phases of persecution.
From the late 1970s, the arrival of a new era under Deng Xiaoping heralded a gradual but decisive shift away from Maoist radicalism. 23 As part of this reversal, the Deng regime also reined in class warfare and even sought to re-embrace the once denigrated and persecuted capitalists, "exploiters" and "oppressors". Like countless others, Rong Mengyuan re-emerged from his intellectual exile and re-established himself as an authority on Chinese historical archives, with a prolific publication record in the 1980s. The new era saw a revival of academic interest in traditional China's indigenous commercial tradition and in the exploration of private merchant business archives, often filled with tales of valuable archives discovered or rescued by sheer accident while others were lost through continued neglect. 24 While generations of scholars are set to benefit from the re-discovery of TTS and other archives, when Rong Mengyuan passed away in 1985, he himself may have harboured no pride or interest in his connection with that pile of family archives he had donated five decades earlier. It is curious to note that throughout the 1980s Rong Mengyuan remained a loyalist to the ideology of a bygone age, and his writings continued to be infused with the stridently leftist rhetoric of identity politics. In his 1983 book, he lamented recent attempts to revamp the reputation of Hu Shi as a scholar by reminding people of Hu Shi's past as a running dog of imperialism, feudalism, and bureaucratic capitalism. --- Conclusion From beneath the small-font footnotes emerges an extraordinary living tale of a private merchant archive owned by an ordinary merchant family in 19th-century rural China. The journey of a pile of traditional archival materials through its initial donation, subsequent anonymity and our rediscovery divulges a personal story of individuals surviving through contradictions, ironies and even betrayals.
It is a tale of a nation caught up in a manifest destiny to confront 19th- and 20th-century Western challenges, in the process of which she saw herself turned upside down several times over by the overpowering forces of ideology and politics, and her historiographical traditions ruptured, re-joined, and sometimes reinvented. The story of the Tong Taisheng archive offers powerful lessons on the nature and quality of historical evidence - quantitative or otherwise - used in debates such as the Great Divergence. It is important to note that the Beijing of the 1930s that Rong Mengyuan encountered saw the high tide of modernization ideology and social engineering based on the tools of statistical and social surveys. 25 In this regard, the discovery, preservation and utilization of the TTS archive was no accident, as the men whose hands had touched the archive - Rong Mengyuan, Wei Zeying, and Yan Zhongping - came of age in a new intellectual era that found new value in a pile of old private merchant archives beyond mere personal and familial nostalgia. Ironically, it was this vision of social engineering, pushed to the extreme by the Communist ideology of identity politics of the 1950s, that returned the origin of the same archive to incognito. What happened in China during the 1930s and 1950s shapes and reshapes our vision of history and records before the 1850s. Or, alternatively, visions and theories of history interfered with history. Hence, our knowledge of, and sources of evidence on, the past are shaped as much by how posterity studied the past as by the past itself - assuming there existed such an "objective" and "abstract" past. The preservation, compilation, utilization and ultimately the discovery or re-discovery of historical evidence are themselves profoundly dependent on the changing tempo of our research agendas, ideologies and paradigms. Large discontinuities and ideological reversals carry real consequences for comparative studies in the current Great Divergence debate.
Even in the case of TTS, which was "rescued" from anonymity, an entire three decades' worth of potential research scholarship was lost while the TTS remained largely unexamined, leaving Chinese economic history with a glaring statistical abyss, especially with regard to the current Great Divergence debate. Furthermore, the introduction of new Chinese writing scripts and a modern numeral and accounting system, initiated in the early 20th-century New Culture Movement and massively enforced through the PRC era, has rendered materials such as traditional merchant account books far less accessible to the average contemporary researcher. 26 All this predisposes our reconstruction of the past towards source materials recorded in the more familiar modern or - in the context of former colonies - "European" and colonial framework. These issues are not restricted to modern China alone, but rather are experiences shared by nations that have undergone abrupt revolutionary transformations - the 18th-century French Revolution and 2 Indeed, good-quality data for constructing basic Chinese economic statistics such as price indices or wages at the regional or national level for the 18th-19th centuries remain wanting. The only reliable benchmark, national-level Chinese GDP, is for the early 1930s; see Ma 2008. 3 This argument is echoed somewhat by the existence of far richer statistical records for territories colonized by Europe than for those untouched by colonization. See Mizoguchi and Umemura 1988 for Japanese colonial statistics of Taiwan and Korea. 4 The details of the accounts are presented in Yuan, Macve and Ma 2015 and also Yuan and Ma 2010. 5 See Brandt, Ma and Rawski 2014 on problems with official data in Imperial China. 6 See Yuan, Macve and Ma for details on the account books.
7 In this regard, it provides a rare opportunity to study Chinese economic history on its own terms, or - in Paul Cohen's famous phrase - to "discover history in China", purged of the possible Eurocentric or "colonial" bias of area studies derived from Western-language source materials or modern conceptual frameworks. 8 Ningjin County Gazetteer, vol. 2, pp. 25-27. most numerous, ranging from 35% in the reign of Jiaqing to 57% of the total number of firms in Daoguang; see Xu 1998, pp. 186-187. 10 On the donation of the TTS accounts, Wei added the remark that "…after the Rong family business declined from the reign of Tongzhi, these account books covering several decades would have looked like a pile of waste paper to laymen, or just good material for wallpaper." Yet alas, continued Wei, "thanks to the conservative and 'nostalgic' nature of our people, remarkably, this set of account books was preserved within the Rong family." 11 Rong was survived by four children. On May 3rd, 2012, we interviewed his son, Rong Weimu, who is currently a senior researcher at the same Modern History Institute and also serves as one of the editors of Materials on Modern History, the journal founded by his father. 12 The Rong genealogy noted in particular a member of the eighth generation "trudging through the muddy trading routes" to rebuild family wealth through commerce. 13 During the Guangxu reign, members of the 14th and 16th generations attained the much higher degrees of juren and jinshi within China's examination ranks. These may be signs that the Rongs were starting to gain a foothold in the higher echelons of the late Qing political hierarchy, as attested by a marriage liaison with a member of the lineage of Zhang Zhidong 张之洞, one of China's most powerful officials of the era. Based on the Rong genealogy and also our oral interview with Mr. Rong Weimu. 14 For Chinese elite strategies in traditional China, see the edited volume by Esherick and Rankin.
For the importance of political status and the civil service examination in imperial China, see Brandt, Ma and Rawski. 15 In an article commemorating Lu, Rong fondly recalled his encounter with his Marxist historian mentor; see Rong 1983. 16 According to Rong Weimu, Rong Mengyuan became entangled in a bitter dispute over the appropriation of a cave dwelling by Gao Gang 高岗, who was by then a very powerful Communist leader. The eventual intervention of Mao himself worked against Rong Mengyuan. As is well known, Gao Gang himself became a victim of the first wave of Communist purges in the early 1950s. 17 For the rise of Fan Wenlan and his personal connection with Mao Zedong, see Li Huaiyin, especially chapter 3. 18 We have good evidence that Yan Zhongping or his team knew of the Rong origin of the TTS archive. Yan Zhongping started working for the Social Science Research Institute of Academia Sinica in 1936, the same year as Wei Zeying. We found a research summary report published by Academia Sinica in 1936, which listed research on the TTS merchant accounts as one of its forthcoming projects; see Academia Sinica. 21 According to Rong Weimu, Rong Mengyuan's father, Rong Xinghuan 荣星桓, had been a sympathizer with the Communist cause in the early 20th century and sheltered the Eighth Route Army. After the Communist victory, Rong Xinghuan was classified as a "landlord". In a reversal of fortune, a long-term labourer who had once worked for the Rong family became a party official with a glorious 32 years of Communist party membership. But in the new China, the labourer looked after the elderly Rong Xinghuan, apparently to repay the past kind deeds of his former landlord in the old days. 22 See Yan 1956. It is also interesting to note that Yan went to the UK in 1947 on a three-year scholarship, where he systematically collected a large amount of English-language material related to the Opium War.
In 1950, Yan returned to the new China with all these materials but was able to make only limited use of them. 23 The most dramatic case is that of Rong Yiren 荣毅仁, the son of the illustrious Rong brothers who were China's legendary industrial tycoons in pre-Communist Shanghai, hailed as the "kings" of cotton and flour and the symbol of modern Chinese industrial entrepreneurship. After two decades of lying low as a denigrated former capitalist, Rong Yiren re-emerged in the 1980s as China's new patriot entrepreneur and rose to the political rank of Vice President of the nation. It is possible that the Rong Mengyuan lineage was also distantly related to the lineage of the Rong brothers, which could be traced back to Jining 济宁 of Shandong province and had migrated to Wuxi of Jiangsu province in earlier times. Based on http://baike.baidu.com/view/680816.htm, accessed Sept. 13th, 2013. 24 A case in point is the massive Shanxi merchant archival volume compiled by Huang Jianhui. He recounts how pages of the original account books of China's first Shanxi banking house, Rishengchang - now proudly displayed in the popular Shanxi bankers museum in the city of Pingyao, Shanxi province - were rescued in 1995 from the wallpaper used at the original site, which had fortunately survived the radical Cultural Revolution era. 25 Allen et al. utilized what seemed to be a large collection of merchant account books of a fuel store near Beijing, covering roughly the period 1790-1850, almost identical to that of the TTS accounts. Unfortunately, the existence and location of the original accounts remain unknown. See the discussion in Allen et al. 2012. 26 See Kaske 2007 for language reform in modern China. 27 For a vivid illustration of how our knowledge is shaped by archival survival, see the example given by Stephan Schwarzkopf: "Take as an example the well-organised and well-funded archive of the J.
Walter Thompson advertising agency, for a while the world's largest ad agency, at Duke University's Hartman Centre for Sales, Advertising and Marketing History. Almost all parts of the collections there are searchable to file level, many items have been digitised, and the archive gives generous bursaries to international scholars. The archive is conveniently located on a beautiful university campus in North Carolina, where people play golf ten months of the year. The sheer availability and convenience afforded by the JWT collection feeds into a discourse and a set of historical narratives which privilege American marketing and advertising expertise over that found elsewhere in the world. Put simply, if one only studies existing archival sources which are provided, cared for, sponsored and promoted by American organisations, then the course of global marketing history indeed appears to be dominated by American organisations." --- The Fourth Column: the ending balance is known as shizai.
The Tong Taisheng (统泰升) merchant account books from Ningjin county of northern China, covering 1800-1850, constitute the most complete and integrated surviving archive of a family business for pre-modern China. They contain unusually detailed and high-quality statistics on exchange rates, commodity prices, and other information. Utilized once in the 1950s, the archive was left largely untouched until our recent, almost accidental rediscovery. This article introduces this unique set of archives and traces the personal history of the original owner and donor. Our story of an archive encapsulates the history of modern China and shows how the preservation and interpretation of evidence and records of Chinese economic statistics were profoundly impacted by the development of political ideology in modern and contemporary China. We briefly discuss the historiographical and epistemological implications of our finding for the current Great Divergence debate.
Introduction Ongoing and emerging health challenges such as infectious disease epidemics, bioterrorism, antimicrobial resistance, and natural disasters require a coordinated response from a highly diverse, collaborative, and trained health workforce. "One Health" is a concept and approach intended to meet such demands. Though loosely defined, One Health is broadly described as "the integrative effort of multiple disciplines working to attain optimal health for people, animals, and the environment" [1][2][3][4] (World Organisation for Animal Health, n.d.; World Health Organization, n.d.). A One Health approach recognizes that complex health challenges are beyond the purview of any one sector or discipline working in isolation [5] and that a resilient health workforce must be capable of effective and collaborative prevention and detection of, as well as response to, emerging health challenges. A One Health approach therefore calls for collaboration across disciplines, sectors, organizations, and national borders in response to increasingly complex health challenges [1][2][4][5][6][7][8]. While One Health advocates increasingly support collaborative and multi-sectoral approaches to health challenges, no common language or metrics exist to uniformly describe and evaluate such efforts. Few studies explicitly analyze the factors and conditions that support effective One Health practices and collaborations. This hinders the ability of health professionals to learn from past experiences and improve upon current and future One Health policies, partnerships, and practices. This paper seeks to address this gap by analyzing and identifying factors that enable effective multisectoral collaboration and response to health events.
In this study, a multidisciplinary team of researchers reviewed a broad scope of literature describing collaborative and multi-sectoral approaches to past health events to understand how such collaborations are commonly described and evaluated and to identify and synthesize enabling factors for One Health collaborations. This paper identifies twelve factors related to effective One Health implementation and collaboration and concludes with a proposed framework for evaluating future multisectoral One Health collaborations. The ultimate aim of this work is to support and improve multisectoral preparedness and response efforts. --- Background on One Health Although its conceptual foundations date back hundreds of years, the formal global health construct known today as One Health was not officially recognized by international and scholarly bodies until 1984 [8]. The HIV/AIDS pandemic in the 1980s and the hantavirus outbreak in 1993 made clear that emerging disease threats can cross national borders, cultures, and species. With that came a broader recognition that animal and zoonotic diseases pose a serious threat not only to human health but to global health security. Policy makers and health practitioners looked to collaborative health efforts as a response to these emerging challenges [3]. The subsequent decades, marked by unprecedented global interconnectedness and human mobility [9], were associated with further threats to global health security, including manmade threats, such as the use of anthrax as a bioweapon, and emerging diseases such as SARS and avian influenza. These challenges necessitated more formal coordinated action from countries, regions, and the global health community at large.
In order to address the aforementioned challenges, major health initiatives and frameworks have emerged, such as the Global Health Security Agenda, the International Health Regulations Joint External Evaluations, the World Health Organization-World Organisation for Animal Health Operational Framework for Good Governance at the Human-Animal Interface [10], and the World Bank's One Health Operational Framework [11]. A common thread among these initiatives is the emphasis on multisectoral and transdisciplinary collaboration and a call for strengthening human, animal, and environmental health systems through a One Health approach. The global health community, including those already engaged in One Health, continues to grapple with fundamental questions about what characterizes a successful One Health approach, including how to set goals, establish frameworks, facilitate collaborative work, and measure outcomes [12]. Efforts to measure One Health programmatic outcomes and operations are necessary for the improvement of collaborative efforts. This article supports such efforts by 1) identifying key factors that support effective collaboration around health events and 2) proposing a framework for documenting and evaluating One Health collaborations in a more uniform and systematic manner. --- Conceptual framework: Understanding One Health collaborations Collaboration is an inherent and explicit part of the One Health approach, which calls for the active engagement of institutions, managers, and health practitioners across disciplines and sectors [1][2][3][4]. Despite widespread recognition of the importance of a One Health approach, there is a gap in the literature regarding what constitutes a successful One Health collaboration. This study draws upon the existing public affairs literature on collaborative, or 'cross-boundary', collaborations to understand which factors enable successful collaboration around health events.
Review of the literature on collaboration. Scholars of public policy, organizational partnerships, team science, and multisectoral collaboration have produced a series of theoretical frameworks to describe cross-boundary collaborations and identify which practices make them successful [13][14][15]. The focus on collaboration and partnership is not unique to any one discipline, yet there is very little cross-fertilization of research across disciplines. This research builds upon the existing literature on cross-boundary collaborations and applies it to One Health collaborations. The conceptual framework for this study focuses on three critical phases of a successful cross-boundary collaboration: adequate starting conditions, an effective process of collaboration, and attention to the outcomes of collaboration [16][17][18][19][20]. Starting conditions. There is a general consensus in the literature on cross-boundary collaborations that starting conditions (the conditions in place before any collaborative process begins) impact the process, structures, and outcomes of collaborative engagement. These include prior history, the environment, and relational dynamics [16,17]. The presence or absence of such conditions influences the successes and challenges encountered during the collaborative process. Process. Beyond starting conditions, many scholars point to the process of collaboration itself and the structures in place to support effective collaboration [13,14,20,21]. Although the terms used for collaboration vary, scholars focusing on the process of collaboration point to the importance of leadership, shared goals, trust and mutual understanding, institutional structures and resources, communication, and data management. Measuring outcomes. A review of the literature on collaboration suggests a lack of validated metrics for measuring collaborative effectiveness and performance.
Several scholars of cross-boundary collaborations, citing works published between 2005 and 2019, highlight the importance of measuring the outcomes of collaboration and lament the challenges of describing and evaluating collaborations in a uniform way [12,14,16,17,20,22-24]. This underscores the importance of understanding which factors support collaborative efforts and how teams can evaluate their performance and outcomes in association with these factors. The literature on cross-boundary collaborations, with its attention to the starting conditions, processes, and outcomes of collaborative approaches, informed this study of the factors that enable effective One Health collaborations. The following question guided this study: What factors enabled two or more disciplines or sectors to collaborate effectively in a health event? --- Methods --- Scoping review A scoping literature review was conducted to identify key factors that facilitate multisectoral collaborations around major health events, such as disease outbreaks, using published accounts of actual health events. A scoping review, in contrast to a systematic review, is well suited to a field such as One Health that is still relatively new and evolving, as the method allows for assessment of emerging evidence and serves as a first step in research development [25, p. 12]. Due to the lack of a common language and framework for describing One Health collaborations, this scoping review builds that foundation by providing a broad overview of One Health collaborations and supporting the synthesis of key concepts, evidence, and research gaps [26,27]. The scoping review was initiated by a multidisciplinary team in January 2017. The team members had expertise in veterinary medicine, public health, public policy, organization and management leadership studies, international development, monitoring and evaluation, and education.
Because the researcher is central to the methods and analysis of qualitative research, it was important to select a transdisciplinary research team that could work effectively to address the research questions and that represented the disciplines reflected in the transdisciplinary approach employed for this scoping review. Selection of relevant articles. The search included peer-reviewed articles available to date in the U.S. National Library of Medicine's PubMed database, identified using specified MeSH terms. Although the multidisciplinary research team has extensive experience in One Health, they were not trained in sensitive search strategies [26]. The research team thus elected to work with a University of Minnesota research librarian to develop MeSH terms for this study. Table 1 provides a list of the key terms used to identify articles discussing multisectoral health events and collaborations. To avoid tautology, it was a deliberate decision not to use "One Health" as a search term. Instead, drawing upon the researchers' extensive experience in One Health, various terms were used to describe One Health and similar multidisciplinary and cross-sectoral health collaborations. The underlying assumption was that any articles explicitly addressing One Health would be captured using these key terms. This initial MeSH search identified 2,630 non-duplicated articles. This scoping review was an inductive study of the literature and was conducted in order to support more hypothesis-driven research for One Health. By design, the authors elected to limit this literature review to the PubMed database at the outset of the study. PubMed is a peer-reviewed and peer-led database.
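A search strategy like the one described here can be expressed programmatically. The sketch below composes a PubMed query from MeSH-style and title/abstract terms and builds a request URL for NCBI's public esearch endpoint. The terms shown are hypothetical stand-ins, not the study's actual Table 1 terms, and the code is illustrative rather than the authors' method.

```python
# Illustrative sketch of composing a PubMed query from MeSH-style terms.
# The terms below are hypothetical stand-ins; the study's actual search
# terms are listed in its Table 1 (not reproduced here).
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query(mesh_terms, free_terms):
    """Combine controlled-vocabulary and free-text terms with OR within
    each group and AND between groups."""
    mesh = " OR ".join(f'"{t}"[MeSH Terms]' for t in mesh_terms)
    free = " OR ".join(f'"{t}"[Title/Abstract]' for t in free_terms)
    return f"({mesh}) AND ({free})"

term = build_query(
    mesh_terms=["disease outbreaks", "intersectoral collaboration"],
    free_terms=["multisectoral", "cross-sectoral"],
)
url = ESEARCH + "?" + urlencode({"db": "pubmed", "term": term, "retmax": 100})
print(term)
```

Separating query construction from retrieval keeps the search string inspectable and reportable, which matters for a scoping review whose searches must be reproducible by other researchers.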
Articles are selected and included in PubMed based on scholarly and quality criteria by literature review committees and are tagged by keyword and by article structure, contributing to more accurate retrieval than other databases; accurate retrieval helps ensure that search results are reproducible and reportable, which is critically important for a scoping review of the literature, in which other researchers, wherever they are located, must be able to repeat the study. The decision to use one database reflects the exploratory nature of this study and the authors' intent to propose further hypothesis-driven research that may include additional databases. This methodological choice is in line with Arksey and O'Malley, who attest that decisions must be made at the outset of a study to clarify the coverage of the review in terms of time span and language [26, pp. 23-24]. Citations and abstracts of these articles were screened in two phases. The articles were reviewed for inclusion based on the criteria outlined in Table 2. In the first screening, 179 abstracts met the initial inclusion criteria, and the full articles were procured and reviewed. In the second phase of screening, two further criteria were added to better achieve the scoping review objectives. The research team divided into transdisciplinary pairs, each including a reviewer from the health sciences and one from the social sciences. The articles that met the initial inclusion criteria were divided among the team members and then independently reviewed according to the modified screening criteria. Articles were included if both reviewers agreed that they met all initial requirements. In instances where the transdisciplinary reviewers did not agree, the articles were brought to a full research team meeting and reviewed jointly until consensus among all researchers was achieved.
This same method of collaborative review was used for the second round of screening and resulted in 50 articles for the final analysis. The PRISMA diagram below illustrates the article search, screening, and review process. Analysis. The interdisciplinary team conducted an analysis of the 50 articles that explicitly addressed multisectoral collaboration in response to an actual health event. Each reviewer coded approximately 5-10 selected articles using the qualitative data analysis software MaxQDA [28]. Descriptive codes were identified in advance to ensure that baseline data reflected the One Health aspects of the articles reviewed. All other codes emerged from the data using a grounded theory approach [29,30]. Preliminary and axial coding procedures, outlined in the following section, ensured that inductive and deductive thinking could be related. Preliminary coding. A set of predetermined, descriptive codes was used to denote the location and nature of the health event in the articles, including specific infectious agents, relevant disease vectors or hosts, and the various entities involved in the collaboration. Each paper was coded for the predetermined codes outlined in Table 3. Predetermined codes were also used to identify the entities involved in each health event response. The team used the code "roles" to identify individuals or groups who participated in the coordinated response in a formal role based on individual expertise and formal training. While many of these roles represent professions in the health sciences, this category also included representation from the social sciences, media and community relations, government, and engineering. Other articles focused on types of training, identified by the research team as "disciplines", or on specific professions. A third type of classification in the literature was a more general categorization of the sectors involved, such as the traditional designations of public, non-profit, and private/for-profit.
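The dual-reviewer screening logic described above (independent review by a transdisciplinary pair, with disagreements resolved by full-team consensus) can be sketched as a small decision procedure. This is an illustrative sketch, not the authors' code; the reviewer functions and article records are hypothetical stand-ins.

```python
# Illustrative sketch (not the authors' code) of two-reviewer screening
# with consensus resolution, as described in the methods above.

def screen(articles, review_a, review_b, resolve_by_consensus):
    """Keep an article if both reviewers agree it meets the criteria;
    send disagreements to the full team for a consensus decision."""
    kept = []
    for article in articles:
        a, b = review_a(article), review_b(article)
        if a and b:
            kept.append(article)          # both reviewers include
        elif a != b:
            if resolve_by_consensus(article):
                kept.append(article)      # team resolves the split
        # both exclude: article is dropped
    return kept

# Toy example with hypothetical reviewer decisions per article.
articles = [
    {"id": 1, "a": True,  "b": True},   # both include
    {"id": 2, "a": True,  "b": False},  # disagreement -> consensus
    {"id": 3, "a": False, "b": False},  # both exclude
]
kept = screen(
    articles,
    review_a=lambda art: art["a"],
    review_b=lambda art: art["b"],
    resolve_by_consensus=lambda art: art["id"] == 2,  # team includes #2
)
print([art["id"] for art in kept])  # [1, 2]
```

Requiring agreement before inclusion, with an explicit escalation path, is what makes the reported funnel (2,630 to 179 to 50 articles) auditable at each stage.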
Axial coding. Axial coding was used to construct linkages between "data sets" or, in this scoping review, articles regarding intersectoral collaboration. Axial coding is a qualitative research technique for relating data in order to reveal codes, categories, and subcategories, as well as patterns in the data [35]. This grounded theory approach is an iterative process that combines inductive and deductive thinking. Each article was first coded to identify any area of text where the authors analyzed collaboration around a specified health event. In this process, it quickly became apparent that the review team would need to differentiate between actual and hypothetical forms of collaboration. All articles included in the analysis at this stage were retrospective analyses of actual health events, yet many were prospective in their analysis and discussion. For example, several articles included suggestions about what should happen in an ideal scenario, rather than what occurred in practice, thus leaving out key details of the actual event. Therefore, a first round of organizing codes differentiated between collaborations that actually happened and ideal scenarios or hypothetical lessons, allowing the research team to focus the analysis on what actually happened. The text was further coded to reflect whether the authors were reporting a success factor of collaboration or a challenge of collaboration. Both the successes and challenges reported in the literature were related during the grounded theory thematic analysis and informed the final thematic results. After the first round of axial coding was conducted to organize the data, the authors employed a deductive framework developed from the review of the literature on multisectoral collaboration [13]. Aligned with this framework, the research team distinguished between starting conditions for collaboration, the process of collaboration itself, and the outcomes of collaboration.
Finally, the review team re-examined the passages coded as "actually happened" and "successes". These codes were then related to the deductive codes of starting conditions and process-based factors. An Excel table was used to organize the axial codes into a table of final results. Limitations. The limitations of this scoping review are three-fold. First, the analysis relies on peer-reviewed publications alone, which may under-represent collaborative efforts that are more commonly reported in the grey literature. Future work may be expanded to include these sources. Second, there was no consistent framework or language for reporting the successes or challenges of collaboration, and thus important content may have been missed during the search and review [36]. The scoping review team tried to overcome this with two strategies: building an expanded list of search terms and conducting an iterative review process using two independent transdisciplinary reviewers. Both methods helped offset this limitation and minimized the likelihood of missing specific content. Third, the researchers could not identify specific metrics for evaluating performance and collaboration in the literature, meaning that no evaluation baseline was available. However, the research team believes that the final subset of articles represents a diverse cross-section of transdisciplinary efforts around emerging health events. --- Results The scoping review yielded 50 peer-reviewed publications explicitly addressing multisectoral collaboration in response to an actual health event. This section describes the nature of the One Health collaborations analyzed as well as the various factors that enable One Health collaboration. --- Descriptive results Types of One Health events analyzed include natural disasters, infectious disease outbreaks, endemic disease, bioterrorism, and biosecurity preparedness.
In each of these cases, the underlying multisectoral collaboration was either a preparedness or a response effort. The sample included One Health events from around the world. Most articles addressed health events in Europe/Eurasia, the Americas, and Asia. Less represented in this sample were health events taking place in Africa, Oceania, and the Middle East. Most health events involved a specific infectious agent, while the remaining 3% focused on broader health challenges such as hospital infections, pest management, or tsunami response. A total of 67 different infectious agents were coded. Among the infectious agents identified, 58% were bacterial, 40% were viral, and 2% were protozoal. 39% of these agents primarily affect humans and 33% are predominantly animal-related. 16% of the agents were food- and water-related, 10% were insect-related, and an additional 2% were related to the environment. Overall, 60% of the infectious agents were considered zoonotic, meaning they spread between humans and animals. Relevant disease vectors or hosts represented in more than one publication included bats, cattle, poultry, horses, swine, humans, mosquitoes, and midges. Involved parties or entities played varied roles and represented diverse disciplines and sectors, as illustrated in Table 6. --- Thematic results Thematic findings are presented according to the One Health collaboration framework, which distinguishes between individual, organizational, and network factors that enable multisectoral and transdisciplinary collaboration at the onset of and in the process of addressing a One Health event. The team ultimately created organizing categories that reflected the individual, organizational, and network levels of collaboration.
These categories were informed by a review of the literature; for the purposes of this discussion, network is defined, following Emerson and Nabatchi [18, p. 2], as "the processes and structure of public policy decision making and management that engage people constructively across the boundaries of public agencies [organizations], levels of government, and/or the private and civic spheres in order to carry out a public purpose that could not otherwise be accomplished." Within each level, the review team created groups of subcategories to further organize codes. The research team identified 12 key factors that support successful multi-sectoral collaborations around major health events. At the individual level, these factors include 1) education and training and 2) prior experience and existing relationships. Organizational factors include 3) organizational structures, 4) organizational culture, 5) human resources, and 6) communication. Finally, network-level factors include 7) network structures, 8) relationships, 9) leadership, 10) management, 11) available and accessible resources, and 12) the political environment. These individual, organizational, and network factors were then further characterized according to their relevance at the start of a collaboration ("starting conditions") or during the process of collaboration ("process-based" factors). The organizational thematic factors were relevant to both starting conditions and process-based factors, so they were not separated. The final results of this literature review are thus presented in Table 8. The researchers also coded each paper for outcomes. Of all the articles coded, only four reported on outcomes of collaborations. The outcomes reported included cost reduction; decreased mortality; decreased morbidity; multisectoral development opportunities resulting from the collaboration; improved safety; and effective use of available resources. --- Table 8.
Final axial coding process included both inductive and deductive codes and reflects emerging themes for successful collaboration. Factors are grouped by level (individual, organizational, network) and, where the distinction applies, by starting-condition versus process-based factors.

Individual factors

Starting conditions:
- Education and training: preemptive technical training and continuing education [37-45]; disease-specific technical training [34,45,46]; preemptive collaborative training [47,48]; strong public-sector-led training [39]; training and capacity building as a platform for better collaboration during outbreaks [49]; NGOs supporting government through staff training [50]; participatory epidemiology training [51].
- Prior experience and existing relationships: pre-existing multisectoral relationships [45,52-55]; previous experience collaborating on health events [34,56,57].

Process factors:
- Ad hoc "just-in-time" training: shared training and organizational alignment; aggressive, rigorous, just-in-time, and critical trainings for key positions and critical events, with monthly follow-up meetings to support compliance [31,58]; training and capacity building as a platform for better collaboration during outbreak response [49]; instituting multisectoral disease-specific training and ongoing training for new and existing systems [39,45].

Organizational factors (relevant to both starting conditions and process)

- Structures (policies and protocols): shared response guidelines [42,50]; structures frequently included policies and protocols [59,60]; reporting, management protocols, task management, response plans, and communication strategies [34,61]; infection planning, control, and traceback procedures [62].
- Structures (systems): reporting and laboratory systems [59]; surveillance systems [41,58,59]; planning and iterative improvement of systems [46,48,60]; information management systems and databases [41,48,63,64]; information sharing [45,48]; tool sharing during response [65]; lab systems in place [59]; online system for human resource recruitment [45]; intentional multidisciplinary engagement and collaborative capacity [43,48,66,67]; standard operating procedures [55]; interoperability [42]; needs assessment and prioritization [38,48].
- Culture: leadership to support the iterative and developmental review of collaborative processes [58]; strong, engaged leadership [32,35,52,68]; accountability and ownership [67,68]; cultural engagement, diversity, and involvement of the community [67,69]; trust [38,41,49,70]; transparency [31,34,61]; the need to understand each other's processes [53,70]; systems-based thinking [34,48]; cultural awareness and engagement of diverse stakeholders to reflect community needs [53]; credibility [38].
- Human resources (prior experience and relationships): existing relationships [49]; institutional knowledge [31,45]; revising and revisiting mandates based on lessons learned [37,71].
- Human resources (staffing, roles, and responsibilities): clearly defined roles and responsibilities [35,42,65]; resources available and accessible [35,45]; informed staff who are aware of the systems in place, with increased engagement [31,45].
- Human resources (reflexive workforce): reflexive human-resource protocols to ensure positions are adequately filled and workers are incentivized [31,57]; a reflexive approach [31,45]; adaptability to rapidly changing contexts [42]; rapid start-up response and shared response guidelines [42].

Network factors

Starting conditions:
- Network structures (structures and coordinating mechanisms): multisectoral coordinating mechanisms and platforms for engagement [34,41,45,52,60]; memoranda of understanding, terms of reference, or bilateral agreements to support the development of existing relationships that promote ongoing engagement [41,45,48,72]; use of the incident command system [60]; creating shared protocols as a platform for scientific engagement and information and tool sharing during response [45,65]; reporting structures [49,60]; policies, including institutional and nation-to-nation or regional agreements [45,49,58,72]; basic public health and infection control measures, including contact tracing, infection control procedures, and quarantine [62]; joint task forces and bilateral agreements, i.e., the cross-border task force and bilateral agreement between public hospitals [42,48,72]; jointly developed procedures to ensure coordinated investigation and cross-sector data exchange [72]; presence of a lead agency or task force [41]; establishing committees and subcommittees [48,73].
- Network structures (established roles and responsibilities): clearly defined and previously established roles and responsibilities [34,42,65,72]; a framework with clearly established partnership roles and responsibilities [42]; identification of an inter-agency or interdisciplinary liaison [31,73].
- Network relationships (preemptive planning): preemptive planning for potential disease threats [45]; creating common goals across the network [47]; setting goals [34]; local preparedness and logistics [43].
- Network relationships (relationships and partnerships): established or pre-existing relationships and partnerships [45,55,74]; an established forum for information sharing, developing relationships, and building capacity [49]; partnerships with clearly defined roles and responsibilities [40,42,49,54]; partnerships including public-private partnerships [49], NGO and donor partnerships [42], and training and capacity-building partnerships [40]; partnerships with community centers that work with vulnerable populations [59,75]; partnerships with external or global organizations to support response [62,65]; partnerships with experts [56,61]; partnerships with patients and their families [35]; linking researchers with community representatives [51]; public-private partnerships and public engagement [39,43].
- Network relationships (diverse and inclusive stakeholder engagement): cultural awareness, engagement, diversity, and community engagement [53]; the need for "an expanded network of partners that includes full representation from all regions, and possibly other disciplines" [37]; diverse representation and inclusion within collaborative platforms and networks [37,45,52,56].
- Existing resources (human resources and skilled professionals): resources available and accessible, including human-resource allocation and existing relationships [35,44,45,54,77]; repositioning of supplies to high-risk areas [41].
- Existing resources (financial resources and funding): access to regional and international investors [49]; third-party coordination supporting public-private mixed projects with financial support [31,39].
- Political environment: political will to aid the development and institutionalization of effective collaborative structures [41,48,65]; political support for empowered decision making [72].

Process factors:
- Network leadership: supporting networks to identify a lead agency [41,52]; promoting information sharing and joint decision making across the network [49,60,65]; joint decision making and joint planning [60]; strong and engaged leadership [52]; multisectoral partners working together toward a common goal [47]; strategic risk communication with leadership [45].
- Network management (task management): task and case management through multisectoral coordinating mechanisms [41]; convening regular multisectoral meetings [53,58,60]; shared response guidelines [52]; management protocols [58]; rapid start-up response [42]; technical discussions held with the community to support management systems [51].
- Network management (awareness): awareness of systems in place, education and awareness, coordination, and multidisciplinary information and data sharing [31,38,44,55,60,70]; increased engagement [31,45].
- Communication: joint and coordinated public communications [60,70]; health-threat communication including early notification [49]; team and internal communication including data and information sharing [41,76]; public communication including public awareness [54]; public release of risk-analysis reports [77]; joint interviews with stakeholders [70]; finding common ground, especially in regions of conflict, to ensure health equity [49]; sharing perspectives [53]; behavior-change communication [41]; effective information dissemination [38]. Characteristics: frequent and honest [44,45]; timely and consistent [45]; reflexive and flexible [59]; iterative feedback [53]; clear purpose [31,44,70]; prioritized (risk-based) [45]; trust [49]; interdisciplinary [31,53]; contextualized [51]; streamlined [54,70]. Methods: communication through multisectoral coordinating mechanisms, including pre-meetings, data collection and sharing, and forums for information sharing [48,58]; incident-command-system methods supporting multisectoral communication and effort [60]; regularly scheduled, multidisciplinary, and follow-up meetings [43,48,53,58,60]; established clear lines of communication [31,43,51,77,78]; a diversity of methods and platforms, such as press briefs, websites, TV, newspapers, teleconferencing, listservs, available contact lists, local, regional, and cross-border meetings, and periodic reporting [44,45,49,53,58,62,77,78].
- Ongoing stakeholder engagement: engagement of diverse stakeholders to reflect community needs [53,75]; community engagement around prevention and control activities and biosecurity measures [51]; a bottom-up approach with involvement of all levels and champions or advocates [52,55]; action plans agreed upon with the community, i.e., for planning and implementation [51]; engagement of the public, community, local authorities, government agencies, NGOs, and patients [45,49]; engagement of public health agencies and programs, travelers, global partners, and federal and non-federal agencies [45]; civil-military cooperation, with military or foreign-military involvement providing necessary support for other sectors [39,71,79].
- Monitoring and evaluation: monitoring goals [35]; iterative review of collaborative processes [55,60]; monitoring and evaluation to show whether the outcomes of interventions were beneficial [31,45,48]; research to understand outreach effectiveness [38].
- Resource mobilization and allocation (material distribution): established supply locations (a standardized, accessible, risk-based strategy), with a subcommittee assigned to monitor supplies [41]; accessibility, standardized location, allocation, flow, and product deployment [34,68,80].
- Resource mobilization and allocation (human-resource mobilization): reflexive human-resource protocols to ensure positions are adequately filled and workers are incentivized [31,57]; additional military support allowing struggling organizations to leverage support and stay involved [71]; online recruitment [45].

https://doi.org/10.1371/journal.pone.0224660.t008 --- Discussion In this discussion, the research team suggests 12 thematic factors that may be used by practitioners involved in One Health activities to more systematically assess the successes and challenges of multisectoral collaboration, including those contributing to successful outcomes. Further research is needed to refine and validate these factors and ultimately support more uniform and rigorous assessments of One Health collaborations. --- Collaborative success factors categorized as starting conditions or process-based factors The axial coding process allowed factors reported to facilitate or discourage successful collaboration to be categorized as either a relevant starting condition of collaboration or as relevant to the process of collaboration.
During the data analysis, certain themes within each category of individual, organizational and network factors emerged as relevant to "setting the stage" for effective collaborative processes, while other factors were essential to maintaining the process of collaboration itself. The researchers found that this distinction was critical to understanding how successful collaborative processes are initiated. The starting conditions presented in this paper represent the collaborative preparedness and planning necessary to support effective One Health processes. In addition, the process of collaboration allows for the emergence of new ways of collaborating. This symbiotic relationship between starting conditions and process allows us to view the entire system of collaboration: starting conditions influence the process of collaboration, and the process itself can lead to improvements in structures and processes that, in turn, inform improved starting conditions. This cyclical and emergent process is inherent in collaboration and must be accounted for when considering evaluation and systems-based improvements. Individual factors. Relevant success factors at the onset of a One Health event include an individual's education and training, as well as prior experience and existing relationships. Many authors identified existing or previous education and training as enabling factors for collaborative success [37,[40][41][42]44,47,49,51]. Formal technical education and training of individual workers prior to a health event was critical to prepare the necessary human resources for response efforts. Authors noted that foundational technical training during an event was often not possible [41,42], but that preemptive and collaborative planning did support the development of key relationships and, in some cases, the development of shared protocols used in the response.
The absence of formalized training opportunities before an event, both individual technical training and collaborative training, was frequently reported as a gap and a challenge to effective One Health response [40][41][42]49]. Shared competencies were suggested as a strategy for standardizing protocols and performance across multiple individuals and organizations [49]. Multiple sources also reported the importance of prior experience in collaborative response efforts and how this established existing relationships, both formal and informal, to support the work [34,45,52,53,56,57]. When instituted before a health event occurs, these starting conditions were reported to support more effective collaboration processes. Individual factors that supported the process of collaboration were most frequently reported as workers having access to necessary education and training that was available ad hoc or as "just in time" training to support operations during the health event. Examples reported include the use of shared training across organizations to additionally support institutional alignment and partnership with community-level organizations to provide training [39,42,49]. Many of these trainings were reported to be rigorous and responsive, with continuous follow-up to support compliance [31,45,58]. Williams et al. [45] discussed how ongoing multisectoral, disease-specific training supported workers to operate within new and existing systems while simultaneously sharpening their technical competence. These training and capacity-building opportunities were reported to provide a platform for better collaboration for outbreak response [49]. However, ad hoc trainings do not replace or diminish the need for foundational technical training, as formalized education and training were reported as a critical starting condition to facilitate quick mobilization in the case of a health event.
Our literature review uncovered roles both for strong university-based education and training and for ad hoc or "just-in-time" training that can meet immediate operational needs during a process-based response. Organizational factors. Factors reported to enable organizational-level collaboration were broadly applicable to both the starting conditions and the processes of collaboration. Organizational structures, culture and resources were cited as important elements for creating an enabling environment for effective One Health collaboration. Organizations serve to connect the individual worker with a network of One Health actors. The organizational structures that support collaboration were often discussed as a success factor. These structures include, but are not limited to, the policies and protocols or systems established within organizations to support technical implementation and collaborative efforts. Policies and protocols reported to be supportive included technical guidelines and standard operating procedures, as well as management, response and communication strategies and protocols [34,42,50,59,60,62]. In addition, organizations reported the need for functional systems for information and resource management, sharing, and reporting of both surveillance and laboratory results [41,43,45,48,55,59,63,64,66,67]. These systems were reported to benefit from being adaptive, flexible/reflexive and improved through iterative feedback and monitoring and evaluation [38,46,48,60]. Organizational culture was reported in multiple key areas [31,35,38,41,48,49,60,61,[68][69][70]81]. The role of organizational leadership was discussed at length in many of the reviewed publications. Authors recognized and identified the importance of having strong and engaged leadership [31,34,52,68] and the need for leadership to support the iterative and developmental review of collaborative processes [60].
In addition, organizations benefited from having a culture that supported accountability, ownership, cultural engagement and diversity [53,68]. Trust and credibility were consistently reported as a key element of organizational success [38,41,49,70], as was the need for both an understanding of each other's processes and systems-based thinking [34,48,53,70]. Authors reflected on the importance of cultural awareness, transparency of communication processes [31,34,53,61] and the engagement of diverse stakeholders who were able to reflect community needs [53]. Human resource-related factors appeared in all three levels of analysis. Research suggests that workers need to be trained at an individual level, have defined roles and responsibilities at an organizational level, and need to be able to mobilize their efforts at a network level. At an organizational level, human resources are made up of individual contributors but also function as collective entities; employees' prior experiences, existing relationships, and collective institutional knowledge [31,34,45,49] serve to benefit the organizations in which they work. Clear roles and responsibilities were consistently reported [34,42,45], as well as awareness of systems in place to support ongoing engagement, operations and information sharing [31,45]. Additionally, several authors highlighted the importance of a reflexive workforce, i.e. human resources that were readily available and could be mobilized quickly and efficiently to respond to a health event in a rapidly changing context [31,42,45,57]. Network factors. Starting condition factors reported to enable collaboration at the network level included network structures, existing relationships, available resources in the face of a health event, and the political environment in place to support these efforts.
Pre-existing network structures were reported to provide a foundation for effective collaborative efforts to occur across participating organizations. Established multisectoral coordinating mechanisms (MCMs), also referred to as One Health platforms or joint task forces, were often reported as key to assisting with collaboration across a network [34,41,42,45,48,52,60,72]. Organizational and network structures provided operational standards that crossed relational and organizational boundaries at all levels of the system (individual, organizational and network), which supported the development of formal relationships at each level. Analysis suggested that these systems and relationships need to be in place before the health event. MCMs provide a formalized operating foundation to which organizations and individuals can contribute, and formalized roles and responsibilities supported effective human resource mobilization in both organizations and networks [34,42,45,72]. These structures were often supported by formal policies or agreements such as bilateral agreements or Memoranda of Understanding [41,45,72]. In addition, operating procedures such as the Incident Command System also supported effective mobilization of multiple organizations within the MCM [60]. Finally, the importance of formal structures was repeatedly emphasized as a response to "lessons learned" during challenging responses. Conversely, a lack of existing structures was reported to prevent efficient multisectoral engagement in the preparedness and response to health events [37,42,71]. Several sources indicated that reporting structures and policies at local, regional, national and international levels support continuity of response and effective implementation in response to health events [45,48,49,58,60,62,65,72,73].
These reporting structures and policies allowed for information flow between stakeholders, and the coordination of response efforts across a diversity of individuals and organizations participating in preparedness or response efforts [31,49,58,65,74]. Established structures created a foundation for network relationships that support effective outbreak response to a health event [31,40,42,45,47,49,54]. Development of formal and informal relationships prior to a health event allowed individuals, organizations and networks to respond more effectively once an emergency arose. The existence of structural agreements in any form, such as MCMs, MOUs, shared standard operating procedures or bilateral agreements, was reported to support the further development of existing relationships and to promote ongoing engagement prior to and throughout a health event [55,74]. Preemptive planning for potential disease threats was reported to strengthen connections and relationships and support multisectoral disease training, sometimes leading to shared protocols [45]. Additionally, the creation of common goals [34,47] and clearly defined, previously established roles and responsibilities for individual actors and network partners were reported as necessary in network operations [34,40,42,45,49,54]. Cultural awareness and the inclusion of diverse stakeholders from government, nonprofit, and private sectors, from the national to the community level, were consistently reported as success factors for collaborative efforts when included from the start [33,34,37,42,51,52,56,59,61,75]. Availability of resources, including human resources, that can be easily and efficiently mobilized in a health event was considered an important factor for response [31,34,37,41,44,45,49,52,54,74,82]. Authors also noted the importance of a supportive political environment to aid in the development and institutionalization of effective collaborative structures [41,48,65,72].
A supportive political environment was reported to enable the flow of available financial, human and material resources and to empower decision making [72]. Readily available resources supported rapid mobilization of collaborative efforts when a health emergency occurred. This is particularly impactful given that the absence of these resources and actions was noted across the literature as a challenge to effective health response. Network leadership and management processes were critical to effective multisectoral response efforts. Leadership engagement during a health event enabled the mobilization of, and necessary support for, management processes. By utilizing existing structures and decision-making power, leaders and lead agencies can support managers and the process of management across organizations and networks. Emergency response protocols, such as the ICS, were frequently reported as mechanisms to this end by concretely providing a leadership and management structure to support ongoing multi-organizational response. The ICS was particularly useful for identifying a lead agency and establishing structures for regular meetings and communications. In the process of collaboration, relevant network factors included network leadership, management, and the effective and efficient mobilization of resources for response. For example, strong and engaged network leadership was noted as an important success factor for collaboration. Factors reported to support network collaborations, when established prior to a health event, included identifying a lead agency [41,52], promoting information sharing and joint decision-making across a network [49,60,65], and convening regular multisectoral meetings [53,58,60]. In addition, strong leadership was integral for strategic risk communication across the network [45].
Effective network management during an outbreak was reported in the areas of management practices, monitoring and evaluation (M&E), communication, awareness and ongoing stakeholder engagement. Management practices included case and task management through the MCM [41], regularly scheduled meetings [53,58,60] and development of shared response guidelines and management protocols across the network [58,81]. These management practices, when paired with existing structures, can support rapid start-up of response in the face of health events [51]. Monitoring and evaluation allowed for the iterative review of the collaborative processes during response efforts, as well as the outcomes of the collaborative process [31,34,38,45,48,55,60]. Monitoring and evaluation was reported as integral to being able to show whether the outcomes of interventions were beneficial [31,38,45]. The importance of communications cannot be overemphasized; communication was repeatedly reported as an integral factor for building relationships, trust and a supportive organizational culture, and for contributing to effective response processes. Both the characteristics and the methods of communications were highlighted as important. Characteristics of successful communication included being frequent and honest [44,45]; timely and consistent [45]; reflexive and flexible [59]; prioritized [45]; and streamlined [54,70]. Additionally, characteristics included the need for communications that build trust [49] and have a clear purpose [31,44,70]. These elements were widely reported to support effective communication within and across organizations [31,41,44,48,49,51,53,54,58,60,70,77,78,80]. Communication was deemed most effective when it was regular, frequent, and designed to foster awareness and support the engagement of a range of stakeholders, from local through national, regional and international levels.
The MCMs, or the use of ICS, were often cited as important organizing structures for ongoing communication during a health event, supporting meetings, data collection and information sharing [43,48,58,60], underscoring the importance of starting conditions to support communications. Multiple methods of communication were reported, including electronic communications, listservs, contact lists and regular meetings; in many cases these were supported through existing MCMs [48,58]. Monthly meetings [53,58,60] and establishing clear lines of communication [31,44,51,77,78] were reported as critical. These were supported by a variety of platforms such as press briefings, websites, television, newspapers, teleconferencing, listservs, available contact lists, local/regional/cross-border meetings and periodic reporting [44,45,49,53,58,62,77]. Additionally, leadership and management processes played a key role in supporting or challenging communication; high-level support, resource allocation, and use of good practices across an organization are foundational for good internal and external communication. Closely linked with communication was the reported importance of building shared awareness and diverse stakeholder engagement. Awareness included information sharing, education campaigns, jointly coordinated communications and public release of reports with all members of the network and with the public [31,38,44,45,52,54,55,61,70]. Engagement of diverse stakeholders before, during, and after the response was reported as essential; these stakeholders included community and local actors, national governments, intergovernmental organizations, and operating partners [45,49,[51][52][53]75]. To facilitate communication across stakeholder groups, Adams et al. [60] and Butler et al. [70] underscored the importance of transparent joint communications, specifically between responders and community leaders, for efficient and effective response. Butler et al.
[70] further reported the success of joint interviews held with stakeholders to support shared understanding of response needs. Diverse partners, including foreign militaries, were reported to support foundational infrastructure that allowed other international partners to stay involved in a response effort when they would not have been able to serve effectively on their own [39,71,79]. Common goals, common interests, and perspective sharing amongst stakeholders were reported to support an effective response to a health event [38,49,53]. Resource mobilization and allocation during an event relies heavily on the starting conditions, as well as on the communication, leadership and management during the process of collaboration. A number of authors pointed to the importance of being able to mobilize both material and human resources. Once again, the involvement of diverse stakeholders and the use of MCMs and management systems such as ICS were credited with the ability to draw upon existing resources. Processes characterized as successful included establishing a supply chain with standardized access, delivery, allocation and flow [34,41,68,80]. Human resource mobilization benefited from online recruitment [45] as well as reflexive human resource protocols to ensure that positions were filled and workers were incentivized and rewarded for participation [31,57]. Outcomes reported. Although the researchers created a code to capture reported outcomes of collaborative efforts, only a small number of authors reported outcomes of their collaborative processes. Outcomes were consistently missing or under-reported in the literature reviewed, likely because One Health outcomes are difficult to characterize.
The few outcomes reported included cost reduction and improved safety [34], decreased mortality [41], a reduction in MRSA cases [31], increased stakeholder buy-in [45], and a report that multisectoral professional development opportunities resulted from the response [47]. Language in collaboration. Language used to describe One Health work continues to be a challenge when working across disciplines. Each contributing discipline, and specifically the authors reporting on these interactions, brings its own nomenclature and vernacular [36]. As also discussed in the limitations of this work, we encountered challenges in how authors reported on which entities were involved in the response to a health event. Organizations, sectors and disciplines were characterized in different ways, making it difficult to find a standard classifying system for the coding. Considerations for the evaluation of One Health. Despite an emphasis on the importance of iterative improvements to collaboration, the implementation of monitoring and evaluation activities was one of the major gaps in the reports of One Health collaboration. The majority of articles reviewed never discussed the evaluation of either the process of collaboration or the resulting outputs or outcomes. This creates a pivotal challenge in understanding how to improve One Health operations. It became clear in the literature review that there was no standard framework for how to evaluate One Health processes [12,36].
Although networks and collaborators such as the Network on the Evaluation of One Health are making important advances, practical evaluation tools are still needed [83]. Some authors from public affairs, such as Emerson and Nabatchi [19], have proposed a framework for evaluating outputs, outcomes, and what they refer to as "adaptations" of collaborative processes [18,19]. Their work is one of the first to propose an integrated framework that captures collaborative results at all levels of the system, from the target population to the participating organizations and the network as a whole. The results of this scoping review are intended to inform the next steps in One Health evaluation. --- A proposed framework for analyzing and reporting on One Health collaborations Using the 12 factors uncovered in this review, the authors have outlined a reporting framework that may help practitioners consider their activities in light of important collaborative starting conditions and process-based factors. The researchers propose this to the One Health community as a tangible next step that may lead to more effective reporting, and potentially evaluation, of One Health efforts in the future. The proposed framework in Table 9 recognizes that each factor will be operationalized within the context of the health event and that flexibility in reporting is imperative. This framework may be useful in providing a common language for how practitioners discuss and report on their One Health efforts. --- Lessons in collaboration from a transdisciplinary research team In the process of conducting this research, the research team encountered many of the same collaborative challenges described in the articles reviewed. The research team had to negotiate and re-negotiate ways of working, integrate differing points of view and assign roles in ways that leveraged expertise without reinforcing bias.
Additionally, the researchers had to establish and meet internal standards while also achieving the outward-facing objective of finishing the analysis and writing of this article. Finally, as with any transdisciplinary work, language was consistently a problem. The inherent challenge of interdisciplinary work lies in how we talk about collaboration and the terminology we use to describe both theory and practice. For the research team, creating clear definitions supported the development of a common language. Differing approaches can be a significant barrier when active collaboration is not structurally supported, valued, and continuously monitored for health and effectiveness. Our efforts reinforce the need for training in skill sets that fall beyond technical, sector-specific training. When grappling with the question of which skills were most important in our collaborative process, we determined that the shared objective of collaboration was the foundation for our ability to integrate the differing expertise that each team member brought to the process. Simply put, we took continual action to achieve our combined goal, including reading new literature, considering new frameworks, learning new things, and asking many questions. The subsequent challenge is, of course, that there are very few formal opportunities to gain access to training around these competencies and mindsets in One Health teams. Most often, as in our case, it is an ad hoc process that rests on the motivations, shared values, and time available within a team to develop in this way. Our experience suggests that, while this approach worked for us, it would not be a time- or resource-effective approach within the context of a health emergency. Thus, One Health approaches need to be evaluated to help practitioners decide when and how to most effectively collaborate for their intended purposes.
--- Conclusions Of the 2,630 article abstracts screened, only 179 met initial inclusion criteria and had their full research articles obtained. Of that subset, only 50 discussed the successes, challenges and lessons learned from operational One Health responses to health events. A majority of the articles focused broadly on the need for collaboration between multiple sectors or disciplines, with little attention to the factors that enable an effective One Health response effort. The low number of included articles reflects a broader challenge for the One Health community, suggesting that One Health researchers must move beyond discussing the inherent need for One Health to actually reporting on the processes, outputs and outcomes of their collaborative efforts. Accordingly, no consistent framework or language was found for reporting on the process, outputs or outcomes of One Health work in the articles reviewed. In the analysis, the research uncovered 12 factors that supported successful health event response. The researchers were able to make important advances by characterizing these factors as important at the start of collaboration or as relevant to the process of collaboration. Using these 12 factors, the researchers propose a One Health reporting framework which, when used to report on One Health collaborations, can support the further refinement and identification of success factors for One Health. These factors may serve as the basis for developing evaluation metrics and the iterative improvement of One Health processes around the globe. This publication is made possible by the generous support of the American people through the United States Agency for International Development (USAID). The contents are the responsibility of the authors and do not necessarily reflect the views of USAID or the United States Government. --- All relevant data are within the manuscript.
--- Code 1: Collaborative analysis: success reported/actually happened during the reported health event. Code 2: Collaborative analysis: challenge reported/reflection on what should happen during future health events. https://doi.org/10.1371/journal.pone.0224660.t004
Code 1: Starting conditions of collaboration. Code 2: Process-based conditions of collaboration. Code 3: Outcomes of collaboration. https://doi.org/10.1371/journal.pone.0224660.t005
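The two coding passes listed above lend themselves to a simple machine-readable tagging structure for literature excerpts. The sketch below is purely illustrative: the `Excerpt` class, the code names, and the example excerpts are our own inventions, not taken from the study's actual codebook or data.

```python
# Illustrative representation of the review's two-pass coding scheme.
# Code names and example excerpts are hypothetical, not from the study data.
from dataclasses import dataclass
from collections import Counter

OUTCOME_CODES = {"success", "challenge"}                      # first coding pass
STAGE_CODES = {"starting_condition", "process", "outcome"}    # second coding pass


@dataclass
class Excerpt:
    """A coded excerpt from a reviewed article."""
    text: str
    outcome_code: str   # "success" or "challenge"
    stage_code: str     # "starting_condition", "process", or "outcome"

    def __post_init__(self):
        # Reject codes that are not part of the defined scheme.
        if self.outcome_code not in OUTCOME_CODES:
            raise ValueError(f"unknown outcome code: {self.outcome_code}")
        if self.stage_code not in STAGE_CODES:
            raise ValueError(f"unknown stage code: {self.stage_code}")


def tally(excerpts):
    """Count excerpts by (outcome, stage) pair, as an analyst might
    when summarizing the coded literature."""
    return Counter((e.outcome_code, e.stage_code) for e in excerpts)


excerpts = [
    Excerpt("Pre-existing MOU enabled rapid joint response", "success", "starting_condition"),
    Excerpt("Weekly multisectoral meetings sustained coordination", "success", "process"),
    Excerpt("No shared protocol existed before the outbreak", "challenge", "starting_condition"),
]
counts = tally(excerpts)
```

Structuring coded excerpts this way would let a team cross-tabulate the two passes directly, which is essentially what the axial coding step described in the Discussion produces.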
Advocates for a One Health approach recognize that global health challenges require multidisciplinary collaborative efforts. While past publications have looked at interdisciplinary competency training for collaboration, few have identified the factors and conditions that enable operational One Health. Through a scoping review of the literature, a multidisciplinary team of researchers analyzed peer-reviewed publications describing multisectoral collaborations around infectious disease-related health events. The review identified 12 factors that support successful One Health collaborations and a coordinated response to health events across three levels: two individual factors (education & training and prior experience & existing relationships), four organizational factors (organizational structures, culture, human resources, and communication), and six network factors (network structures, relationships, leadership, management, available & accessible resources, and political environment). The researchers also identified the stage of collaboration during which these factors were most critical, further organizing them into starting-condition or process-based factors. The research found that publications on multisectoral collaboration for health events do not uniformly report on successes or challenges of collaboration and rarely identify outputs or outcomes of the collaborative process. This paper proposes a common language and framework to enable more uniform reporting, implementation, and evaluation of future One Health collaborations.
INTRODUCTION International reviews of the prevalence of fertility impairment, defined as women unable to achieve a clinically recognised pregnancy after attempting for more than 1 year, found that it ranges from 7% to 39%, depending on the reproductive outcome assessed and the populations included (for example, all women trying to get pregnant for the first time, or married women). [1][2][3] Women's age, sexually transmitted diseases, polycystic ovary syndrome, endometriosis and pelvic inflammatory disease have been considered the proximal causes of female fertility impairment. [4][5][6][7][8] Over recent decades, several countries have shown a decrease in their total fertility rates. 9 However, there is no consensus on whether this results from a decline in biological fertility. Some authors report an increase in the ability to conceive, explained by better social conditions and fewer sexually transmitted infections, 10 11 others a fertility decrease related to women's postponement of childbearing age and adverse lifestyles, 12 13 while yet others have found no differences in trends. 14 Hence, in addition to the pathological factors related to the female reproductive system, socioeconomic circumstances could influence fertility through different pathways. --- Strengths and limitations of this study ▪ This study used a large sample of truly fertile women, allowing an understanding of different social realities. ▪ Not restricting the sample to planned pregnancies or to women attending fertility clinics strengthened its external validity. ▪ Data on fertility impairment were collected retrospectively after birth and misclassification may have occurred. More highly educated women are more likely to postpone childbearing to an age when the probability of conception decreases and the probability of early pregnancy loss increases. 15 16 They are also more likely to plan pregnancy and to be aware of fertility problems, promoting the decision to seek help.
17 However, less-educated women are more likely to be overweight, to smoke and to have riskier sexual behaviour, which may negatively impact female fertility. [18][19][20] Despite the correlation between different dimensions of socioeconomic circumstances, their components may impact fertility through different mechanisms. Income allows easier or faster access to health services, namely infertility clinics, and also to material resources, such as better food or services, that promote better health. 21 Occupation may also be related to infertility because of labour pressure, working schedules and psychosocial stress, or because of exposure to environmental pollutants known to decrease implantation rates and increase spontaneous abortion. 19 Aiming to understand how social circumstances might impact female fertility decline, we designed a cross-sectional analysis of the association between socioeconomic conditions and the occurrence of female fertility impairment in women who had subsequently delivered a live birth. --- METHODS This study was conducted within Generation XXI, a population-based cohort of 8647 babies and 8495 mothers assembled between April 2005 and August 2006 from five public maternity units in the Porto Metropolitan Area in the North of Portugal. All resident women delivering a live birth with more than 23 gestational weeks were eligible. In all, 70% of the eligible mothers were consecutively invited and 8% of those refused to participate. Participants were interviewed face to face between 24 h and 72 h after delivery. Data were collected using a standardised questionnaire on maternal sociodemographics, obstetric and gynaecological history, planning and occurrence of the current pregnancy, prenatal care and lifestyles. 22 A subgroup of women was recruited in pregnancy, when pregnant women went to their first hospital antenatal appointment at two of the included units.
These interviews were conducted during each trimester and so most data were collected at different stages compared to the rest of the cohort. 23 The study was approved by the ethics committee of the University of Porto Medical School/Hospital S. João and all women taking part were required to sign a consent form, designed according to the Declaration of Helsinki. For the current analysis, women aged 18 years or more were eligible. We excluded 105 women who reported a diagnosis of male infertility in their partner and a further 119 who had non-spontaneous conception, including pregnancies resulting from assisted reproductive technology or after ovulation induction medication. Because of different timings of data collection, 280 women recruited during pregnancy and 358 with missing data for fertility status, parity or the exposure variables were also excluded. --- Socioeconomic measures Maternal educational level, occupation and household monthly income were indicators of the socioeconomic circumstances. Educational level was recorded as the number of completed years of education and categorised as ≤6, 7-9, 10-12 and >12. Occupation was recorded using self-reported current job and daily tasks and classified using the National Occupation Classification, additionally classified according to ISCO-2008. Occupations were further grouped into higher-level white-collar workers, lower-level white-collar workers, skilled blue-collar workers and unskilled blue-collar workers. 24 Household monthly income was recorded in €500 categories and was grouped as ≤€1000; €1001-€2000; >€2000; and those who said they did not know or who did not want to reply. --- Fertility status Women were asked whether they had ever tried to conceive for more than a year without success. Those who said they had were classified as having female fertility impairment and were asked how long they had spent attempting to get pregnant. This was categorised as 13-23, 24-35 and >35 months.
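The categorisation of completed years of education described above can be sketched as a simple mapping (an illustrative helper written for this summary, not the study's actual analysis code):

```python
def education_category(years: int) -> str:
    """Map completed years of education to the study's four categories."""
    if years <= 6:
        return "<=6"
    if years <= 9:
        return "7-9"
    if years <= 12:
        return "10-12"
    return ">12"

# e.g. a woman with 11 completed years of education falls in the 10-12 group
print(education_category(11))  # 10-12
```

The same pattern of ordered cut-offs applies to the other categorised exposures (income bands, months attempting pregnancy), which is what makes trend tests across ordered categories possible later in the analysis.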
Women were asked if they had ever sought medical help because they could not get pregnant. Those who had done so reported the medical diagnosis for their delay in conception. Women were asked if they had planned the current pregnancy and those who answered yes were also asked how long it had taken them to become pregnant. This was grouped into under 6, 6-11, 12-24 and over 24 months. --- Covariates Women were asked about their age at the time of birth, their marital status, age at menarche, regularity of their menstrual cycle and age at first sexual intercourse. Smoking 3 months before pregnancy was categorised as: never smokers, ex-smokers, smokers of 1-14 cigarettes/day and smokers of more than 14 cigarettes/day. The self-reported pre-pregnancy weight was used to calculate maternal pre-pregnancy body mass index (BMI). Height was measured whenever possible; otherwise data were copied from their citizen identity card. BMI was calculated as weight (kg) divided by squared height (m²) and grouped as ≤24.9, 25-29.9 and ≥30 kg/m². Data on pregnancy screening tests for infections were retrieved from maternal pregnancy passports. Women were classified as having an infection if they were positive for syphilis, hepatitis B, hepatitis C or HIV. --- Data analysis Women's characteristics were analysed according to fertility status and compared using χ² tests. Multivariate logistic regression models were used to calculate ORs and their respective 95% CIs as measures of the association between each socioeconomic indicator and female fertility impairment, independently of age, pregnancy planning, pre-pregnancy BMI, smoking, age at first sexual intercourse, infection status and age at menarche. The final model includes only the variables that changed the OR by more than 10%. The number of previous pregnancies modified the effect of education on fertility impairment and so we stratified the analysis by the number of previous pregnancies: primigravidae and multigravidae.
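The study's estimates come from multivariate logistic regression, but the basic arithmetic behind an OR and its Wald 95% CI can be illustrated with a crude 2×2 table. This is a hedged sketch using hypothetical counts, not the study's data, and it omits the covariate adjustment the paper performs:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = exp(log(or_) - z * se_log)
    hi = exp(log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: fertility impairment among low- vs high-educated women
or_, lo, hi = odds_ratio_ci(30, 270, 12, 388)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

In the paper itself, such crude associations were adjusted for age, pregnancy planning and behavioural covariates, and a 10% change-in-estimate rule decided which covariates stayed in the final model.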
The possible interaction between age and education was also tested. The analysis was repeated including only women who had not sought medical help for infertility problems. For multigravidae, sensitivity analyses were performed excluding women with no previous successful pregnancy and those who reported a current time-to-pregnancy over 12 months. --- RESULTS The final sample comprised 7472 women, who were similar to the excluded participants in terms of socioeconomic indicators and BMI but more likely to be younger and to be smokers. Among primigravidae, 7.7% had taken more than 1 year to conceive. The prevalence was 9.6% in those with a previous pregnancy. Within fertility-impaired women, 39% of primigravidae and 35% of multigravidae had taken 3 years or more to get pregnant. Although not statistically significant, less educated women were more likely to report more than 3 years of involuntary childlessness. Seven per cent of women had sought medical help because they could not get pregnant and 71% reported a clinical diagnosis. Among impaired women, seeking help was more frequent among the more educated, but no differences were found in fertile women. More detailed data on seeking behaviour and clinical diagnosis may be found in online supplementary table S1. As described in table 2, women trying to conceive for more than 1 year were older, less educated and less likely to be single, but similar in terms of household monthly income, occupation or employment status. Women with fertility impairment were less likely to report having regular menstrual cycles, were more likely to have had the onset of menarche before 12 years or over 13 years of age and were more likely to have planned the current pregnancy. Among multigravidae, they more frequently reported a previous adverse pregnancy outcome. Smoking habits were similar according to fertility status.
Overweight and obesity were found in 41% of women with fertility impairment, but no statistically significant differences were found according to self-perceived health status. Low-educated women were more likely than more educated women to be overweight or obese and to report an early age at sexual initiation. From figure 2 it can be observed that, among primigravidae, a higher education level was associated with a decrease in female fertility impairment, independently of other demographic and behavioural characteristics. Compared to those with six or fewer years of education, having 7-9, 10-12 and more than 12 years of formal education was associated with lower odds of infertility (OR vs ≤6 years: 7-9 years: 0.85 (0.54 to 1.34); 10-12 years: 0.34 (0.21 to 0.54); >12 years: 0.24 (0.14 to 0.40); p trend <0.001). The results were accentuated in the analysis restricted to women who did not seek medical help (10-12 years: 0.18; >12 years: 0.11; p trend <0.001). No significant association was found in the interaction between age and education. Among multigravidae, no statistically significant differences were found. However, the association between education and female fertility impairment seems to assume a U shape (10-12 years: 1.42; >12 years: 1.00; figure 2). Similar results were observed when considering only multigravidae with a previous live birth (10-12: 1.45; >12: 1.19) or within those for whom the current pregnancy was achieved in ≤12 months (10-12: 1.26; >12: 1.76). Income and occupation were associated with infertility within primigravidae, but no association was found for multigravidae. More affluent primigravidae and women in higher-level white-collar occupations had significantly lower odds of infertility. The differences were attenuated and no longer significant after adjustment for years of education, which remained significantly associated with fertility with similar point estimates.
--- DISCUSSION Among primigravidae who had recently delivered a live birth, we found that higher education was associated with a decrease in female fertility impairment independently of age and other behavioural factors, but no other socioeconomic indicator was associated. In women who had already experienced a previous pregnancy, neither educational level nor other social indicators were associated with this condition. The association between education and fertility impairment was previously described in other large population-based European studies among primiparous women, 11 18 and was explained by the effect of education in decreasing the exposure to adverse lifestyles, risky sexual behaviour and excess body weight. On the contrary, an increase in infertility with increasing education was also found in another study in the UK, although the authors argue that it reflects the increasing recognition of the fertility problem among this group of women and does not result from a biological reduction in the ability to conceive. 25 Other studies in Denmark 26 and Scotland 14 found no relation between social class/education and infertility. These different results may reflect the large geographical variations in social inequalities in health, 27 besides differences in the definitions and in the socioeconomic indicators used. Our results were only partly explained by variations in women's behaviours. It is known, and was also previously found for this cohort, that smoking and obesity are more frequent in socially deprived women. 22 Yet, our results may reflect reverse causation because women with impaired fertility might have been advised to adopt healthier lifestyles. However, the proportion of women seeking medical help because of infertility problems was small and the results were even of greater magnitude when excluding these women. The observed attenuation may be underestimated because the study recruitment strategy was based on live-born children.
In fact, socioeconomic circumstances may affect the different stages of conception. Obesity and smoking status could be associated with delays in conception and also with early pregnancy loss. 19 28 29 It is known that an early age at first sexual intercourse may be associated with a higher risk of infertility because of the higher probability of sexually transmitted diseases. 30 An early age at first sexual intercourse is also more frequent among the least affluent women. In our study it did not entirely explain the association between education and fertility impairment, probably because this indicator may not have fully captured sexually transmitted infections. The situation was similar when controlling for infection status, although we do not have data for other infections, such as Chlamydia trachomatis or Neisseria gonorrhoeae, that are known to be associated with infertility. 7 Therefore, we cannot exclude the possibility that years of education were associated with a decreased ability to conceive by means of other biological mechanisms as well as through different social attitudes towards motherhood. Also, maternal age could modify the association between education and fertility impairment, assuming that fertility does not decline steadily with age and that other pathological conditions may contribute to female infertility distinctively across women's reproductive life. 13 We did not find a statistically significant interaction between age and education, but the effect of education seemed to be attenuated with increasing age. This may be a reason for the null effect found among multigravidae. Besides that, if education and the pressure of the labour market lead to the postponement of childbearing, it is also possible that better social conditions may increase the likelihood of having more than one child. 9 We could not know if fertility impairment had occurred in more than one pregnancy.
Joffe and colleagues, in a multicentre analysis, found that couples with a previous history of infertility may be more likely to experience it in a subsequent pregnancy. However, other behaviours might also influence the reproductive history in the sense of not having another child. 31 These unmeasured characteristics hinder a clear understanding of which factors may have more impact on fertility impairment in women with a previous child. Educational level was the only socioeconomic indicator found to be significantly associated with female fertility impairment. Depending on the setting, each indicator embodies distinct dimensions of social status and these may differ in their effects on individuals' health. In young adulthood, education seems to be the factor most strongly associated with health, 21 as was the case in our sample. Although occupation and income were also associated with impaired female fertility, the estimates were no longer significant when controlling for education. However, these social dimensions seem to be highly correlated and the models may be overadjusted. 32 Also, some occupations may be associated with environmental exposures related to fertility impairment, but we were unable to identify these. 33 This study used a large population-based sample of mothers who delivered a live birth. Sampling only truly fertile women reduced bias, as it excluded sterile women who will never be able to conceive and for whom the factors leading to female fertility impairment may be different. 7 It should be borne in mind that male reproductive impairments are among the possible causes of the current trends in fertility. 34 We excluded participants whose partners had a clinically recognised male cause of infertility, but we cannot completely rule out male factors for infertility. Only 63% of women with impaired fertility sought medical help and, among these, a clinical diagnosis for infertility was provided for 71% of women.
Therefore, we acknowledge that we might not have totally dissociated female causes from male or even couple-level causes, along with the effects of shared risk factors. Studies show that the risk of infertility is higher if both partners present obesity 35 and if both are older. 36 For a subgroup of primigravidae for whom we had data from the father at the moment of birth, we conducted the same analysis adjusting for shared overweight/obesity and age over 35 years: the relation between education and infertility did not significantly change. We asked women about delay in conception ever in their life. Because of that, it is possible that apparent fertility impairment was related to previous partners' male infertility. But the proportion of previous pregnancies from the same father did not differ between the groups, so this is unlikely to have influenced our results. This study was not restricted to planned pregnancies or to women attending fertility clinics, strengthening its external validity. Inequalities in seeking help for infertility treatment have been observed elsewhere, but are not universal. 25 37 More educated women tend to look for infertility treatment sooner, achieving earlier pregnancies with a higher probability of success, possibly resulting in a misclassification of their fertility status. Although we found this educational pattern among women with fertility impairment, it was not observed in fertile women. Consequently, misclassification is unlikely to have occurred. On the contrary, less educated impaired women might not have sought treatment and, if they had not achieved a successful pregnancy, they would not have been included in this study. If they had, this would have increased the differences that we found. Because of the different timings of data collection, we excluded a subgroup of women recruited during pregnancy. We found these women to be less educated than those included in the current analysis.
However, assuming that we have correctly estimated the association between education and fertility impairment, the exclusion of this group decreased the power of the current study to detect real differences and did not bias the results. Data on female fertility impairment were collected after birth and misclassification may have occurred. However, if misclassification occurred and if it was differential, we expect less educated women to be more likely to ignore/under-report their fertility status. 38 If so, even greater socioeconomic gaps would be observed. This study shows that social circumstances, particularly education, might be important in understanding patterns of fertility impairment. Their impact seems to depend on the previous reproductive experience. Among first-time pregnant women, infertility decreased with increasing education. This relation was not totally explained by other sociodemographic and lifestyle characteristics that have been previously found to be important to disclose this relation. --- BMJ Open: first published as 10.1136/bmjopen-2013-003985 on 2 January 2014. Provenance and peer review Not commissioned; externally peer reviewed. Data sharing statement Extra data is available by emailing Sofia Correia --- Competing interests None. Patient consent Obtained.
Objectives: This study aimed to assess the association of socioeconomic conditions with female fertility impairment among women who delivered a live birth. Design: Cross-sectional analysis. Setting: Population-based birth cohort (Generation XXI) assembled in 2005/2006 from five public maternity units in Porto Metropolitan Region, Northern Portugal. Participants: 7472 women aged 18 or more with spontaneous conception and no male diagnosis of infertility were recruited and interviewed immediately after birth with structured questionnaires. Exposures of interest: Maternal education, occupation and income were recorded as proxy indicators of social conditions. Outcome: Impaired female fertility, defined as women who had unsuccessfully tried to conceive for over a year. Multivariate logistic regression models were fitted to estimate the association between each socioeconomic indicator and impaired female fertility, stratified by previous pregnancy experience and adjusted for age, pregnancy planning and behavioural characteristics. Results: Among primigravidae, 7.7% (95% CI 6.8% to 8.6%) presented impaired fertility and the prevalence was 9.6% (95% CI 8.7% to 10.6%) in multigravidae. In crude analysis, we found women with impaired fertility to be older, less educated, more likely to have planned the current pregnancy and to be overweight/obese; they had similar levels of income or occupation. In multivariate models, a significant independent association between educational level and female fertility impairment remained among primigravidae (OR (95% CI) vs ≤6 schooling years: 7-9: 0.85 (0.54 to 1.34); 10-12: 0.34 (0.21 to 0.54); >12: 0.24 (0.14 to 0.40), p trend <0.001) but not in multigravidae. This study shows that education might be important in understanding female fertility impairment, particularly among first-time pregnant women.
It also points out that the association is not totally explained by other sociodemographic and lifestyle characteristics that have been previously found to be important to disclose this relation.
Introduction Migration both across and within international borders has become a global phenomenon, impelled by contemporary globalization that is characterized by intensified global connections and flexible labour processes [1,2]. The International Organization for Migration estimates that 214 million people have migrated across national borders, 49 percent of whom are women [3]. Women migrants are more likely to be engaged in poorly waged, casual labour than their male counterparts [1]. In addition to international migrants, there are many thousands who migrate within their countries, usually to larger cities. Individuals and families are pushed to migrate by factors such as war and natural disasters, but a significant number migrate for economic reasons, searching for viable options to support themselves and their families. This is the case for the young women in Southeast Asia who are the focus of this study. While they were raised in the countryside of their respective countries, these women moved to the cities for employment, recognizing that their options to make a living at home were few. Along with thousands of others, these migrant women work as beer promoters in restaurants, karaoke parlors, and beer shops in the large cities of Southeast Asia [4]. In recent years, many young women would have found work in manufacturing, most specifically in the lowest skilled and lowest salaried jobs located at the bottom of global supply chains [5]. Since the global financial crisis in 2009, however, the manufacturing sector has undergone a rapid decline. In Cambodia alone, where 20 percent of young women were employed in the garment industry, tens of thousands of workers have been laid off or have had their wages and working hours reduced [6,7].
Concomitantly, neo-liberal measures undertaken by national governments have resulted in the erosion of public health and social services and the implementation of a range of user fees, including user fees for health services, that together inhibit access by the most vulnerable, primarily women and children. As Desjardins [5,8] noted: "Workers in precarious employment, without economic and social entitlements, and without long-term career prospects or equipped with few skills are more vulnerable to risks of unexpected economic downswings, job and wage losses than other workers." Indeed, as Petchesky succinctly summarized [[8], p. 140]: "Women pay for the cumulative social deficits of global capitalism". It is in this context that the young Southeast Asian women who are the focus of this study have left their rural birthplaces to seek employment in urban settings. Recognizing that their options to make a living at home were few and with the disappearance of many manufacturing jobs, these migrant women work as beer promoters in restaurants, karaoke parlors, and beer shops in the large cities of Southeast Asia. The purpose of this research is to assess the facilitators and barriers to reproductive health care services for this population of women using a mixed methods research design. --- Background Beer promoters are employed by beer companies or local establishments to market particular brands of beer to customers. In 2004, the International Labour Organization found that in Phnom Penh alone, 24 brands of beer were being promoted in this manner and over 4,000 women worked as beer promoters throughout the Cambodian capital [9]. Many companies contract young women based on their appearance, and most compel their workers to wear tight, revealing clothing that many find immodest [10]. Many beer promoters work in whole or in part on commission; therefore, keeping the customers satisfied is essential to maintaining their income.
As a result, beer promoters contend with daily sexual harassment and with frequent demands to drink with their customers. In one survey, 15 percent of respondents reported being asked by their employer to engage in sexual relations with a customer [10]. In some Southeast Asian countries, beer promoters are categorized as "indirect sex workers," and in all four countries included in our study, public discourse generally stigmatizes the work of beer promoters, relegating them to the category of "bad girl" [9,10]. While the prevalence of human immunodeficiency virus (HIV) and acquired immune deficiency syndrome (AIDS) is declining globally, HIV still remains a significant risk for beer promoters throughout Southeast Asia. According to the Joint United Nations Programme on HIV/AIDS, in Thailand the prevalence of HIV amongst indirect female sex workers is 1.7 percent. There are no HIV data on indirect sex workers for the other three countries; however, in Cambodia, despite declines in HIV prevalence generally, the infection remained at over 14 percent amongst female sex workers in 2006, while in Vietnam, where the epidemic is currently occurring mostly through sexual transmission, the prevalence amongst female sex workers is 3.2 percent [11]. In Laos the HIV epidemic is fortunately limited; however, the population is young: 60 percent of the total population is below 25 years of age [12]. With rising rates of sexual activity amongst youth, there are significant risks of HIV spread. Access of beer promoters to sexual and reproductive health care services was deemed an important priority at an October 2009 research meeting of academics, government and non-governmental organization staff, beer industry representatives, and beer promoters from Cambodia, Laos, Thailand, and Vietnam and was subsequently confirmed by focus groups with beer promoters [13].
Lack of time to access services, cost and availability of services, health care provider stigma, and shyness of the beer promoters were all factors impacting access to reproductive health care services for these women. Gaining access to primary health care services, including sexual and reproductive health care, is complex. Wellstood and colleagues [14] have written about how access to primary health care is dependent on individual characteristics of the user such as income, age, gender, and level of need, as well as system characteristics and the policy environment. In their analysis, economic factors, geography, availability of services, and socio-cultural issues are all key elements of access. While Wellstood's research took place within the Canadian context, there is evidence from elsewhere that this conceptual framework applies to wider populations. For example, researchers have found that factors at the individual, relationship, community, and structural levels impact access to HIV prevention for migrant women [15]. Employing this conceptual framework, our research has attempted to identify the individual, system, and policy factors impacting access to sexual and reproductive health care services for migrant beer promoters in Cambodia, Laos, Thailand, and Vietnam. --- Methods --- Research team Our research team emerged from a research agenda meeting held in October 2009, in Phnom Penh, Cambodia. The research team consisted of the Canadian principal investigator and co-investigator in addition to a co-investigator from each of the four research sites and a research co-ordinator. Each Asian co-investigator led a team with a country research manager and two research assistants. The country research manager was responsible for helping organize the research locally and for supervision of the two research assistants. In all countries, at least one of the research assistants had experience working as a beer promoter and all research assistants were female.
This was felt to be important, as the beer promoter research participants were likely to feel more comfortable discussing matters of their sexual and reproductive health with other women who shared the status and challenges stemming from work as beer promoters. Our research co-ordinator was responsible for liaising with all teams on the research process and data collection, and for organizing the training meeting and final knowledge translation meetings. --- Research sites The research was conducted in the four capital cities of the participating countries: Phnom Penh, Cambodia; Vientiane, Laos; Bangkok, Thailand; and Hanoi, Vietnam. --- Research design This research project was a two-phase participatory mixed methods study. The first phase consisted of qualitative focus groups with beer promoters and key informants, while the second phase was a survey of beer promoters and case studies of health care institutions. The case studies are published in a separate paper [16]. This research design was an exploratory sequential design, in which the qualitative first phase was used to explore the issues and inform the development of the quantitative survey tool used in the second phase [17]. Ethics approval for both phases of the research was obtained in Canada, in addition to individual ethics boards in Cambodia, Laos, Thailand and Vietnam. As an international research team, we were cognizant of the diverse ethical issues of researcher identity in this global health research [18]. In accordance with feminist and participatory approaches to research [19,20], we hired local beer promoters whom we trained as research assistants, both to promote skills transfer and to narrow the gap between research participants and data collectors. In the first phase of the research, focus groups were held with beer promoters and separate focus groups were held with key informants.
Focus groups are particularly useful as a data collection strategy to explore research participants' views on a subject [21]. The purpose of these focus groups was to understand the barriers to and facilitators of access for beer promoters to sexual and reproductive health care. The focus group questions addressed the individual, family, institutional, community, and policy factors that may impact access to sexual and reproductive health for beer promoters. A minimum of four focus groups were held, each consisting of about eight to ten beer promoters recruited through snowball sampling. Focus group participants were paid $10 US for their participation. This amount was deemed appropriate by local researchers as a stipend to replace lost wages, and not so large as to be coercive [22]. In addition, each site hosted a minimum of four focus groups of key informants. Key informants included health care providers working in institutions providing sexual and reproductive health care to beer promoters in the government, non-governmental, and private sectors, in addition to policy makers in the government and senior non-governmental organization staff. The recruitment of key informants was purposive, with the goal of having a balance of health care providers and policy makers. If it was not possible to schedule focus groups with important key informants due to either time constraints or their personal preference, individual interviews were conducted instead. All focus groups were held in the local language and were recorded. Where recording was not permitted, detailed notes were taken. The recordings or notes were transcribed by the research assistants and translated into English. All names and identifying information were removed from the transcripts before translation. Translation was done by the local research team members who were present during the focus groups to minimize errors [23].
The local co-investigators provided the English transcripts of the focus groups to the Canadian researchers. The two Canadian investigators analyzed the original transcripts using NVivo version 8, qualitative analysis software, for both content and thematic analysis. In phase two, the draft beer promoter survey was modified based on the findings from the first phase of the study [21]. All co-investigators reviewed and approved the final draft. The English version was translated into Khmer, Laotian, Thai and Vietnamese and then back-translated into English. All back translations were reviewed by the Principal Investigator and revisions were made by the country research teams. The goal was for each of the four country research teams to survey 100 beer promoters. The sample size was determined by an estimated population of several thousand beer promoters within each capital [13], and a desired precision of at least 10 percent [24]. The survey questions focused on demographic information about the beer promoter, factors affecting their choice of health care institution, their health care experiences and their behaviors. Surveys were conducted at popular health care institutions frequented by beer promoters. Beer promoters were paid $5 US for their participation in the survey. In addition to the survey, each country research team conducted the case study of the health care institutions used for the survey. The case studies focused on the services offered at the institution, provider training and skills, and facilities at the institution. The country research teams entered all the data from the surveys into an Excel spreadsheet. Template Excel coding tables were provided to each country co-investigator by the principal investigator. The Canadian PI was responsible for the final statistical analysis of the data. Using SPSS, data were analyzed for descriptive similarities and differences between countries and for trends in findings. 
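The target of roughly 100 respondents per site is consistent with the standard sample-size formula for estimating a proportion with a given precision. The sketch below is purely illustrative, not the study's actual calculation: the conservative proportion p = 0.5, the 95% confidence z-value of 1.96, and the population figure of 3,000 (standing in for "several thousand") are assumptions.

```python
import math

def sample_size_for_proportion(precision, population=None, p=0.5, z=1.96):
    """Sample size needed to estimate a proportion p within +/- precision
    at ~95% confidence (z = 1.96), with an optional finite population
    correction when the population size is known."""
    n0 = (z ** 2) * p * (1 - p) / precision ** 2
    if population is not None:
        # Finite population correction shrinks n when the population is small.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# With 10% precision, roughly 100 respondents per site are needed,
# matching the study's target of 100 beer promoters per country.
print(sample_size_for_proportion(0.10))                   # -> 97
print(sample_size_for_proportion(0.10, population=3000))  # -> 94
```

Because the assumed population of several thousand is large relative to the sample, the finite population correction changes the result only slightly; either way, about 100 respondents per capital yields the desired precision.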
The completed case study surveys were returned to the Canadian PI and a summary of the results was prepared for comparisons between countries. --- Consent Written consent was obtained from all research participants for anonymous use of their quotations in publications. --- Results The phase one results were collected from July 2010 to November 2010. The number of focus groups with beer promoters and focus groups or interviews with key informants for each country are listed in Table 1. The phase two survey of beer promoters was conducted during February to April 2011 in each of the four research locations. The types of institutions used to conduct the survey and the number of beer promoters surveyed in each institution are listed below in Table 2. The total number of beer promoters surveyed was 390. The results will be presented in the following manner: the demographic description of the survey data will be followed by a summary of the health behaviors of the beer promoters. Thereafter we include a discussion of beer promoters' experiences with, and their preferences for, health care institutions. The quantitative results will be illustrated by qualitative examples where the results concur, and with contrasting examples where they differ. While the qualitative focus group data were collected before the surveys were conducted, we have chosen to present and discuss the results together in order to avoid redundancy and for purposes of comparison of both forms of data. Differences between countries will also be illustrated, though the numbers were not large enough to demonstrate statistical significance. --- Summary of demographics of survey population The demographic details of the beer promoters who participated in the survey are illustrated in supplementary file Table 3. Of the women surveyed, well over half were under 25. The mean age of the participants was 24.2 years. There were some differences between the demographic variables in the four countries. 
The Cambodian and Vietnamese participants were older than the Thai and Lao participants. Similarly, the majority of Lao and Thai participants had never married and were childless, while only one third of the Cambodian participants were without children. Forty percent of the Vietnamese participants had one or more children. Not surprisingly, as they were younger on average, the Lao beer promoters had migrated most recently and had worked as beer promoters for the shortest time, while the Cambodians had migrated earlier and worked the longest as beer promoters. The majority of the women surveyed from all countries worked as full-time beer promoters. The most common locations worked varied by country: karaoke was the most frequently cited location for the Vietnamese participants, while most Thai and Cambodian beer promoters worked in restaurants. The beer shop was the most common location for the Lao beer promoters who were surveyed. Almost three quarters of the beer promoters were financially supporting at least one family member, while over 20% were supporting five or more members. This was less of a trend for the Thai beer promoters: over half of these women did not support any family members. With the exception of Thailand, most beer promoters lacked health insurance. Only a quarter had some insurance in Cambodia, while over 90% lacked health insurance in both Laos and Vietnam. In Thailand, where 97% of respondents had insurance, employers were primarily responsible for its provision. The beer promoters surveyed accessed reproductive health care in a variety of locations, as noted in supplementary file Table 3. The most common reason for accessing service was vaginal discharge, followed by menstrual problems, the need for contraception, testing for reproductive health problems, abortion services and prenatal care. --- Summary of health behaviors The health behaviors of the beer promoters in the survey are listed in supplementary file Table 4. 
The women commonly reported having sex outside of their primary relationship. While only 18.2% admitted to frequent sexual relations outside of their primary relationship, a further 34.1% stated they had extramarital sex at least once per year. These sexual activities were common amongst all four nationalities of beer promoters. The beer promoters were questioned about sex work on three different items: "sex with clients", "receiving money for sex", and "sex with men to supplement their income". Over one third denied any sex work on these three items. The frequency of women who admitted that they "sometimes" or "often" engaged in sex work was generally higher: responses of "sometimes" or "often" on the three items addressing this issue were 47.7%, 52.7% and 42.5% respectively. Sex work was most common in beer promoters from Laos and Vietnam, and least common amongst the Thai beer promoters. More than half the survey participants from Cambodia, Laos and Vietnam, and 11% in Thailand, had had an abortion at some point in their lives. The beer promoters surveyed were questioned about drinking beer with clients and getting drunk at work. Drinking beer with clients was common amongst all beer promoters except those from Thailand. Women reported "sometimes" or "often" drinking beer with clients: 70% in Cambodia, 91% in Laos, 77% in Vietnam and only 17% in Thailand. Similarly, reports of getting drunk at work "sometimes" or "often" were highest in Laos and Cambodia, followed by Vietnam and Thailand. About one third of the beer promoters reported that they "sometimes" or "often" knew or suspected that they . . . Injection drug use was uncommon in all countries: only 7.2% of women admitted to "sometimes" or "often" using injection drugs, and its use was most common in the Cambodian cohort. 
When questioned whether their work as beer promoters provided enough money to live on, about one fifth stated that it "never" did, though again the responses varied by country from Cambodia and Thailand to Vietnam and Laos. --- Factors affecting choice of health care institutions Table 5 documents the beer promoters' responses regarding which issues are important to them in their choice of health care institution for sexual and reproductive health care. --- Barriers and facilitators to accessing sexual and reproductive health services The potential barriers or facilitators to accessing sexual and reproductive health care services for beer promoters in these four Southeast Asian capital cities can be categorized under three major conceptual structures: institutional factors, work factors, and personal factors. --- Institutional factors There were several key factors that were common barriers preventing access for beer promoters to the institutions providing sexual and reproductive health care services in these four Asian capitals. These include financial barriers, location/transportation issues, the environment of the institution, and service factors. --- Financial barriers and health care insurance Cost of the health care services was an important issue for these beer promoters, as their financial resources were often stretched very thin. In the survey, 71.8 percent agreed or strongly agreed that cost was a very important factor in choice of health care institution. One third to one half could not afford to go to the health care institution that they preferred. The Cambodian and Vietnamese beer promoters quoted below reflect on the challenges they had balancing their own health with the financial needs of their families. Purchasing health care services meant giving up funds for other important requirements such as food, rent and family support. As a result, they would often avoid costly health care services. 
If I go to [NGO name], I would like to use the Norplant and they wrote on the board that cost $50. And I do not have ability to pay for that because it is very expensive. For the beer seller like me, I have only the money to pay for the house rent, eating, send to my children and spend on other things. I do not have money for that. You know that we come here to work to earn money and send money home, if we go to doctors, we don't have money for the time we go and we also have to spend money for doctors. . . Only when I have serious disease, I go to doctors because going to doctors takes much money. In addition to the regular fees of some of the health care institutions, this beer promoter from Laos noted that to get good service, gifts sometimes need to be provided to the doctors and nurses. As a poor beer promoter from the countryside, she was both unaware of this practice and unable to afford it. Big city people know what to do when they go to the hospital. They always give doctors/nurses some gifts and they are looked after very well. People from countryside don't quite know the system and they also don't have extra money so they don't know what to do. The beer promoter from Laos quoted above was somewhat unique in her experience among her cohorts from her country. Only 4 percent of Laotian beer promoters agreed or strongly agreed that money or gifts improve the service of health care providers, though overall about one third of the beer promoters did agree with this statement. Financial issues underpin the "hierarchy of resort" [25], the progression of strategies used to resolve health problems that was most commonly shared by respondents. For most, the first step was the local pharmacy where they can obtain inexpensive medication without a physician's prescription. 
If the medication did not adequately address the problem, the beer promoter resorted to the next, generally more expensive and more technologically sophisticated service, continuing until her needs had been satisfied. As one Laotian informant shared: First, I go to the drugstore and buy medicine without prescription followed by using clinic service and finally going to the hospital if the health problem is still there. As noted earlier, with the exception of those from Thailand, only about one third of the beer promoters had access to health insurance. For a few Cambodian and Vietnamese women, the employer provided this benefit: For my company, they have insurance. If we are sick, but not serious condition, we can go to the company clinic. If we are serious, we can go outside and have a receipt for them. If you work for long time at a permanent job like us, you will have contract and you will have health insurance. The majority of beer promoters in the study, however, had no health insurance either because it was not provided by their employer or because they were part-time employees and it was not available to them. Government insurance was accessible for some Thai and Vietnamese beer promoters, but as migrants to the city it was not easy for them to use this insurance. Public health insurance schemes in those two countries are linked to service provision in residents' natal communities, limiting their utility under conditions of migration. Interviewer: Do you know about voluntary health insurance? Patients with voluntary health insurance only pay 5% to 10% of their hospital fees. Respondent: Of course we know, but it requires a permanent address here while all of us are migrants. We only can buy health insurance at our hometown with a permanent address and we have to register at one hospital near our home to be examined. 
For the beer promoters with health insurance, sometimes the health care institutions that were available to them were not desirable: The social security hospital is not providing good services, and doctors don't care about us, long waiting. Actually we selected the clinic that is close to our home, convenient and clean. The Vietnamese key informant quoted below noted that while the government had policies to provide health care insurance, many beer promoters lacked labour contracts and thus were not protected by these laws and policies. Most girls doing massage services, or working as beer promoters do not have labor contracts so they do not have social insurance or health insurance. Therefore, they do not have the opportunity to be examined in health centers or hospitals although our nation has laws to require these units to send their staff to health centers and have periodic health examinations. Besides, our nation also has laws to require heads of companies or restaurants, etc. to buy health insurance and social insurance for their staff but they usually don't buy so if beer promoters want to have their health checked, they must pay money themselves. Thus, the cost of health care prevents beer promoters from accessing reproductive health care services. While health insurance would improve access, the complexities of government policies and lack of enforcement makes it difficult for poor beer promoters from the countryside to access health care insurance even if it exists. --- Location/transportation factors The location of the health care institution is often a key factor to whether or not beer promoters decide to access the services. In the survey 72.3 percent stated that location was important in their choice of health care institution, and the majority of beer promoters from all four countries held this opinion . 
In addition, almost half of all the beer promoters, and 74 percent of the Laotian beer promoters in particular, agreed or strongly agreed that the cost of transportation was a key factor in accessing health care. Many beer promoters chose to seek care in institutions that were close to home, as they were hesitant to spend money on transportation. This Thai beer promoter reflects on her priorities in choosing where to obtain health care: Similar to the others, I usually go to a clinic because it is close to home, convenient and has quick services. Location and lack of transportation were an issue for this beer promoter in Laos. She felt she was better off in the countryside. I don't have a vehicle so it is very difficult for me to go to the health care provider. I am also shy to discuss my health problems with other people. I don't have permanent address or even an identity card. It was not this difficult in my home town. While convenience of location was a factor for beer promoters from all countries, some beer promoters were fortunate enough to be serviced by clinics that provided free or subsidized transportation. This transportation facilitated care and health education for some women: That hospital always comes and collects all of us to receive the treatment and to understand about what we can do to take care of our health. --- Environmental factors In addition to financial barriers and location/transportation issues, the environment of the clinic had an impact on accessibility to reproductive health care services. In particular, the beer promoters needed quick appointments, and often avoided health care institutions with long waits. In the survey, 44.7 percent of the beer promoters agreed that they would go elsewhere if there was a long wait. This varied from one third of the group in Laos to over half of the Thai beer promoters. 
When questioned again whether they would be willing to wait more than two hours for an appointment, 43.1 percent stated they would not mind. Waiting was a common experience for beer promoters. Generally, the private health clinics had shorter waiting times than the government hospitals. The government hospital is very crowded and it has a long waiting time; almost 50 people before me. Another key factor for choice of institution was the perceived cleanliness: eighty-four percent of the beer promoters agreed or strongly agreed that cleanliness affected their choice. Some beer promoters preferred the private clinics as they were thought to be cleaner. This Vietnamese key informant summarized the strengths of the private clinics for beer promoters: Private clinics are open in order to earn money from patients so they usually meet all demands of patients such as fast, convenient, clean, friendly doctors and nurses. Confidentiality was considered "very important" by 84.4 percent of surveyed beer promoters; only 5.4% disagreed or strongly disagreed. This Vietnamese beer promoter felt that the shorter waiting time and increased confidentiality of the private clinic was worth paying for: In private clinics we have to pay more money, but we don't have to wait for a long time and information is kept in secret, nobody knows who we are. Not all agreed about the benefits of private clinics. This beer promoter from Laos preferred the quality of service and anonymity of the hospital: I am not confident to go to clinics. It seems okay during the medication, but the symptoms come back later. I also feel shy to tell doctor at clinic my problem person to person. Without witness, who will be responsible if something goes wrong? I am also afraid that I will meet the doctor from clinic in public and of course I will be recognized with my health problem. 
At hospitals, there are many patients and I am quite positive that the doctor will not remember my face as he/she sees many sick people every day. In summary, the environment of the health care institution affects access of beer promoters to reproductive health care services. In particular, waiting times, cleanliness and confidentiality are important factors to these women. --- Service factors There are several factors related to the service provided by the health care institution that have an important impact on access to sexual and reproductive health care services for these women. One of the most significant was the friendliness and attitudes of the health care providers. More than 85 percent of the beer promoters felt that friendliness of the providers was an important issue, and few beer promoters disagreed. Over 85 percent chose the health care institution based on the caring attitudes of the providers. Health care provider skills were also perceived as important by over 84 percent. Health care provider explanations were deemed important by close to 85 percent. Female health care providers were preferred for reproductive health exams by over 70 percent, though this varied by country, with Cambodian beer promoters having the strongest preference and Thai the least. Conversely, Cambodian beer promoters were more likely than beer promoters from other countries to agree that male providers were more skilled than female providers. A minority of beer promoters agreed that they had been treated badly by health care providers in the past, though this varied from six percent in Laos to 40 percent in Thailand. It was not unusual to be shouted at, or treated with disrespect by health care providers. Some of the beer promoters perceived this was worse if they were dressed in their working clothes, as the health care providers stigmatized them for their work and their lower social status. 
I have to say that we are beer promoters, we work in this environment, but we are also people. We also know how to hate, love, be angry, etc. And we get diseases or not, we are still people. Some doctors, they say and behave to us very softly and gently so that even when I don't want to be checked, I let them examine me. But there are also some doctors, truthfully, we can't smell them. They shout at us "examine or go home" or "wait out there". In this situation, even if I have disease, I will not seek examination or treatment. The doctors at [Name of Institution] said bad words. The doctor said that when we have sexual relationship, why don't we call them to see? This last example demonstrates the deep stigma some providers have against women who may be involved in sex work. This stigma was not unique to Cambodian health care providers. The Vietnamese key informant quoted below described how hospital training sessions are organized to improve communication and staff attitudes, but are not always effective. I know some hospitals organize training sessions for their staff about communication ways to patients and their attitudes, behaviors to patients and patients' relatives but most of these training sessions are not wellorganized and staff usually come there to chat, they don't pay much attention to the topics of the session. This leads to bad attitudes of some doctors to patients in hospitals. Many of them shout at patients and disregard patients. Besides, it is also noted that a small number of doctors receive extra money from patients that if patients don't give them money, they will not behave and examine them well. These things make people have bad opinions about doctors and so make beer promoters -sensitive people -be afraid of going to doctors. Another very important service that facilitates access to health care services for beer promoters is evening or weekend clinic hours. 
The beer promoters are often not able to go to the clinics during working hours, and many work late and are not willing to get up early to wait in a clinic to be seen. This Vietnamese beer promoter summarized her and her colleagues' desires for health care institutions: Of course, we always hope to have a centre with cheap price, convenience, the open time is from morning till midnight because sometimes it is difficult for us to arrange time for going to doctors. Besides, doctors must have good expertise and warm attitude to welcome us. In addition to improving staff attitudes and after-hours availability, several beer promoters commented on the availability of supplies and medication as an issue. The hospital which is near my house does not have enough materials, but the far one demands more money. I just want them to have enough materials when we go. I don't want to move to another hospital. Finally, Cambodian and Laotian beer promoters noted that some NGO health care institutions provide incentives to encourage beer promoters to avail themselves of services. For example, one Cambodian NGO gives prizes to women who attend their clinic. A Laotian NGO provides beauty services and games for the beer promoters to make them feel more comfortable. These places encourage beer promoter clientele and thus facilitate their attendance for their health care needs as well. Thus staff attitudes, clinic opening times, and medication availability can act as barriers or facilitators for beer promoters to access sexual and reproductive health care services. Another potential facilitator of service access is incentives to attract beer promoters. In addition to these institutional factors, however, the challenge of taking time off from work impacted beer promoters' access to health care services. --- Work factors One of the most common barriers to accessing reproductive health care services for beer promoters reported by both beer promoters and key informants was lack of time. 
Over one third agreed that they could not get time off of work to access health care. This varied from 25 percent for the Laotian beer promoters to 45 percent of those from Thailand and Vietnam. Many found that they were discouraged by their employers from taking time away from work in order to seek health care. For example, this Cambodian beer promoter pointed out that the owner would cut their pay, and may not believe them if they requested time off to see a health care provider: When we want to have our health checked up, the company owner does not have time for us. If we want to go out, they will cut off our salary from 5 or 3 dollars. Sometimes we want to go to get the health care services, we ask for permission. But they think that we tell lies and we want to go somewhere else. A Thai key informant noted that the working hours of beer promoters also prevent them from seeking health care: They don't have enough time: they are working in the night time and wake up very late so they can't go to the hospital. Not all beer promoters agreed that time was an issue for them. One Laotian beer promoter stated: Time is not a problem. We are allowed to go to health care services anytime we want to. A key informant who also worked as a health care provider in Laos noted that this was not the case, however, for the beer promoters she treated: The beer promoters have a very limited time to visit my clinic for treatment in sexual and reproductive health as they spend much of their time working. In Vietnam, one key informant summarized: Beer promoters and sex workers are managed by their employers about time, if they go to health centers or hospitals, it will take them lots of time and this doesn't please their employers. Thus, in general, time was an important work-related factor that limited beer promoters from all four countries from accessing sexual and reproductive health care services. 
There was not universal agreement about this, but for many beer promoters, taking time off work for their health was not encouraged by their employers. Many also did not want to lose income by taking time off work to seek health care services. --- Personal factors There were several personal factors experienced by beer promoters from across the region that affected their willingness to seek sexual and reproductive health care services. These can be characterized as shyness and fears, lack of knowledge, and support from family and friends. --- Shyness and fears Women from all four countries stated that sometimes they avoided getting reproductive health care services because they were "shy". Of the beer promoters surveyed, close to a third agreed that they avoided care because of shyness, though in Thailand the shy women were close to half the cohort at 46 percent. Over 22 percent of the beer promoters agreed they avoided being examined because they were afraid. In the focus groups, "shyness" appeared to have a variety of meanings, including a fear of being examined, a fear of others seeing the beer promoter in the clinic, and a fear of the equipment itself. Below are several examples of how the women describe their shyness and fears: But some of us are shy and do not want to let the doctors to check us, even me for the first time. They are afraid to let the doctors to see their vagina. Most women/girls are too shy to go to the hospital. They buy medicine from the pharmacy to take hoping to stop their pain without consulting a doctor. Interviewer: Why do you think that women don't go to see the doctor? Respondent 1: Shy. Respondent 2: I was afraid also. If some things happen, I cannot tolerate. Respondent 3: I was afraid of the medical equipment. 
While a few of the key informants who worked as health care providers stated that the beer promoters who came to see them were comfortable sharing their health problems, this Vietnamese key informant noted that: They [beer promoters] usually hesitate going to big hospitals as they are afraid of seeing acquaintances there and afraid other people may recognize their identity, so they often go to private clinics or health centres which are far from their working place and they usually come there at the time they think there are a few customers. Thus shyness and fears of examination and of being recognized by others are common themes amongst the beer promoters in these four countries. Another significant personal factor affecting their access to reproductive health care is lack of knowledge. --- Lack of knowledge Beer promoters lack knowledge of both their sexual and reproductive health needs and the services available to them. This is sometimes a barrier in accessing health care services. Key informants are particularly aware of this knowledge gap. Most of the women come to this clinic because of sexually transmitted diseases; I think they should learn how to practice safe sex behaviors. They do not go out with their clients all the time, but they have their boyfriends and are not using condoms. Also they lack the knowledge about family planning. It is easy for them to get pregnant because they are still young and sexually active. I think barriers to sexual and reproductive health care services of beer promoters are their limited knowledge of sexual and reproductive health care. They also don't know much about these services that hospitals provide. Due to limited knowledge, they are easily influenced by other beer promoters, and they can follow advice of their colleagues. They also may not know how to prevent diseases, especially sexually transmitted diseases. 
--- Support of family and friends Recommendations of friends were an important factor in choice of health care institution for close to 60 percent of the surveyed beer promoters. However, in the focus groups there was some discussion amongst the beer promoters about the cooperation of their partners and the support they received from their families when they were ill. Many did not inform their families about symptoms or seeking care unless they were very unwell. Families generally encouraged them to seek health care. Partners may encourage the woman to seek care while avoiding treatment themselves, unless the partner has many symptoms. The Cambodian beer promoter quoted below commented on the challenges beer promoters experience due to the lack of cooperation from partners. One problem, sometimes her family pushes her to see doctor but the partner did not go with her. When we have vaginal discharge, if we were treated and partner did not go for treatment, we can infect each other. So the problem is the partner did not want to see doctor. He said that he is lazy to see doctor, only ask for medicines. On the other hand, friends can facilitate access to health care services by making recommendations of where to seek care, as the Thai beer promoter described: Interviewer: Do your friends influence you to choose the hospital? Respondent: Yes, they do. If they told this hospital is good service, we will go. Thus friends and family can act as a support to the beer promoter, recommending health care services and encouraging access to services. Sometimes family acts as a barrier to health for beer promoters; in particular, uncooperative male partners who choose not to be treated can negatively impact the beer promoters' health status. --- Limitations of the study There are two major limitations to this research. The first limitation results from our choice of conducting an international participatory research project. 
Training beer promoters as research assistants, though extremely important, did not change the fact that they were novice data collectors. As a result, some of the qualitative data lacked depth. Opportunities for probing focus group members to elicit more detail were missed. While we did conduct a training session with the research team prior to data collection, it was limited in time and by language barriers. The Canadian researchers did not speak any of the Asian languages of the study, nor did any of the research assistants speak English fluently, so direct observation of the research assistants' skills in conducting focus groups and surveys was not possible. In future research, we would budget for a longer training period during which the investigators could model the data collection techniques for the research assistants, with accompanying translation. Despite the limitations of data collection, we feel that the choice of beer promoters as research assistants was useful for two key reasons. Firstly, the research assistants were able to make the beer promoters feel more at ease and willing to share more personal details than they would have shared with academic researchers alone. Secondly, this participatory research allowed the beer promoter research assistants to develop skills that would improve their chances of getting a better job in the future. Another major limitation of the research was the sampling strategy of the survey. In all four countries, convenience sampling was used, which can result in a selection bias. While randomization would have been preferable in order to obtain findings generalizable to the entire population of beer promoters in each site, this was impossible to do. Randomization requires a complete census of the beer promoter population in each site, and this was beyond the resources of this study.
Indeed, a census would be a challenge to undertake given the diversity of workplaces, the fluidity of this employment, and the marginalized status of these migrant women. --- Discussion Rural-to-urban migrant women workers in Southeast Asia are among the waves of migrant workers in the region who move to larger cities or across borders in response to shifts in the globalized economy [26,27]. In this new urban environment, young women are compelled to negotiate amongst competing demands and desires from family and their peer network that vie for their attention and financial resources [28]. It is within this context that we situate the experiences and responses of the migrant beer promoters who engaged in our study. These data provide some interesting observations and comparisons about beer promoters working in these four Southeast Asian capitals. While generalizations about differences in beer promoters between countries cannot be made, overall trends should be noted. For example, the Thai beer promoters tended to be young and single and not supporting any other family members. Most of these women had employee health insurance. Thailand is the most well off of the four participating countries; thus, these women were mainly supporting themselves alone, while many of the women from Cambodia, Laos, and Vietnam were supporting other family members. The Thai beer promoters were less likely to participate in sex work or drink with their clients. This may be because their beer company and restaurant employers did not permit this, or because they were working as beer promoters to supplement their income or support themselves as students and did not have to resort to sex work like many of the beer promoters from the other three countries. The health behavior data confirm that beer promoters are at risk of unwanted pregnancies and sexually transmitted infections.
Over one third partake in sexual relationships outside of a primary relationship, and close to half of the women participate in sex work at least yearly. More than half of the women have had an abortion. With the exception of the Thai sample, over half have had a suspected or known sexually transmitted infection in the past. With the exception of the Thai cohort once again, most are drinking at work and a sizable proportion are getting drunk as a result, putting them at risk of making unwise decisions about their sexual health. It should be noted that drinking at work is sometimes required for those beer promoters who work on commission, and is thus a risk of the job itself. As noted earlier, differences between countries may reflect different expectations of employers. The high prevalence of these behaviors indicates that beer promoters require access to quality sexual and reproductive health care services, as they are at risk of unwanted pregnancies and sexually transmitted infections including HIV/AIDS. There was significant dissatisfaction with the current provision of health care services for beer promoters. In particular, cost of services, location, and both environmental and service factors affect access to health care services for beer promoters. External factors affecting access to health care services include working hours: despite regulations in some countries, beer promoters often cannot get time off from work to get their health care needs met. Finally, personal factors such as shyness and fears, and lack of support from partners, may prevent them from seeking care. Such barriers to care are not unique to beer promoters in these four Southeast Asian capitals. Unmarried female migrant workers in China often do not access reproductive health care services because of social, psychological and economic barriers [29]. Researchers have called for tailored interventions and more research on this large population of women [30].
Sex workers in Asia also face challenges accessing health care services. While population-based interventions to provide reproductive health care services are available, it is estimated that HIV prevention services in Asia are accessed by less than half of sex workers [31]. Like the beer promoters in this study, sex workers in Thailand report that cost, location, hours of operation, and friendly service are important issues to them, though perceived effectiveness of care was the strongest determinant of where they would access health care [32]. Similarly, Nepalese sex workers also have challenges accessing health care services due to inappropriate clinic opening hours, attitudes of the service providers, lack of confidentiality, fear of public exposure, and higher fees for the services [33]. Sex workers in Chennai, India experience personal, family and health care system barriers when accessing free HIV prevention services: stigma, discrimination and negative interactions with health care workers are particularly important issues affecting access to care for these women [34]. There are similar challenges with access to reproductive health care for sex workers in Afghanistan [35] and in Singapore, especially for those working free-lance outside of brothels [36]. Addressing stigma towards sex workers within society and amongst health care providers has been theorized as a necessary step to improve access to health care services for this vulnerable and diverse population of women [37], and would have a positive impact on beer promoters as a significant number of them also engage in sex work. Table 6 lists potential barriers and facilitators to improve access to sexual and reproductive health care services for these women. This table is intended to generate discussion to develop more specific recommendations for each community. One of the most important considerations at a policy level is to improve access to health care insurance.
Either government or employer health care insurance, available from their community of residence rather than their community of birth, would ensure that women have access to health care services when they need them. Other solutions require a more location-specific approach, such as building clinics close to the workplaces of beer promoters or providing mobile clinics that beer promoters can access easily. Improving access to health care institutions will require recognition by the management of these institutions of the importance of serving this population of women, and the elimination of barriers preventing them from accessing service. Reducing waiting times for working women, ensuring space within the institution for confidential discussions, teaching health care providers the importance of confidentiality, keeping the clinic area clean, and particularly improving health care providers' attitudes towards beer promoters (reducing the stigma they experience when accessing health care) are universal issues to be addressed by health care institutions in all four countries. Addressing these issues would improve health care access for all users, not just beer promoters.
Table 6. Barriers to accessing sexual and reproductive health care services and potential facilitators.
--- Location Provide free transportation to health care institutions. Provide mobile clinics that visit locations where beer promoters work.
--- Waiting Times Try to reduce waiting times by hiring more staff for busy periods and giving working women quicker access. Provide evening and weekend clinic hours.
--- Confidentiality Ensure quiet spaces for confidential conversations and train staff about the importance of confidentiality.
--- Cleanliness Stress cleanliness in institutions and have regular inspections by administration and/or government health inspectors.
--- Staff Attitudes In-service training focusing on stigma against women who work in beer promotion and sex work. Surveys of clients to seek feedback on their experiences in the institutions and active incorporation of ideas to improve service.
--- Lack of Medication and Materials Appropriately fund sexual and reproductive health institutions to have needed medications and materials. Facilitate referrals to higher levels of care as needed.
--- Institution Does Not Attract Beer Promoter Clients Provide incentives to beer promoters such as prizes, food, social activities, etc.
--- WORK FACTORS: Lack of Time Provide evening and weekend clinic hours. Create and enforce government policies that require employers to provide access to health care services for beer promoters.
--- PERSONAL FACTORS: Shyness and Fears Education about the importance of sexual and reproductive health care directed at beer promoters in the workplace and media. Orientation of beer promoters to health care facilities and equipment by friendly staff. Women providers as needed for reproductive examinations.
--- Lack of Knowledge Education and orientation as discussed above.
--- Support of Family and Friends Education of male partners about the importance of treatment. Provide incentives to beer promoters to bring friends for assessment and treatment.
In addition to policy changes and institutional improvements, there is a need to enforce the human rights of these workers to have time off from work to access health care services as needed. Finally, public health education campaigns targeting migrant workers and their sexual partners may help to improve understanding of the need for, and availability of, health care services. --- Conclusions Clearly, improving access to health care services for this population of women will require a multiple-intervention approach, tackling factors both within and outside the health care system, at both institutional and personal levels. While the situation for beer promoters in each of these four Southeast Asian capitals has unique features, all of these women experience barriers to accessing sexual and reproductive health care services.
It is our hope that local governments, employers and health care institutions will adopt some of these solutions to improve access and, ultimately, the sexual and reproductive health status of these rural-to-urban migrant workers. --- Abbreviations AIDS: Acquired Immune Deficiency Syndrome; HIV: Human Immunodeficiency Virus; ILO: International Labour Organization; NGO: Non-governmental Organization; UNAIDS: Joint United Nations Programme on HIV/AIDS. --- Competing interests The authors declare that they have no competing interests. ---
Introduction Cities are currently dealing with several challenges. The urban population is growing as a result of an increasing world population and urbanization. Urban areas are often unhealthy places to live, characterized by heavy traffic, pollution, noise, social isolation, poor housing conditions, stress, and urban heat, resulting in the lower life expectancy of urban dwellers [1]. Urban green can play a role in tackling several of these challenges [2]. For example, urban parks can reduce heat, absorb noise, reduce air pollution, and store rainwater [3]. Additionally, living in an environment with more greenery positively influences well-being, as green helps people to relax and restore from stress [4,5], offers opportunities for social interaction [6], and promotes physical activity [7]. Moreover, it was found that spending more time in a park improves life satisfaction [8], and people who visited green spaces with a higher diversity of plants are happier [9]. Likewise, urban green spaces with greater biodiversity are likely to be associated with more positive emotional responses [10]. Thus, it is generally acknowledged that urban green is important for public health. In order to be able to design urban green spaces that are beneficial and attractive for different groups of urban residents, it is important to gain insights into the preferences of different target groups regarding urban park attributes. Based on these insights, guidelines can be derived for urban park designers and managers on what elements to include in a park. Therefore, several studies have investigated urban park preferences. Research has shown that several spatial characteristics of parks influence people's park preferences and experiences . First of all, both type and density of vegetation play a role in people's preferences [14][15][16][17][18]. 
Size and accessibility [17,19] of green are also relevant, and the presence of facilities such as playground equipment and benches has also been found to be preferred [12,20]. Cleanliness and maintenance are also likely to play a role in the preferences of park users [17,21,22], although not all studies have found significant effects [18]. In a Chinese study [17], quietness and beautiful views were also important reasons for using green spaces. Finally, the presence of other people [11], presence of water [16,23], and noise [24] are important for park users. Park preferences have been found to vary between different groups of people. Personal characteristics such as age [14,18,25], gender [18,26], household composition [18], education level [15,18,[27][28][29], and urban vs. rural place of residence [27] have been found to affect park preferences. Moreover, several studies have focused on ethnicity as a determinant of park preferences. While people from various ethnic backgrounds have all been found to prefer natural environments over built environments [22], several studies have indicated that people from distinct ethnicities have different preferences for urban park attributes [13,30,31]. For instance, Ho et al. [13] studied park preferences in the United States and found that Hispanics and African-Americans showed stronger preferences for recreational facilities and traditional park landscapes. The study of Kaplan and Talbot [32] indicated that African-Americans did not prefer dense vegetation. Gobster [21] found that people with an Asian background valued the park's scenic beauty more, and people with a Latin American background the fresh air and lake effect. White people valued the trees and other park vegetation, and African-Americans the facilities, maintenance, and activities. In a similar vein, Payne et al. [25] found that African-Americans tended to prefer the function of recreation over the conservation of park land.
Differences in park preferences between people from different geographical areas could be explained by the fact that they have varying 'images of nature' [31], or differences in landscape preferences [31] or landscape styles [33]. People may have a preference for vegetation and landscape types that they are more familiar with. Yu [27] compared park scene ratings from Chinese and Western groups and only found weak differences. However, they indicate that "for some specific Chinese landscapes, macro-cultural differences do occur because the 'foreigners' lack the knowledge of cultural meanings embodied in the landscapes" [27]. According to Yang and Kaplan [33], a Western landscape style is based on geometry and symmetry, while an Oriental landscape style is non-geometrical and asymmetrical. However, these landscape styles and values of nature seem to change over time. Traditionally, Chinese parks consisted of an enclosed landscape with a winding path to a quiet place [34]. A process of globalization and Westernization, however, has resulted in an increased number of parks with large lawn areas in Chinese cities and a growing preference for neatly maintained landscapes, though often with limited public access [34] or with entrance fees [17]. Moreover, the study by Buijs et al. [31] indicated that rather than symmetrical parks, "Native Dutch people are strong supporters of the wilderness image, while immigrants generally support the functional image" [31]. Furthermore, differences in park preferences between people from different countries could be explained by the fact that they use parks for different activities, and thus value parks for different reasons [22]. Özgüner [22] found that Turkish people in Turkey used parks for passive activities, whereas Western people engaged in more active park use. Similarly, Yang et al. [34] indicated that Chinese people used parks for sitting and resting and social activities rather than active activities.
The same differences between Chinese and Western residents were found in the United States [35]. Kloek et al. [36] studied participation and outdoor recreational behavior of Turkish and Chinese immigrants compared to non-immigrants in the Netherlands. Their findings showed that respondents of Chinese descent participated less often in recreational activities and mainly participated in individual-based activities such as walking, cycling, running, relaxing, yoga, and photography. According to Jim and Chen [17], the main purposes of the residents of Guangzhou for using green spaces are relaxation, quietude, physical exercise, nature appreciation, and aesthetic pleasure. Relaxation and enjoyment of nature as well as socialization and exercise are also mentioned by the residents of Singapore as being very important [37]. Although the role of ethnicity in park preferences has received some attention, the vast majority of studies into urban park preferences have focused on a single geographical context. Moreover, while some studies have focused on differences between native residents and immigrants in one country, only a few studies have compared park preferences of Western and non-Western groups living in different countries. This study aimed to contribute to this unexplored field by comparing the park preferences of Dutch residents in the Netherlands and Chinese residents living in China. This study is an extension of the study of Van Vliet et al. [18] in which we explored the influence of urban park attributes on user preferences using an online stated choice experiment in the Netherlands. Results showed that participants particularly valued a high number of trees and flowerbeds with a diversity of flowers, and to a lesser extent, the presence of benches and play equipment. Two groups were identified in that study, namely a group that could be described as a "nature-loving group" and a group that could be described as an "amenity-appreciating group". 
The study indicated that non-Dutch respondents were more likely to belong to the amenity-appreciating class, while the Dutch were more likely to specifically value the trees and flowers. However, the non-Dutch group was too small to draw any conclusions on the effects of ethnicity. In order to assess to what extent the preferences of the Dutch are generalizable to other nationalities and geographical contexts, the aim of this study was to explicitly compare the preferences of two distinct samples, namely, a group of Dutch respondents and a group of Chinese respondents. These groups were selected because they differed significantly in geographical location, climate, and indigenous vegetation as well as in activities and values. The current study extends the study of Van Vliet et al. [18] by using data from 540 Dutch respondents, complemented with data that were collected from 719 Chinese respondents living in China using the same online survey as in the Dutch context. In this study, the pooled data of 1259 respondents were analyzed with a random parameter mixed logit model to identify differences in the preferences for park attributes between Chinese and Dutch respondents. The model controlled for the effects of personal characteristics. The rest of this paper is organized as follows. Section 2 describes the data and methods, followed by the results in Section 3. In Section 4, the findings are discussed and directions for future research are presented. A short conclusion completes the article. --- Materials and Methods While most studies on park preferences use a qualitative approach consisting of on-site interviews with park users, some have used a quantitative approach consisting of surveys asking respondents to rate the importance of several park attributes [13] or a conjoint method in which they let participants evaluate several park alternatives [11,12,14].
While the qualitative approach allows for in-depth investigation of a problem, the number of respondents for these studies is usually low. Quantitative approaches such as the conjoint analysis method allow data to be gathered on the preferences of large numbers of people. Therefore, in this study, to investigate the differences between Dutch and Chinese preferences for parks, a stated choice experiment was conducted. The same research design and method of data collection were used as described in [18]. --- Setup Stated Choice Experiment Based on a literature review and an expert meeting, the attributes and attribute levels listed in Table 1 were selected. In order to create choice alternatives, these attribute levels were combined according to an orthogonal experimental design, generating 16 alternative parks. The hypothetical parks were presented using videos, whereby each video represented a walk through the park. Figure 1 shows screenshots of three alternatives with varying attributes. Choice sets were created by randomly combining two alternatives. Per choice set, the two videos were shown next to each other on the screen. Respondents were asked to watch both videos one after the other rather than simultaneously, so that they could pay attention to each video. They were asked to watch each video until the end and then answer the question "Which park would you prefer to visit?". To each of these pairs, a 'no preference' option was added, resulting in three alternatives per choice set. The 'no preference' option allowed us to estimate a constant, which represents the likelihood that respondents choose one of the two videos as the preferred one. Figure 2 shows a screenshot of a choice task, where video A is playing. --- Data Collection Data were collected by means of an online questionnaire. Watching the video of a hypothetical park took 26 s. As one choice task contained two videos, handling one choice task took roughly 1 min.
To limit the total duration of the questionnaire, only four choice sets were presented to each respondent. The Dutch respondents were recruited via the survey panels of two cities in the south of the Netherlands, namely, Hertogenbosch and Eindhoven, and via social media. The Chinese respondents were recruited via an online survey platform that is accessible to the general public between 30 September and 18 November 2020. The Dutch study was approved by the Ethical Review Board of the Built Environment Department of the Eindhoven University of Technology. Respondents had a chance of winning one of ten gift cards worth 25 euros. On completion of the online survey, the respondents in China received 0.1~10 RMB at random. --- The Random Parameter Mixed Logit Model for Data Analysis The random parameter mixed logit (ML) model was used to analyze the stated choices of all respondents. This is a more advanced version of the well-known multinomial logit (MNL) model, taking into account the panel structure of the data and taste heterogeneity. The basic multinomial logit model is defined as:

P_i = \frac{e^{V_i}}{\sum_j e^{V_j}}

where P_i is the probability that an individual chooses alternative i from a set of alternatives, and V_i is the structural utility of alternative i.
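The MNL choice probability defined above can be illustrated with a short numerical sketch (the utilities below are hypothetical values for two park videos and the 'no preference' option, not estimates from this study):

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P_i = exp(V_i) / sum_j exp(V_j)."""
    exp_v = [math.exp(v) for v in utilities]
    total = sum(exp_v)
    return [e / total for e in exp_v]

# Structural utilities for park A, park B, and 'no preference'
# (illustrative numbers only).
v = [0.8, 0.3, -0.5]
p = mnl_probabilities(v)
print([round(x, 3) for x in p])  # probabilities sum to 1
```

The alternative with the highest structural utility receives the highest choice probability, which is the core behavioral assumption of the model.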
The structural utility is the sum of weighted X-variables:

V_i = \sum_n \beta_n X_{in}

The X-variables represent the levels of the attributes by means of dummy coding, resulting in L-1 parameters for each attribute with L levels. Per attribute, the expected least attractive level was coded 0 (or 0 0, depending on the number of levels). Therefore, X_{in} represents the score of the n-th variable of alternative i, and \beta_n is a parameter to be estimated for variable n. In the basic MNL model, the β-parameters represent the mean weights of the variables. However, the random parameter model not only estimates the mean effect of the variables, but also determines the standard deviation around the mean. This can be denoted as:

V_i = \sum_n \beta^*_n X_{in}

with \beta^*_n being a parameter randomly drawn from a normal distribution with mean \beta_n and standard deviation \sigma_n. The size of the standard deviation represents the amount of taste heterogeneity in the sample regarding that variable. The utility of the hypothetical alternatives depends on the attribute levels, represented by the X-variables. The utility of the 'no preference' option is defined by the constant, and all other X-variables are equal to 0.

Table 2. Coding of the attribute levels.
- Constant: hypothetical park preference (X0 = 0); no preference (X0 = 1)
- Number of trees: some trees (X1 = 1, X2 = 0); many trees (X1 = 0, X2 = 1); few trees (X1 = 0, X2 = 0)
- Composition of trees: one cluster (X3 = 1, X4 = 0); multiple clusters (X3 = 0, X4 = 1); spread (X3 = 0, X4 = 0)
- Public furniture: many benches (X5 = 1); some benches (X5 = 0)
- Cleanliness: no litter (X6 = 1, X7 = 0); some litter (X6 = 0, X7 = 1); much litter (X6 = 0, X7 = 0)
- Paths: side paths (X8 = 1); one main path (X8 = 0)
- Playgrounds: playground (X9 = 1); none (X9 = 0)
- Flowers: mono-flowerbeds (X10 = 1, X11 = 0); diverse flowerbeds (X10 = 0, X11 = 1); no flowerbeds (X10 = 0, X11 = 0)

We wanted to measure the differences between the preferences of Dutch and Chinese respondents.
This means that we should test for differences in the parameters between the two samples. Therefore, we used contrast parameters. To estimate these contrast parameters, the specification of the utility of the alternatives has to be extended by adding a contrast variable. We added a q-index to differentiate between the two samples of respondents:

V_{iq} = \sum_n \left( \beta^*_n X_{in} + \delta_n \Delta_q X_{in} \right)

The contribution of a variable is then measured as (\beta^*_n + \delta_n \Delta_q) X_{in}, where \beta^*_n is the random parameter for the n-th variable, the contrast parameter \delta_n measures the difference between the mean \beta_n for the Dutch and Chinese respondents regarding the n-th variable, and the contrast variable \Delta_q is defined as +1 for Dutch respondents and -1 for Chinese respondents. If the \delta_n-parameter is significant, the mean weight of variable n differs significantly between the two samples. In addition, we estimated the standard deviation of the β-parameters for both countries separately, resulting in \sigma_{nc}, with c representing the Netherlands (NL) or China (CN). As the samples were quite different in some personal characteristics, these personal characteristics should also be taken into account. Therefore, the interactions between personal characteristics and X-variables were added as follows:

V_{iqc} = \sum_n \left( \beta^*_n X_{in} + \delta_n \Delta_q X_{in} + \sum_j \gamma_{njc} X_{in} Z_{qj} \right)

Therefore, for each X-variable n, the product of X_{in} with each of the 10 personal characteristics was added. The personal characteristics were effect coded, as the mean effect of effect-coded variables is equal to 0. If \gamma_{njc} is significant, the j-th personal characteristic influences the preferences. Note that the effects of the personal characteristics may differ per country c. For each country separately, the equation can be rewritten as:

V_{iqc} = \sum_n \left( \beta^*_n X_{in} + \delta_n X_{in} + \sum_j \gamma_{njc} X_{in} Z_{qj} \right), \quad c = NL
V_{iqc} = \sum_n \left( \beta^*_n X_{in} - \delta_n X_{in} + \sum_j \gamma_{njc} X_{in} Z_{qj} \right), \quad c = CN

The model was estimated using a stepwise approach.
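The contrast and random-parameter structure can be sketched as follows, assuming hypothetical parameter values (the β_n, δ_n, and σ_n used here are not estimates from the study):

```python
import random

def country_mean(beta, delta, contrast):
    """Mean weight of a variable per country: beta_n + delta_n * Delta_q,
    with Delta_q = +1 for Dutch and -1 for Chinese respondents."""
    return beta + delta * contrast

def draw_individual_beta(mean, sd, rng):
    """Random parameter beta*_n: one draw from N(mean, sd),
    representing one respondent's individual taste."""
    return rng.gauss(mean, sd)

# Hypothetical parameters for one attribute level (not study estimates).
beta_n, delta_n, sigma_n = 0.50, 0.20, 0.30
nl_mean = country_mean(beta_n, delta_n, +1)   # Dutch mean weight
cn_mean = country_mean(beta_n, delta_n, -1)   # Chinese mean weight

rng = random.Random(1)
nl_tastes = [draw_individual_beta(nl_mean, sigma_n, rng) for _ in range(5)]
print(nl_mean, cn_mean)
print(nl_tastes)  # five individual-level taste draws around the Dutch mean
```

Because Δ_q enters with opposite signs, a significant δ_n immediately yields the two country-specific means, while σ_n captures how widely individual tastes scatter around them.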
First, a multinomial logit model was estimated, and insignificant interaction effects were removed from the model, starting with the most insignificant effects, until all remaining interaction effects were significant at a p-level of 0.15. Next, the random components were added, and the interaction effects not significant at a p-level of 0.10 were further removed. This was done to ease the interpretation of the model. The mixed logit model was estimated by using 1000 Halton draws to calculate the simulated probabilities, and by taking into account the panel structure of the data.

Table 3. Coding of the personal characteristics.
- Gender: female (Z1 = 1); male (Z1 = -1); other/missing (Z1 = 0)
- Age: younger than 35 (Z2 = 1, Z3 = 0); 35-54 (Z2 = -1, Z3 = -1); 55 and older (Z2 = 0, Z3 = 1)
- Occupation: full-time (Z4 = 1, Z5 = 0); part-time (Z4 = 0, Z5 = 1); unemployed/retired (Z4 = -1, Z5 = -1); missing (Z4 = 0, Z5 = 0)
- Education level: low education (Z6 = -1); high education (Z6 = 1); missing (Z6 = 0)
- Income level: low (Z7 = 1, Z8 = 0); medium (Z7 = -1, Z8 = -1); high (Z7 = 0, Z8 = 1); prefer not to answer or missing (Z7 = 0, Z8 = 0)
- Household: with children (Z9 = 1); without children (Z9 = -1); missing (Z9 = 0)
- Disability: not disabled (Z10 = -1); disabled (Z10 = 1)

--- Results --- Sample Description Table 4 shows the descriptive results of the sample characteristics. The average age of people in the sample from China was much lower compared to the average age of people in the sample from the Netherlands. The Chinese sample consisted mainly of young respondents, while the majority of the Dutch sample belonged to the category of 55 and over. The total sample consisted of slightly more men than women. As can be seen, the sample from the Netherlands contained fewer people working full-time compared to the Chinese sample.
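The simulated-probability step of the mixed logit estimation described above (averaging MNL probabilities over Halton draws of a random parameter) can be sketched in pure Python; this is a one-parameter toy illustration, not the authors' estimation code:

```python
import math
from statistics import NormalDist

def halton(index, base=2):
    """Van der Corput / Halton point in (0, 1) for a 1-based index."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def simulated_probability(x_alts, chosen, beta_mean, beta_sd, n_draws=1000):
    """Mixed logit: average the MNL probability of the chosen alternative
    over Halton draws of a random parameter beta ~ N(mean, sd)."""
    nd = NormalDist(beta_mean, beta_sd)
    total = 0.0
    for k in range(1, n_draws + 1):
        beta = nd.inv_cdf(halton(k))          # quasi-random normal draw
        exp_v = [math.exp(beta * x) for x in x_alts]
        total += exp_v[chosen] / sum(exp_v)
    return total / n_draws

# Two alternatives differing in one dummy-coded attribute (x = 1 vs x = 0);
# beta_mean and beta_sd are illustrative, not estimates from the study.
p = simulated_probability([1.0, 0.0], chosen=0, beta_mean=0.5, beta_sd=0.3)
print(round(p, 3))
```

Quasi-random Halton draws cover the unit interval more evenly than pseudo-random draws, which is why they are commonly used to reduce simulation noise in mixed logit estimation.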
The sample from China consisted of considerably more people who were living in a household with children compared to the Dutch sample. Regarding education, the high category consisted of respondents with at least a bachelor's degree. Respondents with a lower education level belonged to the 'low education' category. In both samples, the share of highly educated people was about 60%. For the Netherlands, the middle-income category represented respondents with a net yearly household income between 30 and 50 thousand euro, while for China, the middle-income category was defined as between 100 and 300 thousand RMB per year. Compared to the Dutch sample, more Chinese respondents were in the low-income group, which was expected as the Chinese respondents were younger than those from the Netherlands. Finally, most people in the total sample did not have any disabilities. --- Random Parameter Mixed Logit Model Results Table 5 shows the results regarding the random parameter ML model as specified above. The model performed well, with McFadden's ρ² = 0.226. Regarding the 'constant', the mean effect was significant and negative. This means that the utility of the 'no preference' option was negative, although it should be noted that β_0 is a random value and can incidentally also take positive values. As X_0 is equal to zero for the hypothetical alternatives, the parameter does not affect the utility of these alternatives. Therefore, based on β_0, the probability that the 'no preference' option will be chosen is in general smaller than the probability that a hypothetical alternative will be chosen. However, there were differences between the Dutch and Chinese respondents, as some of the interaction effects related to the constant were significant.
For young Dutch respondents, the mean utility of the 'no preference' option was even more negative, but for older or 'highly educated' Dutch respondents, the mean utility increased by 0.571 and 0.274, respectively, making the 'no preference' option more likely to be chosen. Additionally, for the middle-aged Dutch respondents, the net effect was positive. On the other hand, Dutch part-timers were more reluctant to choose the 'no preference' option. For Chinese respondents with high incomes, the utility of the 'no preference' option increased by 0.29, and it decreased by the same amount for the medium-income group. Finally, the standard deviations regarding the error component were significant for both countries, meaning that there were significant taste differences within both the Dutch and the Chinese respondents. As the standard deviation for the Dutch respondents was larger than for the Chinese respondents, the likelihood of selecting the 'no preference' option varied more among the Dutch than among the Chinese respondents. The results regarding the attribute level 'some trees' were rather straightforward. For Dutch respondents, the mean part-worth utility of 'some trees' was equal to 0.449 + 0.214 = 0.663, and for the Chinese respondents it was equal to 0.449 - 0.214 = 0.235. Note that the δ-parameter for 'some trees' was significant, indicating a significant difference between the two samples. Interaction effects with personal characteristics were insignificant. This simply means that 'some trees' increased the utility of a hypothetical alternative by on average 0.663 or 0.235, depending on the country. Thus, Dutch respondents attached more value to 'some trees' than the Chinese respondents. Taste differences were reflected by different standard deviations.
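The part-worth calculation above (0.449 ± 0.214 for 'some trees') and its translation into choice probabilities can be sketched as follows. This is an illustration under +1/-1 country coding (Netherlands = +1, China = -1), not the authors' estimation code, and the two-alternative choice set at the end is hypothetical:

```python
import math

# Part-worth utility of 'some trees': a common estimate plus a
# country interaction (delta) under +1/-1 effects coding,
# using the values reported in the text.
beta_some_trees = 0.449
delta_country = 0.214

def part_worth(country_code):
    return beta_some_trees + delta_country * country_code

u_dutch = part_worth(+1)    # 0.449 + 0.214 = 0.663
u_chinese = part_worth(-1)  # 0.449 - 0.214 = 0.235

# In a logit model, utilities map to choice probabilities via softmax.
def logit_probs(utilities):
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical choice set: an alternative with 'some trees' versus a
# baseline alternative with utility 0, for a Dutch respondent.
probs = logit_probs([u_dutch, 0.0])
```

The sketch makes the interpretation concrete: a positive part-worth raises the alternative's share of the softmax, so the Dutch, with the larger utility, are more likely to pick the 'some trees' alternative than the Chinese, all else equal.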
Now that the interpretation of the parameters has been explained, we concentrate on the main effects of the attributes and the main differences between the samples regarding the attributes and the standard deviations. Note that the main reason for incorporating the interaction effects with personal characteristics was to reduce bias in these parameters. Figure 3 graphically presents the parameters. Regarding trees, both samples in general preferred 'some trees' over 'few trees', and 'many trees' over 'some trees'. Thus, the more trees, the better. For the Dutch respondents, this effect was clearly larger. Dutch respondents were not likely to prefer parks with just 'one cluster of trees'; instead, they preferred 'multiple clusters' and, to a lesser extent, 'trees being spread' over the park. This was different for Chinese respondents, who preferred both 'one cluster' and 'spread trees' over 'multiple clusters'. Just as with the number of trees, the Dutch respondents were more pronounced in their preferences. Regarding furniture, the Dutch clearly preferred 'many benches' over 'some benches'. The Chinese respondents did not have clear preferences regarding the number of benches. Remarkably, litter appeared to have no significant effect on the preferences. There were on average no differences between 'no litter', 'some litter', or 'much litter' for both samples. The Chinese respondents did show significant individual differences regarding their preferences for litter: some of the Chinese respondents preferred 'no litter' or 'some litter' over 'much litter'. 'Side paths' were slightly preferred over just one 'main path' by the Dutch sample, while the Chinese respondents clearly preferred the 'side paths'. Here, both samples showed high standard deviations, indicating substantial differences in preferences within each sample.
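A significant standard deviation means the taste parameter for an attribute varies across respondents. If, as in a random parameter logit, that parameter is normally distributed, the share of respondents with a positive taste for the attribute is Phi(mean/sd), where Phi is the standard normal CDF. A hedged sketch; the mean and standard deviation below are illustrative values, not estimates from Table 5:

```python
import math

def share_positive(mean, sd):
    """Share of a normally distributed taste parameter above zero:
    P(beta > 0) = Phi(mean / sd), with Phi computed via erf."""
    return 0.5 * (1.0 + math.erf(mean / (sd * math.sqrt(2.0))))

# Hypothetical random parameter: mean part-worth 0.663, std. dev. 0.9.
share = share_positive(0.663, 0.9)
# roughly 77% of respondents would have a positive taste for the attribute
```

This kind of calculation is the usual way to read a mixed logit table: a large standard deviation relative to the mean implies a non-trivial minority of respondents whose preference runs in the opposite direction from the average.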
The Chinese sample was on average not in favor of a 'playground', while the Dutch preferred having a 'playground' in the park, especially when they had children. Still, there was considerable taste variation within the Dutch sample. Furthermore, regarding flowerbeds, the Chinese respondents did not have clear preferences on average, although the appreciation of 'diverse flowerbeds' differed considerably among the Chinese respondents. The Dutch respondents agreed that 'monotonous flowerbeds' and, to an even greater extent, 'diverse flowerbeds' added value to parks compared to 'no flowerbeds' at all. --- Discussion and Future Research Directions This study aimed to gain more insights into the park preferences of Dutch and Chinese residents, especially the differences between these two groups. First, the negative constant indicates that respondents from both countries were very unlikely to choose the 'no preference' option. This suggests that they noticed differences between the alternative videos and had a preference for one alternative over the other. Next, the findings indicated that both groups preferred parks with 'many trees'. Dutch respondents had a more outspoken preference for this attribute compared to the Chinese respondents. Dutch respondents preferred trees in multiple clusters or trees being spread over the park, while Chinese respondents preferred trees spread or trees in one cluster over multiple clusters. The Dutch also showed a stronger preference for flowerbeds, especially for 'diverse flowerbeds', compared to the Chinese respondents. This is in line with the study by Buijs et al. [31], who found that Dutch people tended to prefer wilderness images compared to immigrants in the Netherlands. In addition, a study by Gobster [21] showed that White people preferred trees and other vegetation, while Asian people valued the scenic beauty of a park more.
Other studies [17,34] have indicated that Chinese residents increasingly value parks with well-designed and maintained large green sites. Therefore, it is likely that Chinese people prefer more open parks compared to Dutch people, who prefer more trees and wilderness aspects. The results showed a general preference for parks with 'side paths', although this preference was stronger among the Chinese respondents. This does not fully align with the findings of Kloek et al. [36], who found that Chinese immigrants were more involved in individual activities such as walking and running; note, however, that the present study compared respondents living in different countries. The high standard deviations indicated significant individual differences in preferences within each sample. Dutch respondents were found to show a strong preference for parks with 'many benches', whereas Chinese respondents did not have a clear preference for benches. This contrasts with [34], which concluded that Chinese people used parks for sitting and resting. Dutch respondents were also found to prefer having a 'playground' in the park, especially when they had children, while the Chinese sample seemed indifferent regarding the playground. This might be related to the fact that Chinese people value parks for their quietness and beautiful views [17] and use them for less active activities [34]. The amount of litter was not found to affect the park preferences of either the Dutch or the Chinese, although the Chinese respondents showed significant individual differences regarding their preferences. This contrasts with the findings of other studies, which indicated that cleanliness and maintenance affected park users' preferences [11,21,22] and were specifically valued by Chinese residents [17,34]. A possible explanation for the absence of a significant litter effect could be that the litter was not very noticeable in the virtual environments, which generally looked rather clean.
Moreover, the virtual environments did not include smells related to litter or dog excrement in real parks. Table 6 shows the most preferred attribute levels for each country. As can be seen, there were clear differences between the two samples regarding the composition of trees and the presence of a playground. The Dutch respondents preferred multiple clusters of trees, while the Chinese respondents preferred trees to be spread or in one cluster. In addition, the Dutch sample preferred a playground, while the Chinese sample did not. While the Dutch had a preference for many benches, the Chinese had no clear preference regarding public furniture. Table 6 also shows that for cleanliness, the respondents of both countries had no clear preference. The overview in Table 6 can be used as guidelines by urban park designers. While we found some significant differences in urban park preferences between the Dutch and Chinese respondents, it is not clear how these differences can be explained. As indicated in the introduction, several possible explanations exist. The differences in preferences could be due to differences in the preferred activity types at parks, or to differences in vegetation types or landscapes that people are familiar with. Further research is needed to investigate the mechanisms underlying these differences in preferences for park attributes. Aside from differences between the two countries, several differences were found within the samples of Dutch and Chinese respondents related to personal and household characteristics such as age, gender, work status, income, household composition, and physical ability. This is in line with several other studies that found personal characteristics affected park preferences . The significant standard deviations showed that there were preference variations related to the number of trees, litter, paths, playground, and flowerbeds. 
For urban designers, it is therefore important to take these differences into account and refrain from designing parks for 'average' residents. Parks should be inclusive and attractive to different target groups, varying in ethnicity, age, gender, and physical ability. While this study has provided relevant insights into the park preferences of Dutch and Chinese residents, several directions for further research can be given. First, using a stated-choice approach limits the number of attributes that can be included. For instance, this study only manipulated the number and composition of trees, while other studies have found that the height of trees is important in predicting people's subjective well-being [41]. Future research could analyze the preferences for trees in more depth. In addition, preferences regarding types of flowers and wildlife habitats could be analyzed in more depth. Moreover, we used only one specific park design as a baseline for the choice alternatives. As a result, the generalizability of the findings to other types of parks is limited. The base park was modeled on a typical Dutch neighborhood park of around 3.5 hectares, with grass, beeches, and birches, surrounded by semi-detached and detached houses and three apartment blocks. It could also have been modeled on a typical park in China. Although the use of videos of virtual environments is useful and more reliable than static images for investigating preferences regarding environments [42], evaluations of virtual and real environments have been found to differ [43]. Moreover, the method of using videos of virtual environments is rather passive. Respondents might feel more engaged or more present in the environment when they can walk through and interact with the environment using their keyboard or in an immersive VR environment. Another limitation of this study concerns the maintenance of the park.
While we manipulated the amount of litter, no significant preferences were found regarding this attribute. This might be because the variations in cleanliness were more subtle than variations in other attributes. While the litter was of a realistic size, people may not have noticed the manipulation. Still, it would be expected that the degree of cleanliness in a park would be important to users, for instance, for their sense of safety. It would also be relevant to test the effect of smell in this regard. This could be included in an immersive virtual reality lab-based experiment with virtual parks, or in a study in real park environments. Further research could focus on the design and presence of equipment and amenities such as litter bins, public toilets, dog walking areas, and dog toilets. In addition, it would be interesting to further examine the influence of the maintenance of greenery. Attributes such as the length of the grass, the presence of weeds, and the wilderness of the flowerbeds could also be manipulated in virtual parks. This is likely to make these virtual parks more realistic. In our stated choice experiment, we only asked respondents to indicate which park alternative they preferred. However, we do not know why they preferred a park. Further research should aim to understand how urban park attributes affect satisfaction as well as affective experiences or emotions. This could help to design parks where people feel safe and happy, or experience a sense of place, which in turn can contribute to their subjective well-being. Aside from the spatial attributes of the park, other aspects of a park visit such as type of activity, time of the day, company, and time spent could be important influences on people's preferences and subjective well-being [44]. For example, people who visit a park for a walk on their own probably have different preferences than people who go to the park in the afternoon with their children.
Future research on park preferences should incorporate these aspects. Finally, this research could be expanded to other geographical or cultural contexts to further investigate differences in urban park preferences. In addition, more detailed research related to the use of parks and the effect on subjective well-being would be welcome. This could provide relevant guidelines for the design of inclusive parks that are attractive to different target groups. --- Conclusions This study used an online stated-choice experiment with videos of simulated parks to compare the preferences of Dutch and Chinese residents regarding different park attributes. Data from 1259 respondents were collected: 540 Dutch respondents and 719 Chinese respondents. The data were analyzed with a random parameter mixed logit model to identify differences in preferences for park attributes between the Chinese and Dutch, while controlling for the effects of personal characteristics. The results showed that the Dutch had stronger preferences for more trees and flowers, more benches, and play facilities, while the Chinese valued multiple paths in the park. There was a striking difference regarding the composition of trees. The Dutch liked parks with multiple clusters of trees and strongly disliked parks with only one cluster of trees. In contrast, the Chinese disliked parks with multiple clusters of trees. This study confirms that differences in park preferences exist between Dutch and Chinese residents. These differences are likely related to differences in park use, different images of nature, or landscape preferences. In addition to differences between the respondents of the two countries, the results showed significant standard deviations, indicating that there were taste differences in park preferences within the two samples. Personal characteristics were added to the model as control variables because the samples differed in these characteristics.
While our aim was not to explicitly study the effect of personal characteristics on the preferences for park attributes, the significant interaction effects show that park preferences are related to age, household composition, income, and physical ability. The findings of this study can be used as design guidelines by urban planners and landscape designers to design attractive and inclusive parks for different target groups. --- Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijerph19084632/s1, Video S1: Video of alternative 7. --- Data Availability Statement: Data are available upon request to the corresponding author. --- Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. ---
Urban parks play an important role in tackling several urban challenges such as air pollution, urban heat, physical inactivity, social isolation, and stress. In order to fully seize the benefits of urban parks, it is important that they are attractive for various groups of residents. While several studies have investigated residents' preferences for urban park attributes, most of them have focused on a single geographical context. This study aimed to investigate differences in park preferences, specifically between Dutch and Chinese park users. We collected data in the Netherlands and China using an online stated choice experiment with videos of virtual parks. The data were analyzed with a random parameter mixed logit model to identify differences in preferences for park attributes between Chinese and Dutch citizens, controlling for personal characteristics. Although the results showed a general preference for parks with many trees, several differences were found between the Dutch and Chinese respondents. These differences concerned vegetation (composition of trees and flowers), the presence of benches and play facilities, and could probably be explained by differences in park use, values of nature, and landscape preferences. The findings of this study can be used as design guidelines by urban planners and landscape designers to design attractive and inclusive parks for different target groups.
--- Introduction This study explores the motivations of owners of small to medium-sized enterprises (SMEs) in Indianapolis's fourteen Black-majority neighborhoods to create social value while maximizing profits. Social entrepreneurship began drawing the attention of academic researchers in the 1990s. The early definitions discussed the roles of mainly nonprofit organizations and their executives as agents of social change. Dees refers to a social mission carried out with the discipline, innovation, and determination to run a business. Thompson et al. discuss improvements to meet social needs that the state welfare system has not met. Recent definitions have added the term commercial entrepreneurship to explore the role of business ventures as agents of social change. Other authors refer to sustainable ventures that combine "business principles with a passion for social impact" and a desire to change the social equilibrium to reach a better state. Social entrepreneurs are individuals who strive to create social value by employing business concepts in new and "entrepreneurial" ways that improve the lives of marginalized segments of a population who are unable or incapable of improving their conditions. Martin and Osberg define a social entrepreneur as someone who "targets an unfortunate but stable equilibrium that causes the neglect, marginalization, or suffering of a segment of humanity, which brings to bear on this situation his or her inspiration, direct action, creativity, courage, and fortitude; and who aims for and ultimately affects the establishment of a new stable equilibrium that secures permanent benefit for the targeted group and society at large". The discrimination experienced by Black entrepreneurs in this country has deprived them of such opportunities. From this context, the study explores the following two research questions: 1. How can business owners of SMEs in the city's Black-majority neighborhoods be considered social entrepreneurs? 2.
How do business owners overcome existing barriers to owning a business to improve lives in their neighborhoods? --- Literature Review --- Business owners in Black-majority neighborhoods The widely held stereotype that Black-majority neighborhoods are poor and dangerous places is misleading. Black-majority neighborhoods are diverse and changing like other neighborhoods, but poverty and unemployment remain a reality. A review of the available literature points to conflicting strategies to address these issues. For example, Porter argues against investing in minority supplier development programs because Black-owned businesses are too small to contribute significantly to neighborhood revitalization. On the other hand, Sawicki and Moody argue that Black-owned businesses in depressed neighborhoods contribute to regeneration by offering employment opportunities to a disadvantaged labor force. Several studies explored the challenges minority business owners face in accessing capital compared to equally creditworthy White-owned businesses. Moreover, among the minority community, Black-owned businesses with similar credit scores as other minority-owned businesses are the least likely to access the financing they need. Several studies of firm-level data document this disparity. For example, a mystery shopping study concluded that White borrowers were more likely to be offered follow-up appointments with bankers than better-qualified Black borrowers. The authors concluded that a central challenge to Black entrepreneurs is not a lack of assets but access to the capital and credit needed to grow their businesses. Thompson Cochran et al. argued that the passage of the Community Reinvestment Act (CRA), revised in 2020, remains one of the more promising strategies for helping underserved populations and communities accumulate savings and access credit. CRA pressured commercial banks and thrifts to consider the credit needs of all borrowers, including those in low-income neighborhoods.
The pressure applied to banks resulted in a net increase in minority owners seeking financing to start or expand businesses in these neighborhoods. The average level of startup capital for Black entrepreneurs is around $35,000, a third of that of their White counterparts. Perry reports that almost 40 percent of Black business owners feel discouraged from applying for business loans. Less than a fifth of them report receiving assistance from loan officers in completing business loan applications. According to the Network Journal, 80 percent of Black-owned businesses fail within the first 18 months. Common reasons cited include poorly designed business plans, lack of market research, poor management, and no access to capital. Black-owned businesses in Black-majority neighborhoods are essential to the financial backbone of their communities. In addition, Black-owned businesses are more likely to hire Black workers. According to Maxwell et al., Black Americans own less than 20 percent of small businesses with employees. That translates to just over 2.6 million Black-owned businesses.

Events and Tourism Review Vol. 6, 11-32, DOI: 10.18060/27602 Copyright © 2023 Sotiris Hji-Avgoustis and Suosheng Wang This work is licensed under a Creative Commons Attribution 4.0 International License.

Their numbers have risen by more than 30 percent in the past decade, but many are struggling to survive, generating revenues of less than $100,000 annually and having few employees. A Brookings report points out that Black-owned businesses create an average of ten jobs, while non-Black businesses create twice as many jobs. If Black-owned businesses were to match their White counterparts in job creation, they would start over 1.6 million jobs. The average pay for a non-Black-owned business is around $51,000 per year. If Black-owned businesses matched that, their employees would earn a combined $24 billion per year.
Moreover, if Black-owned businesses earned the same revenue as non-Black-owned businesses, total revenue would increase by $5.9 trillion. Investments at these levels would go a long way toward reducing social disparities in Black-majority neighborhoods. There are encouraging signs that point to a rise in the number of Black business owners across the nation since the recent pandemic. Revised estimates from the University of California, Santa Cruz, reported initially by Fairlie, point to an increase of 38% in Black-owned businesses between February 2020 and August 2021. In Indianapolis, the Indy Black Chamber of Commerce reported a 33% increase in membership, adding eighty-two new small businesses to their organization in 2021. --- Theoretical framework Shane and Venkataraman define commercial entrepreneurs as those who can mobilize resources to turn opportunities into businesses. Another important consideration is the social context's influence on commercial entrepreneurs' behavior. Sarason et al. describe commercial entrepreneurship as a "social undertaking" only explained within the context of social systems. As commercial entrepreneurship becomes increasingly socially aware, society's expectations of businesses' social outcomes will continue to rise. Like their commercial counterparts, social entrepreneurs' interests also lie in identifying and pursuing opportunities through creating and operating new businesses, often to address unmet customer needs. In many instances, their interests are even more ambitious than those of commercial entrepreneurs, as they seek to address societal challenges, such as poverty and social exclusion, within the context of social value creation. Perrini describes how social value creation can improve social conditions, such as working conditions, employment opportunities, or integration and participation within the community.
For this paper, we define the term social entrepreneurship as an entrepreneurial activity with a social purpose in either the for-profit sector, corporate social entrepreneurship, or the nonprofit sector. The underlying desire is to create social value. Seelos and Mair discuss the creation of businesses to serve the disadvantaged by creating innovative solutions to solve existing problems. Social entrepreneurs value profit generation and an opportunity to create change by providing community value towards building a sustainable community. This definition supports the broader understanding that social entrepreneurs aim to improve their communities' well-being through actions. Choi and Majumdar, Driver, and Porter and Kramer go a step further and praise the benefits of integrating economic benefits with social value creation. This integration becomes the main characteristic of social entrepreneurship. Several recent academic case studies address the impact of social entrepreneurship in and for marginalized communities, often in developing countries. However, very few studies investigate the impact of SMEs as agents of social change in underserved communities in developed countries, particularly in the United States. Furthermore, the few studies available focus on the challenges faced by ethnic minority and women-owned businesses, while a few others focus on the role of cultural entrepreneurship in underserved urban areas. From the above theoretical position, we use an adapted framework proposed initially by Kimbu and Ngoasong to investigate whether business owners in Black-majority neighborhoods can be social entrepreneurs. In doing so, we explore how they overcome existing barriers to owning a business while improving the lives of people around them.
Kimbu and Ngoasong proposed the original framework to illustrate how women tourism entrepreneurs in Cameroon could meet their own commercial and social transformation goals while contributing to meeting the needs of the very poor and underprivileged in their communities. --- Research context and methods --- The context of business owners in Black-majority neighborhoods Most of the literature on minority entrepreneurship or entrepreneurship in disadvantaged communities has focused primarily on the entrepreneurial activities of immigrant groups. Entrepreneurial activity in Black-majority neighborhoods has received little attention in the academic literature. According to the Center for Research on Inclusion and Social Policy, 48% of Black Marion County residents live in a Black-majority neighborhood where 88% of the homes are valued at less than the county's median home value. Our research setting is Indianapolis, the capital of the state of Indiana and the seat of Marion County. Indianapolis has a population of almost 900,000; 29% of the population is Black or African American. The fourteen Black-majority neighborhoods that comprise the study's population are identified using Indy Vitals. This online community information system relies on population data from the US Census Bureau's 2020 Decennial Census. Indianapolis presents a stark contrast between socioeconomic statuses, and this study aims to identify social entrepreneurial activity in some of the city's most depressed neighborhoods. We identified fourteen Black-majority neighborhoods whose demographic indicators, such as median household income and poverty rates, trail other neighborhoods in the city. Zip codes are only utilized to define targeted neighborhoods. Therefore, some neighborhoods cross over multiple zip codes. In this project, the data are drawn mainly from the 2019 census data gleaned from Indy Vitals and the SAVI database run through the Polis Center in Indianapolis.
In terms of institutional context, the devaluation of the housing market in the city's Black-majority neighborhoods points to the lasting effects of past racist policies. These policies prevent Black families from building wealth and make it much harder for business owners to open and maintain a business than their counterparts in other neighborhoods. A recent SAVI report studied over a million loan applications filed in Central Indiana between 2007 and 2020. It concluded that there are still racial and geographic differences in loan denial rates. The loan denial rates tended to be higher in low-income neighborhoods and neighborhoods of color. A reason for this disparity is that the applicant's income and creditworthiness are not the only criteria considered. Lenders consider a neighborhood penalty when approving or denying home purchase and refinance loan applications. --- Research design and data collection More research is needed about SMEs' contributions to neighborhood development, their characteristics, and their owners' motivations. These business owners face challenges ranging from higher crime rates, poor infrastructure, and poor employee skills to difficulty accessing capital to sustain and grow their businesses. For example, a study by Morland et al. found that wealthier neighborhoods have four times as many supermarkets as Black-majority neighborhoods, and the stores in Black-majority neighborhoods are much smaller. Kugler et al. reported that businesses in low-income areas have fewer employees, lower average payrolls, and are more likely to have no paid employees than businesses in other neighborhoods. Moreover, most businesses in low-income areas are Black-owned. We adopted a mixed methods research design consisting of online structured questionnaires and informal observations from the study's participants. This approach is considered suitable for studying SME owner-operator goals and the performances of their firms.
Study participants were selected using purposive sampling, a non-probability sampling technique in which researchers rely on their judgment when choosing members of the population to participate in a study. The selection of participants was based on three criteria, the first being geographic sampling. The study was promoted online through community gatekeepers. These community gatekeepers represent organizations that have already earned the trust of their respective communities, and their involvement in promoting the study allowed the researchers to gain trust, establish rapport, and form empathetic, non-hierarchical relationships with neighborhood business owners. This phase of the study lasted between August 2021 and May 2022. Since some business owners spoke Spanish, the questionnaires were available in Spanish and English. During the recruitment phase, the lead author discussed the study and offered to answer questions. These informal conversations led to valuable data that complemented and added to the data generated through the questionnaires. A review of contemporary qualitative research literature shows that researchers are using informal conversations as part of participant observation. As Swain and King point out, informal conversations in specific settings may be the best and sometimes the only way to generate good data. The use of these conversations as a secondary data collection method met the following criteria to avoid participant identification: a) notes were not organized in a way that would allow attribution of comments to a particular participant; b) no direct quotes were used; and c) no audio or video recordings were made to document the conversations. A total of 94 completed and valid surveys were collected. The data were exported from Qualtrics to SPSS for quantitative data analyses, including descriptive statistics, independent samples t-tests, and multiple regression analyses. Specifically, respondents' demographic profiles and business backgrounds were explored with frequency analysis.
The mean ratings of all perceptual items were reported. Independent samples t-tests were employed to understand how people in different demographic and business groups perceived the items differently. Finally, multiple regression analyses were conducted to identify significant factors predicting the social entrepreneurs' motivation, challenges, business success, and community involvement. --- Findings --- Respondents' Demographic Profiles As shown in Table 3, male and female respondents are fairly evenly distributed, with male respondents accounting for 55.3% of the total sample. The modal age group is the 50s, representing 44.1% of the total. Respondents over 50 represent 72.1% of the sample. While those between 30 and 49 years of age make up 26.9% of the sample, only one participant is younger than 30. Among the respondents, 38.3% are college graduates or have postgraduate education. Around a quarter of the respondents are single, 42.6% are married, and the others are widowed, divorced, or separated. Concerning race, over half of the respondents are Black or African American, 27.3% are White, and the rest are Asian or of other races. Respondents' Business Backgrounds About half of the respondents have owned their current business for more than ten years, nearly 30% for 6-10 years, and the others for five years or less. Regarding business size measured by total assets, 41.3% of the businesses are worth $100,000 to $500,000, 33.7% are worth less than $100,000, and the remaining businesses are worth more than $500,000. None of the businesses exceed $1 million in assets. Participants operate a diverse group of neighborhood businesses; a fifth are restaurants and coffee shops. Regarding business lifecycles, 39.1% are at the growth stage and 38% at the maturity stage. Only 12% are startups. A tenth of the businesses employ only one person, half employ 2-5 people, and around a quarter employ 6-10 people.
Only one business employs more than 50 people. Regarding sources of capital, the vast majority report some form of self-financing, followed by financing from family and friends; commercial banks are the third most prevalent source of financing. --- Ratings of the Perceptual Items The perceptual items cover motivations for starting a business, barriers to starting and running a business, factors affecting business success, and benefits of community involvement. Based on the mean values, the top-rated business motivations are: 1. To gain economic independence/advancement and to be my own boss; 2. To improve the quality of life in the neighborhood; 3. To contribute something useful to society. The primary business barriers and challenges based on the mean values are: 1. Minority business owners encounter more significant difficulties than others in creating a new business; 2. Minority business owners find less support in society to create a new business than the majority group; 3. Availability of capital. The top three business success factors are: 1. Ability to manage the business successfully; 2. Ability to identify business opportunities; 3. Ability to solve problems facing the business. The top community involvement benefits are: 1. Makes my community stronger; 2. Helps me build business relationships; 3. Increases sales for my business; 4. Broadens my business network; 5. Preserves the culture and traditions of the community. --- Results of Independent Samples t-tests Independent samples t-tests were conducted to better understand how people with different backgrounds perceived these items. The independent variables are gender, education, race, business age, business status, business size, and years of experience. As illustrated in Table 4, the groups perceived many of the items significantly differently.
All items in Table 4 are significant at the threshold P-value of 0.05. For a given grouping variable, a positive t-value indicates that the mean rating of the first group is significantly higher than that of the second group; a negative t-value indicates the reverse. Specific patterns in the perceptual differences between the groups are notable. For example, regarding motivation for starting a business, males find the listed items more important in motivating them to start a business than females do. In terms of barriers and challenges, the listed barriers and challenges are rated as more critical by males, the relatively less educated, Black Americans (compared with non-Black Americans), businesses in operation five years or less, businesses at the maturity and survival stages, and businesses with relatively more employees. Regarding the success factors, male businesspeople and those with higher educational attainment perceived these abilities as more important in contributing to the success of a business than the other groups did. Finally, concerning the benefits of community involvement, males, the more educated, businesses with more than five years in operation, businesses at the maturity/survival stages, and business owners with more years of business experience view community involvement as a significant contributor to the success of their businesses.
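The sign convention described above can be illustrated with a small, purely hypothetical example in Python. The ratings below are invented for illustration, and scipy's `ttest_ind` stands in for the SPSS procedure the authors used; this is a sketch of the technique, not the study's data or code.

```python
# Illustrative sketch (not the authors' data): how the sign of an
# independent-samples t statistic maps onto group mean differences.
# The rating values below are hypothetical 5-point Likert responses.
from scipy import stats

male_ratings = [4, 5, 4, 5, 3, 4, 5, 4, 4, 5]      # first group
female_ratings = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]    # second group

# Welch's variant (equal_var=False) avoids assuming equal group variances.
t, p = stats.ttest_ind(male_ratings, female_ratings, equal_var=False)

# A positive t means the first group's mean exceeds the second group's;
# a negative t means the reverse.
direction = "first group higher" if t > 0 else "second group higher"
print(f"t = {t:.3f}, p = {p:.4f} ({direction})")
```

Swapping the two argument lists flips the sign of `t` without changing `p`, which is why reporting the group order alongside the t-value, as Table 4 does, is essential for interpretation.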
Between some college or less and college or more: --- Motivation for starting a business: To take on the challenges and risks that go with owning/running a business; To seek greater recognition and social prestige; To have my own business so that my children will inherit it; To advance further professionally rather than working for others; To leave a previous job; Overall motivation. --- Barriers to starting and running a business: Lack of qualified employees; Too much competition; Availability of capital; Too much state interference/bureaucracy; Weak economy; Absence of positive business role models; Racial discrimination; Minority business owners encounter greater difficulties than others in creating a new business; Minority business owners find less support in society to start a new business than the majority group; Unsafe surroundings; Balancing family and work life; Overall business challenge. --- Factors affecting business success: Ability to identify business opportunities; Ability to use creativity in producing and/or selling something new; Ability to solve problems facing the business; Ability to manage the business successfully. --- Benefits of community involvement: Increases sales for my business; Helps me build business relationships; Helps me model positive values for employees and applicants; Provides opportunities for feedback from customers; Makes my community stronger; Preserves the culture and traditions of the community; Overall community involvement. --- Between non-Black and Black Americans: --- Barriers to starting and running a business: Too much competition; Racial discrimination; Minority business owners encounter greater difficulties than others in creating a new business; Minority business owners find less support in society to create a new business than the majority group. --- Benefits of community involvement: Overall community involvement. --- Between 10 years or less and more than ten years of business age: --- Motivation for
starting a business: To gain economic independence/advancement and to be my own boss. --- Multiple regression models were estimated for four overall outcomes: a) overall, how motivated are you in starting a business; b) overall, how challenging is it to start/run a business; c) overall, how successful do you think you have been in starting/running a business; and d) overall, how successful has your business been with community development? As indicated in Table 5, the adjusted R2 shows the total variance explained by the independent variables remaining in the models, ranging from .168 to .385. The F-ratios are all significant, indicating that these results could hardly have occurred by chance. Furthermore, the t-values of the independent variables remaining in the models range from 2.681 to 4.449, meaning that all the partial correlations are statistically significant at 0.05. One determining variable remains in the model of "overall, how motivated are you in starting a business." Two significant variables remain in the model of "overall, how challenging is it to start/run a business." One determinant is significant in predicting "overall, how successful do you think you have been starting/running a business." Finally, two variables are significant predictors of "overall, how successful has your business been with community development." --- Discussion and Conclusion The challenges faced by business owners in Black-majority neighborhoods persist today. For example, a Fair Housing Center of Central Indiana report found that two-thirds of White families own their homes, while only a third of Black families are homeowners. In addition, a Brookings Institution report estimated that houses in Black-majority neighborhoods in the Greater Indianapolis Metropolitan Statistical Area are valued on average at $18,000 less than similar houses in non-Black-majority neighborhoods. The report also pointed out that homes in Black-majority neighborhoods are valued on average at $48,000 less than similar houses in non-Black-majority areas. Events and Tourism Review Vol. 6, 11-32, DOI: 10.18060/27602. Copyright © 2023 Sotiris Hji-Avgoustis and Suosheng Wang. This work is licensed under a Creative Commons Attribution 4.0 International License.
Finally, a different report compiled by Indiana University's Center for Research on Inclusion and Social Policy estimated the 2018 median value of houses in Indianapolis' Black-majority neighborhoods to be $87,821, about two-thirds of the city's median home value that same year. Progress in overcoming such challenges is slow, and recent news stories highlight the lasting impacts of past discriminatory practices, including redlining, in cities across the country. For example, in October 2021, Trustmark National Bank in Tennessee agreed to invest $3.85 million in loan subsidies and to open one loan production office in a majority-Black and Hispanic neighborhood in Memphis, Tennessee. In addition, Old National Bank in Indianapolis has agreed to settle a federal complaint filed by the Fair Housing Center of Central Indiana by expanding mortgage lending opportunities to Black borrowers in majority-Black neighborhoods in Indianapolis. As a result of past and current obstacles to business ownership, only 4% of businesses in Indianapolis are Black-owned, based on data compiled by Common Future and Next Street. However, according to our study, Black business ownership is 58% in the city's Black-majority neighborhoods. National trends are similar: only 2.3% of all businesses are Black-owned, even though Black people comprise 14.2% of the population. By applying the principles of social entrepreneurship, the study highlights the critical role of owners of small to medium-sized enterprises in Indianapolis's fourteen Black-majority neighborhoods in creating social value while maximizing profits. As the results of our study point out, the top three motivators for starting a business in Black-majority neighborhoods are gaining economic independence/advancement and being my own boss, improving the neighborhood's quality of life, and contributing something useful to society.
These three motivators align with the Kimbu theoretical framework, which identifies social transformation goals, commercial goals, and community needs as the three motivators of women tourism entrepreneurs in Cameroon. In our study, the social transformation goals of these entrepreneurs include their valuable contributions to society, such as offering employment opportunities to their neighbors. Their commercial goals include their efforts to gain economic independence by becoming their own boss. In doing so, these entrepreneurs forge their own opportunities to build wealth and legacy. Lastly, in terms of meeting neighborhood needs, these entrepreneurs are improving the quality of life of their fellow citizens by filling gaps for Black consumers and providing goods and services that reflect everyday needs and interests. In doing so, they help keep dollars circulating within these neighborhoods rather than supporting businesses owned by outsiders with no apparent ties to the neighborhood. The informal conversations with study participants reaffirmed the existence of many barriers to starting and running a business. Black entrepreneurs face significant inequities in the business world, with direct consequences for the communities they serve, especially in poor Black-majority neighborhoods. These inequities include lost job opportunities and inadequate access to essential goods and services that other communities take for granted, all contributing to a declining quality of life. At the macro level, they undercut the potential of millions of people of color to bolster the country's economic system and to address many injustices. The informal conversations also reaffirmed the study's findings regarding what these business owners need to overcome these barriers: technical assistance in creating new businesses, including access to resources and business opportunities.
Participants also identified the need to join robust business networks that offer them representation and participation in networking, mentorship, and sponsorship programs to help them overcome some sociocultural barriers. Thirdly, access to capital continues to be a significant barrier. In our study, the primary sources of financing were self-financing and financing from family and friends, followed by financing from commercial banks. Since Black business owners tend to have fewer resources, personally or through family and friends, they depend more on outside funding.
The study explores how social entrepreneurship is shaped by its environment and by particular institutional arrangements and contextual factors, focusing on the nature of engagement and participation by business owners in Indianapolis' fourteen Black-majority neighborhoods. The study seeks to answer two research questions: a) In what ways can these small business entrepreneurs be considered social entrepreneurs, and b) how do these small business entrepreneurs overcome existing barriers to contribute to the revitalization of their neighborhoods? The study uses an exploratory research design involving an online structured questionnaire and informal conversations with study participants. Selected participants met three criteria: a) geographic sampling, b) institutional context, and c) different business typologies. Findings from the survey and the informal conversations point to the daily struggles many small businesses face as they strive to survive, including limited access to financial and human capital, low revenue, and operating with few employees.
INTRODUCTION Globally, the involvement of adolescents in risk-taking behavior has reached an alarming level. In Sub-Saharan Africa, where more than eight out of 10 of the world's HIV-infected adolescents live, the issue of risk-taking is worse. Tull et al. define risk-taking behavior as the "tendency to engage in behaviors that have the potential to be harmful or dangerous, yet at the same time provide the opportunity for some kind of outcome that can be perceived as positive." The major problematic risky behaviors among young people, both in and out of school, have been reported to include tobacco, alcohol, and illicit drug use, risky sexual behavior, and self-harm. There have also been increasing reports of high school students' engagement in non-suicidal self-injury, which has exacerbated the incidence of youth suicide. Based on vulnerability to risk-taking, adolescents, best defined as young people aged 10-19 years, have been labeled the group most susceptible to the adoption of risky behaviors. This is because this challenging developmental period is marked by increased levels of curiosity and self-doubt, which heighten the potential for engaging in risk-related activities. Additionally, significant physical, cognitive, and psychological changes, as well as sexual development, occur, which prompts sexual experimentation. In developing young men, sexual drives are awakened, which fuels the likelihood of engagement in risky sexual behavior. Adolescent girls experience their first menstruation, their bodies mature and become fertile, and the sexual urge emerges and intensifies. Furthermore, adolescents' cognitive ability is marked by concrete thinking, wherein the long-term implications of actions are not perceived.
Adolescents also have elevated levels of egocentrism, a state of heightened self-consciousness, as well as elevated levels of attention-seeking behavior as they attempt to be noticed or visible and consider themselves unique, all of which is linked to elevated risk-taking behavior such as drug use and suicide. Psychologically, adolescence is the period of identity formation, integration, and commitment, during which adolescents conform more to peer pressure in their quest to discover who they are outside their parents, making them more prone to risk adoption. The subject of risk-taking behavior among the youth of Swaziland, where the study was conducted, does not differ from global trends. The United Nations Population Fund-Swaziland reported that almost 30% of out-of-school youth and 20% of in-school youth reported drinking alcohol and had engaged in sexual intercourse, coupled with low levels of condom use at first sex, and 45% reported early childbearing experiences, increasing teenage pregnancy. The high HIV mortality rates, which take the parents who are the most productive members of society and leave many adolescents to grow up in child-headed families prone to exploitation, sexual abuse, poverty, and unwanted pregnancies, greatly contribute to this phenomenon. These high HIV mortality rates in Swaziland disintegrate the family structure and take away from many young people the opportunity of growing up in the traditional two-parent family, which is held to possess qualities such as parental warmth and guidance that act as a buffer against risk adoption such as self-harm behavior, sex work, and substance abuse. Alcohol and other drug use among adolescents remains one of this age cohort's most prominent risk-taking behaviors and has been associated with social issues such as crime, including sexual offenses, grievous bodily harm, assault, and murder, as well as gang activities, vandalism, bullying, and truancy within school premises.
Additionally, alcohol is alleged to increase the risk of engagement in sexual risk behavior, leading to sexually transmitted infections and unplanned pregnancies, which in most cases lead to unsafe abortions, the leading cause of death for women aged 15-19 worldwide. There is also a link between substance use and self-harm behavior in adolescents, with female adolescents being three times more likely and male adolescents 17 times more likely to attempt self-harm while under the influence of alcohol. It is against this background that the researchers undertook the study to explore the association of family structure and history of childhood trauma with risk-taking behavior. --- MATERIALS AND METHODS --- Procedure Permission to conduct the study was first obtained from the ethics committee of the North-West University. Further permission was obtained from the Ministry of Education in Swaziland and from the principals of the participating schools, which also included permission from the subject teachers whose class time would be used for data collection. Learners were provided with the study information by the researcher and their teachers. They were given letters containing the study information and requests for permission to give to their parents or guardians. In the letters, parents or guardians were asked to sign consent forms indicating that they allowed their children to take part in the study. Only upon receipt of signed consent forms from parents were learners requested to sign assent forms. The information provided to the learners before signing the assent forms indicated that participation was voluntary and that they were free to withdraw from the study at any time. Participants were assured that a high degree of privacy and confidentiality would be maintained and that participation was anonymous. The participants were given a brief guideline on how the questionnaires were to be completed.
The questionnaires included written instructions on how the questions were to be answered and stated that there were no right or wrong answers. Questionnaires were completed by the learners in paper-and-pencil format, in the presence of the researcher for clarification purposes, during designated times. Participants completed the questionnaires during a free period so as not to interrupt class lessons. --- MEASURES Biographic Information The questionnaire included a section collecting participants' demographic information, such as age, gender, race, school location, and grade. --- Family Structure Family structure is defined as a group consisting of parents and their children or any other persons related by blood or marriage. Family structure was measured through assessment questions inquiring about the number of biological parents a participant has or is living with (both/one/none), as informed by the literature. --- Childhood Trauma Questionnaire Childhood trauma is defined as experiences of abuse, neglect, and household dysfunction of varying frequency, severity, and duration before the age of 18. Childhood trauma was measured using the Childhood Trauma Questionnaire (CTQ), a 28-item self-report measure designed to assess five types of negative childhood experiences: emotional neglect, emotional abuse, physical neglect, physical abuse, and sexual abuse. Three additional items assess respondents' tendencies to minimize or deny abuse experiences. Respondents rate the truth of each statement on a 5-point Likert scale ranging from 1 (never true) to 5 (very often true). The CTQ has demonstrated reliability and validity, including test-retest reliability coefficients ranging from 0.79 to 0.86 over an average of 4 months and internal consistency reliability coefficients ranging from a median of 0.66 to a median of 0.92 across a range of samples.
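The internal consistency coefficients reported above are Cronbach's alpha values. A minimal sketch of how alpha is computed from an item-response matrix, using simulated (not CTQ) data, looks like this:

```python
# Minimal sketch of Cronbach's alpha for a multi-item scale. The data
# below are simulated for illustration; this is not the CTQ itself or
# the study's dataset.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]                            # number of items
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                         # shared trait
items = latent + rng.normal(scale=0.8, size=(200, 4))      # 4 correlated items
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")   # high when items intercorrelate strongly
```

Alpha rises as the items share more variance; values around 0.7 or above are conventionally read as acceptable internal consistency, which is the benchmark the pilot coefficients reported here are judged against.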
This scale had not been used in Swaziland before, so a pilot study was first conducted to validate its reliability in the Swazi population. A Cronbach's alpha coefficient of 0.77 was obtained for the scale in the pilot study. The Risk-Taking and Self-Harm Inventory for Adolescents Risk-taking behavior refers to the tendency to engage in behavior that has the potential to be harmful or dangerous. The RTSHIA was adopted to assess risk-taking behavior among adolescents in Swaziland. It is a self-report measure designed to assess adolescent risk-taking and self-harm. The scale consists of 38 items rated on a four-point Likert scale from 0 (Never, if the statement does not apply) to 3 (Many times). The RTSHIA has high reliability for both components, with Cronbach's alphas ranging from 0.85 to 0.93 and from 0.87 to 0.90. Since this scale had not been used in Swaziland before, a pilot study with a smaller sample was conducted before the main study, yielding a Cronbach's alpha coefficient of 0.81 for the risk-taking subscale and 0.84 for the self-harm subscale. --- Statistical Analysis The IBM Statistical Package for the Social Sciences (SPSS) was used to analyze the data. One-way analysis of variance (ANOVA) was used to examine whether risk-taking behavior differed between adolescents from single-parent or child-headed households and those from two-parent households. Additionally, a post-hoc test was used to further explore the differences between group means. The independent samples t-test was used to examine whether there were significant differences in risk-taking behavior between adolescents with and without a history of childhood trauma. Statistical significance was set at p < 0.05 for all tests.
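The analysis pipeline described above (omnibus one-way ANOVA, follow-up comparisons between family-structure groups, and an independent-samples t-test for trauma history) can be sketched in Python. The scores below are invented for illustration, scipy stands in for SPSS, and plain pairwise t-tests stand in for the (unspecified) SPSS post-hoc procedure:

```python
# Sketch of the statistical pipeline with hypothetical risk-taking
# scores (not the study's data): one-way ANOVA across three family
# structures, simple pairwise post-hoc comparisons, and an
# independent-samples t-test for trauma history.
from itertools import combinations
from scipy import stats

scores = {
    "child-headed":  [30, 28, 33, 35, 29, 31, 34, 32],
    "single-parent": [26, 24, 27, 25, 28, 23, 26, 27],
    "two-parent":    [20, 22, 19, 21, 23, 18, 20, 22],
}

# Omnibus test: do mean risk-taking scores differ by family structure?
f, p = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f:.3f}, p = {p:.4f}")

# Simple pairwise comparisons as a post-hoc step. SPSS offers corrected
# procedures such as Tukey's HSD; uncorrected t-tests are used here only
# to keep the sketch dependency-free.
for g1, g2 in combinations(scores, 2):
    t_pair, p_pair = stats.ttest_ind(scores[g1], scores[g2])
    print(f"{g1} vs {g2}: t = {t_pair:.2f}, p = {p_pair:.4f}")

# Trauma-history comparison (hypothetical groups).
trauma = [33, 30, 35, 31, 34, 32, 29, 33]
no_trauma = [24, 26, 23, 27, 25, 22, 26, 24]
t, p_t = stats.ttest_ind(trauma, no_trauma)
print(f"t-test: t = {t:.3f}, p = {p_t:.4f}")
```

A significant omnibus F only says that at least one group mean differs; the post-hoc step is what localizes the difference, which is why the study follows its ANOVA with multiple comparisons.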
--- RESULTS --- Sample Characteristics --- Family Structure and Risk-Taking Behavior The type of family structure significantly influenced risk-taking behavior among adolescents [F = 5.481; p < 0.004]. Adolescents from child-headed and single-parent families reported higher risk-taking and self-harm behaviors. Post-hoc multiple comparisons were subsequently computed because family type had a significant effect on adolescents' risk-taking behavior. The post-hoc results in Table 2 showed that adolescents from child-headed families reported significantly higher risk-taking behavior than those from single-parent families and those from two-parent families. --- Childhood Trauma and Risk-Taking Behavior The results in Table 3 show a statistically significant difference in risk-taking behavior between adolescents with and without a history of childhood trauma (t = 3.409, p < 0.001). Adolescents with a history of childhood adversity reported significantly higher risk-taking behaviors than those without. --- DISCUSSION This study aimed to explore the association of family structure and history of childhood trauma with risk-taking behavior among a sample of 470 adolescents in Swaziland. The study revealed that adolescents from child-headed families, followed by those from single-parent families, engaged in higher risk-taking behavior than those from two-parent families. These results coincide with past literature. For example, Kheswa and Tikimana concluded that, due to the death of parental figures mostly as a result of the HIV/AIDS epidemic, many adolescents are left in child-headed households and tend to engage in risk-taking behavior such as unhealthy sexual practices and alcohol abuse.
Additionally, in Swaziland, it has been found that one of the underlying factors promoting alcohol and substance use is the lack of parental guidance due to high HIV mortality rates, which leave most children parentless and thus more prone to risk adoption. This can be explained by the fact that parents serve as a source of external monitoring throughout childhood and adolescence. In the early years, parental monitoring is necessary as a source of protection for children. But even as children mature, high levels of parental monitoring remain an important factor predicting adolescent health behaviors such as drug use, as well as other behavioral problems. Family members form the foundation for close, important relationships throughout childhood and adolescence, which act as a protective barrier against risk adoption. However, the pattern of interaction is such that an association between a lack of protective resources, like parental figures, and poorer functioning is most clearly evident for youth with relatively low assessed exposure to adversity. A study by Foster, however, contradicts this finding, having uncovered that most children from child-headed families in Africa are still cared for by members of their extended family, called the traditional safety net for orphans, which serves as a protective barrier against vulnerability to risk adoption. Additionally, this study found that a greater percentage of the participants live in single-parent households, mostly with their biological mothers, and that this was the second-largest group prone to risk adoption. Bird et al. concur with these findings, having found that Mexican adolescents from single-parent families were 2.0 times more likely to be current smokers and experienced less, or even careless, supervision compared with those from two-parent households.
Single mothers experience greater parenting stress and have less time and assistance in supervising children, as well as less time to develop and maintain the supportive bonds that help regulate children's behavior. However, a study by Zisk et al. found that throughout childhood, adolescence, and even into the college years, parents, primarily mothers, remain the most frequently identified primary attachment, nurturing, and protective figures for youth. Contrary to that finding, other studies hold that the absence of an important potential source of guidance, nurturance, and support can increase the likelihood of both substance use and violence-related behavior among youth. This indicates that not having a parental figure, regardless of whether mother or father, predisposes youth to risk-taking behavior. Also, "risky families" are said to be more likely to have children with disruptions in stress-responsive biological systems and poorer health behaviors. The results of the study further indicated that adolescents from two-parent households reported less risk-taking behavior compared with other family structures. These findings are supported by the work of previous writers, who documented that adolescents from such households reported a delay or reduction in engagement in sexual activity and reduced levels of self-harm compared with those from step-parent, single-parent, or no-parent groups, and receive higher levels of parental closeness and monitoring, dimensions of family structure that act as a buffer against the adoption of risk-taking behavior in adolescents. Moreover, emerging data suggest that during childhood and adolescence, close family relationships can ameliorate the impact of adversity on lifespan physical and mental health. The study findings indicate that adolescents who reported a history of childhood trauma have been engaging in risk-taking behavior.
These findings coincide with arguments from criminological theories such as general strain theory, which holds that recent adversities are more likely to be associated with maladaptive behavior such as delinquency, substance abuse, or criminal behavior because victims use these methods to cope with traumatic stress. Furthermore, several past and present studies validate the result that exposure to adverse childhood experiences subsequently leads to the adoption of smoking and early alcohol use among adolescents. An alarming rate of suicide attempts and non-suicidal self-harm has also been attributed to early childhood trauma. The experience of adversity in early life is likewise associated with increased risk for earlier onset of physical disease and of emotional, nervous, or psychiatric disorders, especially later in life, which shows the lasting legacy of childhood adversity not only for maladaptive behavior but also for disease risk in later life. Childhood adversity may also affect the child's ability to connect with school, a critical influence on an adolescent's development. In turn, failure to bond with the school could increase the risk of deviant behavior and psychological distress. The association between a history of childhood adversity and risk-taking behavior can be understood as a result of negative psychosocial experiences at both the micro and macro levels, consisting of a negative caregiving environment, family context, community environment, and societal environment. Exposure to such adverse events or environments in childhood is said to be particularly harmful, as early childhood is an exceptionally salient period for the further development of psychological well-being. --- CONCLUSION This study shows that family structure and history of childhood trauma play a role in adolescents' engagement in risk-taking and self-harm behavior.
The researchers therefore recommend that at-risk adolescents, i.e., those who come from child-headed and single-parent families and those with a history of childhood trauma, receive intervention before engaging in risk-taking behavior. They also recommend that life skills training programs be included in the school curriculum to help empower learners in general. --- STRENGTHS AND LIMITATIONS The unique sample, which included adolescents from different schools with different population characteristics, is a strength of this study. The scales were used for the first time in Swaziland and were found to be valid and reliable, which can serve as a point of reference for other researchers in future studies. However, due to the cross-sectional nature of the study, causality cannot be inferred, and the findings cannot be generalized to the whole of Swaziland. Additionally, given that the participants were selected through simple random sampling, the researchers lacked knowledge about the population and thus had no control over extraneous factors. The measures were self-report in nature. Additionally, cluster sampling was used to divide the school population into more manageable units or clusters, which might not have been an accurate representation of the entire population. Data from the primary caregivers would have been beneficial to further understand the family dynamics. --- IMPLICATIONS FOR FUTURE RESEARCH The findings of this study indicate that future studies are necessary to understand the dynamics of risk-taking behavior among adolescents. Future studies could explore the role played by parenting styles, poverty, and exposure to violence in risk-taking behavior. --- DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. --- ETHICS STATEMENT The studies involving human participants were reviewed and approved by the North-West University Ethics Committee.
Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Copyright © 2020 Maepa and Ntshalintshali. This is an open-access article distributed under the terms of the Creative Commons Attribution License. The use, distribution or reproduction in other forums is permitted, provided the original author and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Introduction: Risk-taking and self-harm behavior among adolescents are a global challenge. This study explored family structure and history of childhood trauma and their association with risk-taking and self-harm behaviors among adolescents in Swaziland. Methods: Using a cross-sectional design, a sample of 470 male and female adolescents was drawn through simple random sampling from selected high schools in Swaziland. They completed a questionnaire assessing family structure, history of childhood trauma, and risk-taking and self-harm behaviors. Analysis of variance and the t-test were used to analyze the results. Results: The findings revealed that family structure significantly influences risk-taking and self-harm behavior among adolescents [F(2,247) = 5.481; p < 0.004]; those from child-headed and single-parent households reported higher risk-taking and self-harm behaviors. The results also revealed adolescents with a history of childhood trauma to be greater risk-takers than those without such a history, t(468) = 3.409, p < 0.001. Discussion: The results suggest that family structure and history of childhood trauma have a significant association with adolescents' risk-taking and self-harm behaviors.
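The independent-samples t-test reported above can be illustrated with a from-scratch computation of the pooled-variance t statistic. The scores below are synthetic examples, not the study's data, and the helper name is hypothetical:

```python
# Hedged sketch: an independent-samples t-test like the one reported
# (t(468) = 3.409), computed with the standard pooled-variance formula.
# The score lists below are synthetic; they are NOT the study's data.
import math
from statistics import mean, variance

def pooled_t(group_a, group_b):
    """Return (t statistic, degrees of freedom) for two independent samples."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    # Pooled variance assumes equal population variances.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (mean(group_a) - mean(group_b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

trauma = [22, 25, 27, 24, 26, 28, 23, 25]      # risk-taking scores (synthetic)
no_trauma = [20, 21, 23, 19, 22, 21, 20, 22]
t, df = pooled_t(trauma, no_trauma)
print(f"t({df}) = {t:.3f}")  # t(14) = 4.733
```

The significance of the resulting t value is then assessed against the t distribution with the given degrees of freedom, as the authors did for their sample of 470.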
INTRODUCTION Humans are more or less social depending on personal circumstances; numerous studies have demonstrated the health and subjective well-being benefits of having a strong and active social network. [1][2][3] In the era of instant communication, this can be extended to the virtual world, and thus participation in Online Social Networks (OSNs) has a similar effect. [4,5] Health problems, especially chronic diseases, are a circumstance that usually changes how a person relates to others, [6,7] influencing their social capital. [8] From these new needs, and because clinicians are becoming more involved in social media, [9] there are many OSNs dedicated to health that have an actual impact on patients. [10] These Health Social Networks (HSNs) can be thematic, sometimes using broad public [11] or generalist OSN platforms. In any case, they usually share similar objectives and benefits, behaving as: a point of contact between patients and health professionals, a data collection resource for companies and/or researchers, a tool for patient tracking and rehabilitation, a means of increasing interaction with others, an instrument for creating awareness and prevention, a source of health information, and public health surveillance with the potential to influence health policies, among other implications. Many reviews in the literature have analyzed the existing HSNs: Moorhead [12] concludes that the main identified benefits are: increased interactions with others that could fulfill practical and emotional needs; more available, shared, and tailored information; increased accessibility and wider access to health information; peer/social/emotional support; public health surveillance; and the potential to influence health policy. Such studies have also targeted a number of HSN limitations and unmet needs, which are summarized below and grouped by six areas recommended in the literature: · Privacy, security, and transparency.
HSNs are vulnerable to risks arising from sharing information online and the consequences for confidentiality and privacy, such as: low accessibility of privacy policies, poor communication and control of privacy risks, lack of user control over personal data-sharing, risks of centralized sharing of user data, lack of user education in maintaining confidentiality and privacy, and lack of information on the use of credentialed moderators. Furthermore, the openness and transparency of HSNs, especially in relation to commercial content and commercial users, is not guaranteed. · Validity assessment. Traditionally, HSNs have limitations on conducting controlled trials to determine their relative effectiveness and longer-term impact on: supporting the patient-health professional relationship; enhancing general public, patient, and health professional interpersonal communication; leading to behavior changes for healthy lifestyles; and evaluating the impact of online support interventions. · Design methodologies. User empowerment, design features, interactivity, and awareness of social context are needed in the informatics systems designed for patients, their families, and their communities. Furthermore, social media has limited impact for health communication in population groups with special needs, and it is difficult to find facilitators for a self-managed health website. · System ecology. It is necessary to engage key stakeholders, target community-wide outcomes and the participation of local community groups, employ ecological systems theory and the principles of Community-Based Participatory Research (CBPR), address the interdependence between online and real-world support, and address a person's existing social networks. · Quality of Service (QoS). HSNs need a patient-centered perspective in presenting content and information to guarantee timely and personalized care.
It would be convenient to conduct periodic external reviews of member discussions to avoid misinformation, providing effective moderation support. Furthermore, the veracity of the information is essential to ensure that the contents correspond with professional recommendations. · Technology enhancement. HSNs may be used in a synergistic manner with personal health records, smart devices, and more sophisticated and emerging interactive tools.
--- Design methodologies
· Need for user empowerment, interactivity, and awareness of social context: Weiss, [17] Orizio, [13] Nambisan, [18] Huang, [19] Al-Kadi. [16]
· Limited impact of social media for health communication in special-needs populations: Moorhead, [12] Nambisan. [18]
· Barriers to using a self-management health website: Yu. [20]
--- System ecology
· Need to engage key stakeholders to balance autonomy and community ownership: Weitzman. [15]
· Need for target community-wide outcomes and local community group participation: Weiss. [17]
· Need to employ ecological systems theory and CBPR: Weiss. [17]
· Need to address the interdependence between online support and real-world support: Wellman. [21]
· Need to address a person's existing social networks: Weiss. [17]
--- Quality of Service
· Lack of a patient-centered perspective in presenting content and information: Yu. [20]
· Need to enlist periodic external review of samples of member discussions: Weitzman, [15] Al-Kadi. [16]
· Lack of truthful information and timely and personalized care: Yu. [20]
· Alignment of content with science and professional practice recommendations: Weitzman. [15]
--- Technology enhancement
· Need for integration with personal health records and mobile devices: Laranjo, [10] Al-Kadi. [16]
· Need for the development of more sophisticated and emerging interactive tools: Huang.
[19] The objective of this work is to translate these limitations and unmet needs into challenges by contributing the design, development, and assessment of a new concept: a micro ad hoc HSN (uHSN), to create a social-based solution for supporting patients with chronic diseases. The Background and Materials and Methods sections detail the research carried out in the design process of the proposed methodology, how the uHSN was evaluated, what quantitative and qualitative measures were obtained, and finally, how it was tested for use in practice and replicated by the scientific community in its research. The Results section reports an in-depth analysis of the obtained results from quantitative, descriptive, and interpretative perspectives. Finally, the Discussion section contains a critical discussion that supports our conclusions and proposes further studies. --- BACKGROUND. Design and implementation From the challenges posed above, we formulated the following design criteria: · Privacy, security, and transparency: to ensure the security of personal information, including specific modules for privacy, confidentiality, transparency, and authentication. · Validity assessment: to create an entire validation methodology prior to the implementation stage to guarantee user needs, and to encompass all user profiles. · Design methodologies: to focus on user empowerment with awareness of social context, enhancing interactivity and self-management. · System ecology: to build an architecture that includes all involved profiles with their contexts and relationships. · QoS: to incorporate mechanisms for timely and personalized care, guaranteeing truthful information through scientific validation by users with specialist profiles. · Technology enhancement: to integrate software modules that allow the implementation of all the proposals into multi-platform solutions compatible with smart devices, interactive tools, and health information systems.
The first approach considered when developing the uHSN was to work with an existing social network, such as Facebook or Google Plus. [11] However, according to our design criteria, this was not suitable, as it involved remarkable limitations related to the connection of hardware devices, permissions, functionality, privacy, and accessibility. Thus, to fulfill the previously proposed design criteria, the uHSN was supported on a plugin-based architecture built on the Elgg platform. It follows a Model-View-Controller (MVC) pattern and allows the development of custom plug-ins and the application of a Graphical User Interface (GUI) using Cascading Style Sheets (CSS) markup. These decisions were justified because the Elgg platform includes specific modules that provide many of the required functionalities, such as: confidentiality, transparency, and authentication modules to guarantee privacy; messaging, blog, and discussion modules to guarantee user empowerment and interactive relationships among profiles; and validation and multi-user modules to guarantee truthful information. In addition, the Elgg architecture allows the easy development and integration of custom modules to create new, self-developed functionalities. Indeed, the Elgg platform hosts a rich development community, which provides additional modules that can be used to extend the uHSN's functionalities.
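The plug-in mechanics just described, where custom modules override a platform's default behavior by registering replacement views, can be sketched as follows. Elgg itself is a PHP platform; the Python below is only a language-agnostic illustration of the view-override idea, and all names in it are hypothetical:

```python
# Minimal sketch of the plug-in pattern used by platforms such as Elgg:
# plug-ins register view overrides by name, and the framework resolves
# the most recently registered renderer. All names are hypothetical.

class PluginRegistry:
    def __init__(self):
        self._views = {}  # view name -> render callable

    def register_view(self, name, renderer):
        """A plug-in provides (or overrides) a view by name."""
        self._views[name] = renderer

    def render(self, name, **context):
        if name not in self._views:
            raise KeyError(f"no renderer registered for view '{name}'")
        return self._views[name](**context)

registry = PluginRegistry()

# The core platform ships a default profile view ...
registry.register_view("profile", lambda user: f"Profile of {user}")

# ... and a custom plug-in overrides it, e.g. to present a layout
# tailored to a given user profile.
registry.register_view("profile", lambda user: f"Profile of {user} (patient layout)")

print(registry.render("profile", user="alice"))  # Profile of alice (patient layout)
```

The same override mechanism is what later allows the platform components shown to each user to depend on that user's profile.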
Furthermore, this technological proposal is innovative and fully compatible with smart devices, interactive tools, and Health Information Systems, as shown in the following modules: a Health Level 7 (HL7) module, which allows the exchange of HL7 messages with external entities through the Mirth Connect middleware and Electronic Health Record access; [22] a Sharable Content Object Reference Model (SCORM) module, which allows access to SCORM courses on the SCORM Cloud online learning platform; [23] a Devices module, which allows data registration from user monitoring devices, such as the Zephyr heart rate monitor; [24] and a Treatments module, which allows the definition and assignment of custom therapies to users, among others. In addition, the proposed Elgg platform provides a Representational State Transfer (REST)-based web service Application Programming Interface (API), which allows external third-party agents to access the platform and specifically facilitates the monitoring devices connector and a memory-like game used in some therapies. Moreover, the visual style of the platform uses a Responsive Design approach, [25] which ensures proper visualization on devices of any size. The novelty of the technology involved, and the lack of references in this area, warranted this flexible design process. Furthermore, we faced the challenge of designing an ad hoc interaction map and GUI, and finally developing our own system platform. Figure 2 represents the stakeholders included as users, distributed in two conceptual interaction areas: the "backstage" and the "onstage". Inside these two interaction areas and transverse to them, eight network spaces segmenting specific groups of interaction with different objectives and subjects were considered.
These network spaces are based on the interaction groups in the real world, in line with the recommendations of Weiss et al., [17] and extend them to provide the benefits of Information and Communication Technologies; they also constitute the central concept of community, an ad hoc methodology oriented to uHSN design. Each of these network spaces is implemented in the uHSN through a group, which behaves as a restricted-access area; content related to the network space is available only to the members of said group. Figure 3 shows an example with three proposed groups: direct attention of a patient, direct care professionals, and all patients. For any group, an administrator should be selected and be responsible for deciding which users can access each group and for defining which uHSN components are enabled for each group. Some of these components can also be used outside the context of any group, and the user can define the access level for the related content. For example, a patient can upload some images and share them with friends. Other components, such as messaging or the user profile, are not related to groups but to the users directly. Another important aspect of the platform that differs from mainstream HSNs is profile customization. The uHSN functionality depends on the user profile. Thus, we developed a customized module to determine what platform components are presented to each user according to their profile, mostly overriding the Elgg default view-generation mechanisms. Finally, it is important to highlight two contributions. First, this proposed project methodology, platform architecture, and design of network spaces is an open contribution to the scientific community for replication in its research studies, with the required adaptation to every health context, even in environments with highly heterogeneous profiles.
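The group mechanics just described — restricted-access network spaces, an administrator who grants membership and enables components, and per-profile component visibility — can be sketched as a small access-control model. This is an illustrative sketch, not the project's actual code; all group, profile, and component names are hypothetical:

```python
# Illustrative model of restricted-access groups: an administrator admits
# members and enables components, and a user's profile further limits
# which components that user sees. All names are hypothetical.

class Group:
    def __init__(self, name, admin, components):
        self.name = name
        self.admin = admin
        self.members = {admin}
        self.components = set(components)  # components enabled by the admin

    def grant_access(self, requester, user):
        # Only the group administrator can admit new members.
        if requester != self.admin:
            raise PermissionError("only the admin can grant access")
        self.members.add(user)

    def can_view(self, user, component):
        # Group content is visible to members only, and only through
        # components the administrator has enabled for this group.
        return user in self.members and component in self.components

# Per-profile visibility: which platform components each profile is shown.
PROFILE_COMPONENTS = {
    "patient": {"forum", "images", "messaging", "treatments"},
    "therapist": {"forum", "images", "messaging", "treatments", "reports"},
    "relative": {"messaging", "reports"},
}

care_group = Group("direct_attention_p01", admin="coordinator",
                   components={"forum", "reports"})
care_group.grant_access("coordinator", "relative_01")

# A relative sees the group's reports, but not its forum, because the
# "relative" profile does not include the forum component.
visible = [c for c in care_group.components
           if care_group.can_view("relative_01", c)
           and c in PROFILE_COMPONENTS["relative"]]
print(visible)  # ['reports']
```

The intersection of group-enabled components with profile-enabled components mirrors the two layers described in the text: the admin controls the group, and the profile-customization module controls the view.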
Furthermore, with a suitable data-mining process, every platform module allows the measurement of diverse qualitative and quantitative data, such as the number of images uploaded, viewing frequency and user profiles, the inter-relationships among these profiles, and the messages and discussions generated. An example of these data of interest is the analysis of the interaction level between profiles. The network spaces implemented as groups include:
· Coordination of association procedures.
3 --- Direct attention management: design of patients' attention plans, patient organization and tracking, etc.
4 --- Departments: some therapists have specific specialties, such as psychology, occupational therapy, and physiotherapy; this group is mainly used to design specific therapies.
5 --- Patient's relatives: N private groups, one for each patient with limited cognitive capacities, where coordinators inform relatives about health issues, periodic reports, etc., and relatives report medication changes and other relevant issues.
6 --- Direct attention execution: N private groups, one for each patient without limited cognitive capacities, where therapists provide periodic reports, send homework, etc., and patients report medication issues and other health-related questions.
7 --- Patients: a private group similar to Facebook, where patients share personal information and pictures.
8 --- Association: informs all associates about events, publishes pictures from events, provides a discussion channel, provides notices, etc.
--- MATERIALS AND METHODS From the previously proposed design criteria, the present work initially followed a strategy built on CBPR to achieve the uHSN paradigm. This methodology is specifically recommended for designing, implementing, and evaluating social support in online communities, networks, and groups. [13][40] Additionally, due to the complexity of profiles and relationships, we used a set of user-oriented methodologies such as Cultural Probes, [14] Personas, [15] Blueprint, [16] and Wizard of Oz.
[17] The project ecosystem included interdisciplinary professional profiles working within scenarios of high user-profile heterogeneity, and the complex relationships among them: patients, relatives, carers, therapists, etc. From this conceptual scope, the specific contributions presented in this paper are the result of the assessment of the proposed uHSN in two health communities related to Parkinson disease and Stroke. The former was coordinated by a university and involved three departments, a Parkinson patient association involving five professional profiles, and a software company that contributed a customized development in this scenario. The Stroke community was connected to a hospital involving clinicians, therapists, and technical staff, a second university, and two companies that worked on customized development and design tasks. Every health community shared a general assessment design according to the Xassess evaluation framework, [30] articulated as multiple case studies and carried out under a CBPR approach. Xassess has several advantages: e.g., it is a structured reference framework that is sufficiently flexible to allow tool and strategy adaptations for each scenario, and it offers generic tools for establishing common ground and for guiding professionals who are not experts in evaluation. Furthermore, both communities shared a common team of software developers to provide the communities with unlimited use of the uHSN during the assessment. The present work contributes the design of a balanced evaluation, including methods that cover three key aspects: considering a multi-referential and integrated perspective; fitting the nature of the key information; and assessing four main objectives: user acceptance of the uHSN, productivity improvement, QoS enhancement, and the fostering of social relations. Table 2 shows the indicators, methods, and findings associated with every assessment objective.
For each method used, our approach was adapted to the indicator and to the strategy of combination with other methods, taking into account the user profile's feasibility, applicability, and adaptation to each specific scenario, and the objective of the evaluation. Following the use of Xassess, we determined the key evaluation factors: purpose and objectives, agents and scenarios, and the methodological approach for each indicator; for each indicator, we assessed the singular demands of each user profile, and budget, time, and personnel constraints, and we ensured that each evaluation dimension was covered by the combination of quantitative and qualitative methods, providing different perspectives of the same reality: · Data logs provided information on two levels. Raw data analysis of 17,300 log records segmented the data according to profile, type, categories, and user relationship by matching the cross-relations, using visual and interpretable trends to provide quantitative information on who used the uHSN, when, and for what. On another level, and as a complement to the quantitative data, a content analysis was carried out under the lens of ethnographic research, and confirmed real and meaningful use and effective user interaction. · Surveys included several closed and open items and Likert scales to obtain perceptions of the four assessment objectives: 1. User acceptance of the uHSN: How often did I use the platform? How complicated was learning how to use the platform? How did I like the documentation for the platform? How did I like the visual/functional design of the platform? How would I define the different functions of the platform? The first three questions are direct references, while the last two are indirect questions of a more qualitative nature; in this case, the objective was to triangulate the results of both blocks with the aim of reducing effects such as the Hawthorne Effect. [42] [43] 2.
Productivity improvement: How often did I use messaging to communicate with patients? How much time did I spend on social health issues? And on platform administration? 3. QoS enhancement: [for patients] Did the platform improve the quality of the information I have about my illness? Was the information I obtained credible? Was the information I obtained relevant? Did I obtain the information I needed quickly? Did the platform improve the management of my medication? Did the platform ease the performance of therapies at home? Did the platform establish new communication channels between me and professionals? [for professionals] Did the platform improve the management of the patient's history? Did the platform establish new discussion channels to discuss patient intervention? Did the platform improve the management of patient medication? Was the information I obtained credible? Was the information I obtained relevant? Did I obtain the information I needed quickly? Did the platform establish new communication channels between me and patients? Did the platform increase the performance of patient therapies at home? 4. Fostering of social relations: How often did I exchange messages with other patients? How often did I participate in the forum, contribute to image galleries, or leave comments? Do I believe that private forums, discussion groups, and image galleries are positive for me? How do I feel about the relationship between patients and professionals? Do I think that having forums, discussion groups, image galleries, etc. with professionals improves their attention/our relationship? How do I feel about the relationship with other people in my situation? How do I empathize with other members of the community? Do I think that the other members of the community were sincere? Would I be willing to provide support to other users? Did I find users with whom I could share joys and sorrows? Did I talk about my problems with the disease with other users?
Did the members of this community behave like me, think like me, or have a health situation similar to mine? · Interviews and focus groups were conducted by the same experienced researcher, following a structure of two main blocks: a general phase, followed by individual discussions on the specific topics of every user profile. The interviews and focus groups were interrelated: the focus groups were carried out and analyzed first, and specific participants were subsequently selected to attend in-depth interviews on certain aspects. The criterion for interview selection was the communicative and critical capacity the participants had demonstrated in the focus groups, with slight nuances for the professional profile, where selection was largely determined by their availability in terms of time. All interactions were recorded and later analyzed by two researchers with expertise in health and in user interaction (the latter with expertise in ethnographic methodologies) to build consensus in two key stages: initial narratives, to share anecdotes and experiences about the use and adoption of the uHSN within the community; and explanations and details of the more relevant key issues, to remark on the functionalities, utilities, milestones, and positive changes the uHSN contributes to each user profile. The individual discussions differentiated between patients and health professionals. For patients, the interviews included questions and prompts such as: What have you missed in the platform? Any fear or concern? These were used to obtain specific information about privacy, confidentiality, and integrity, among other topics of interest. For health professionals, the interviews included questions and prompts such as: The most difficult task you have found on the platform has been...; For mass use of the platform, it is necessary... These were used to obtain specific information on obstacles/resistances, productivity impact, efficiency, and efficacy, among other topics of interest.
The project's complexity and the different scenario locations clearly influenced the method selection, as some experiments would not be performed by the same person, and the evaluators had differing expertise. Therefore, the criterion was to select more traditional evaluation methods with which all evaluators would have previous experience. To understand the method selection, it is also important to consider its character as a "final evaluation," which had been preceded by other product evaluations in previous iterations. In those intermediate evaluations, the selected methods combined assessment with innovation and were more open to emerging issues. In this case, the methods focused more on validating the usefulness of evaluation as a validation and contribution tool. For example, past iterations qualitatively evaluated usability with prototypes using the Wizard of Oz method; [17] evaluators were required to undergo a training course, and we constructed a specific follow-up of the interpretations and results. In the case presented here, i.e., the final evaluation, usability was validated with the mixed-methods approach but in a more concise and quantitative manner, deepening the survey results through the interviews, with the certainty provided by the results of the most recent prototypes designed and redesigned in the prior iterations. --- RESULTS In the present study, we focused on two health communities, Parkinson and Stroke, with different idiosyncrasies and different professional and patient profiles. Based on the main assessment objectives described in the Materials and Methods section, Table 3 presents the key conclusions of the assessments, detailed according to each indicator. It should be noted that some indicators serve or inform diverse evaluation objectives, so there is high interrelation between the conclusions drawn for each point: --- a.
User acceptance of the uHSN · Use of the uHSN and content created: All user profiles participated in a balanced manner in the uHSN. Figures 5-7 show that they used all functionalities of the uHSN for the duration of the assessment: patients personalized their profiles, messaged other users, participated in forums, shared materials and videos, and connected to video conferencing; some patients even managed groups. Remarkably, the patients were quite active and the professionals interacted actively. Professionals messaged patients, monitored their progress, supported the platform, and checked users' usage; one of the most relevant activities was the uploading of therapeutic content as the initial database. · uHSN accessibility and usability: The data we collected on the network access provider and time of day showed that the uHSN was used both from home and from outside the centers; among other things, the data demonstrate that the uHSN was used without professional supervision. There was a balanced distribution among all activity types: mainly medical activity and activity related to content. Although accessibility and usability were positively assessed, with high scores in the intermediate iterations, both communities felt that specific steps in the interface could be improved, e.g., activities such as video uploading, in a more streamlined and direct manner. Nonetheless, the surveys, interviews, and focus groups revealed that the perception of usability is very positive; we highlight this also because most respondents considered the uHSN design aesthetically pleasing. · Effort needed to learn to use the uHSN: The surveys, interviews, and focus groups revealed that the Stroke community considered the learning process easy, whereas the Parkinson collective thought that this aspect could be improved upon. The triangulation of these results with Figures 6 and 7, and some interview answers, demonstrated autonomous and continuous use outside the centers.
We consider that this shows that learning was sufficient to allow users to use the platform. · User attitudes towards the uHSN and emergent issues: The surveys, interviews, and focus groups confirmed globally positive outcomes of uHSN utility in professional management, with a high level of motivation achieved during the 6-month evaluation period. This is aligned with users' notable efforts in content generation, uHSN use during non-working hours, and the discovery and execution of new, unforeseen utilities. · Access to the uHSN: To interpret these results properly, it is necessary to understand the project's ecosystem. The platform was a new resource available to the professionals and they were using it voluntarily; their habitual work duties were not reduced, and platform interaction had to happen outside working hours. Figure 6 shows that there was high activity in the evaluation period, especially in content creation during the first month. The visualization of the hourly access, together with users' feedback, indicates the professionals' commitment to platform use, as it shows that the uHSN was used throughout the day: traffic increased just before the centers opened and just before lunch time. Figure 7 shows that, besides remote use, the uHSN was also used to complement the daily in-person activities, strengthening the traditional health processes in an interactive and online manner. · User interaction through uHSN messaging: Data related to messaging showed very active use: professionals sent 60% of messages and patients 40%. Clearly, the uHSN users relied more on the content uploaded by the professionals, which achieved a greater level of interaction. Furthermore, the professionals were more interactive than patients: each professional reached 100% of users of both profile types, and each patient reached 62.5% of users of both profile types, which is drawn as the interaction ratio on the right axis in Figure 9.
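The interaction ratio reported above — the fraction of other users a given user reached through messaging — can be computed from message logs along these lines. The log records here are invented for illustration; they are not the study's data:

```python
# Hedged sketch: a per-user "interaction ratio" -- the fraction of all
# other users a given user exchanged messages with. The message pairs
# below are invented; they are NOT the study's log data.
from collections import defaultdict

def interaction_ratios(messages, users):
    """messages: iterable of (sender, receiver) pairs."""
    reached = defaultdict(set)
    for sender, receiver in messages:
        # A message counts as an interaction for both ends of the exchange.
        reached[sender].add(receiver)
        reached[receiver].add(sender)
    total_others = len(users) - 1
    return {u: len(reached[u] - {u}) / total_others for u in users}

users = ["pro1", "pro2", "pat1", "pat2", "pat3"]
messages = [
    ("pro1", "pat1"), ("pro1", "pat2"), ("pro1", "pat3"), ("pro1", "pro2"),
    ("pro2", "pat1"),
]
ratios = interaction_ratios(messages, users)
print(ratios["pro1"])  # 1.0  (reached every other user)
print(ratios["pat1"])  # 0.5  (reached 2 of the 4 other users)
```

Aggregating such ratios by profile yields figures of the kind reported in the text, e.g., each professional reaching 100% of users versus each patient reaching 62.5%.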
· Use of paper: The surveys and interviews suggest the great potential of the uHSN for improving this indicator. During uHSN use, professionals estimated a 45% reduction in paper use; this would affect not only economic and ecological issues, but also the time devoted to generating and managing paper. The relatives complemented this idea, noting the convenience of being able to access information at any given moment without needing to rely on a caregiver's handwritten notes, of not needing to be at home to receive letters, and the fact that information would not be lost. · Administration time required to attend to staff issues: The interviews and focus groups showed a very good predisposition of the administrative staff toward adopting the uHSN as a daily framework. Success has to overcome the risk that communities may not have sufficient resources to address an organizational change in the service by and through the uHSN. A very interesting aspect that emerged is that the professionals put unexpected uHSN uses into practice, using it also as a repository for classifying the information a carer must know when working at the center; a section of the uHSN was used as a welcome manual all employees were required to read (upon the arrival of a new employee, time was saved explaining the basics of the work and the center). · Health professionals' time devoted to using the uHSN: As with the staff, the results showed that health professionals had a very good predisposition in this regard. As stated earlier, and in general, change is a risk because it requires political transformation in communities, but unavoidably, these transformations should be considered an investment and one of the most important aspects of capitalizing on the potential of the tool.
For example, this was clearly reflected in the task of managing patient treatment, a very time-consuming task: all professionals highly valued the uHSN's capacity for managing therapies, with options for programming multiple therapy assignations, thereby saving time and resources and improving productivity. New communication channels: The results (Figures 5-7) demonstrated that new communication channels were created and maintained through the uHSN. This was confirmed by all users in the surveys, interviews, and focus groups, who acknowledged that the uHSN enhances personalized assistance and care and recognized its enormous communicative potential for follow-up intervention; one of the most important product values, together with fostering social relations, is remote social and emotional support. Improved patient medication management: The surveys, interviews, and focus groups concluded that remote therapies have great potential, but professionals' implicit perceptions may drive the retention and maintenance of direct intervention and care. For example, the professionals were prudent and cautious when evaluating possibilities for improving a patient's medication management and the information patients and relatives may have about the disease. Improved management of patient history: Patient follow-up is a key issue in recovery and disease control, and the professionals were clear about the uHSN's utility in their professional management. The surveys, interviews, and focus groups deemed the uHSN useful for improving the management of patient history and any patient information. Increased rate of remote therapies followed per patient: Patient perseverance and doing therapies at home is another key issue in recovery and disease control, and we verified that users accessed treatments from outside the centers. As revealed in the interviews and in Figure 7, patients accessed the uHSN before visiting the center, and the uHSN helped them exercise at home after the consultation.
It should be taken into account that, as shown in the focus groups, new formats and languages are required for creating certain online therapeutic content, which can open interesting new lines of research. Of course, as already mentioned, the tool is not meant to substitute face-to-face therapy, but to complement it. New discussion channels for preparing patient intervention: Until now, professionals' discussion channels had mainly been face-to-face meetings; the evaluation revealed remarkable online inter-professional communication. Indeed, the number of interactions established among professionals was almost three times that of professional-patient interactions. As marked by the blue line, each professional interacted with all the other professionals and with 79% of patients; this does not mean that 21% of patients were ignored, but that other professionals attended to them. Following this interaction ratio, each patient interacted with 82% of professionals and with 21% of patients. This was because many relationships replicated the typical structure of organizations: there are usually more patients than professionals, and patients are often attended to by several professionals. This was reinforced by 63% of the survey answers, showing the clear contribution of the uHSN in this scope. Emergent issues: The results revealed very relevant scenarios in which direct attention is difficult, for example, delocalized patients or patients who cannot or do not want to visit the therapy centers for emotional reasons. In these cases, the uHSN constitutes a big leap, not only in the quality of service but also in access to therapy itself. Without losing sight of this clear contribution, it has to be taken into account that 55% of the interviewees recommended a progressive transition from direct to remote therapeutic assistance.
The users participated in every type of relational flow, in order of relevance: "mixed", "professional", and "external". Professionals were highly connected with each other and with patients, while patients were highly connected with professionals and less so with each other. This is a good example of the need to improve the relational network of patients to increase their levels of interaction and improve their social relations. --- Enhancement of social circles: The surveys, interviews, and focus groups found this to be the uHSN's most useful feature. Patients and professionals agreed that continued use of the uHSN enabled the creation of new communication channels between patients and health professionals, and improvement in empathy, reciprocity, and affective companionship for assistance and disease care. The enhancement of social relationships affects the perceived empathy for living with the disease and broadens social and personal circles, improving the perceived personalization of assistance and confidence in care. Based on the evaluation experience, it should be noted, as a recommendation for implementing the uHSN in other areas, that professionals should plan how to indirectly promote and involve non-professional users in the construction of their own social network, while interfering as little as possible in this utility. Another option is limiting external social-relation spaces to formal community activities and workshops. Emergent issues: Patients were concerned about the confidentiality, integrity, and provision of information in the uHSN, especially in forums and private messages; thus, services associated with uHSNs must bear this preoccupation in mind and inform the user adequately about data safety. In this sense, there were some worries about the challenge of encouraging the construction of social communities while avoiding overlapping and redundant use with other OSNs.
Thus, although it may seem obvious to designers and developers, as has been stated, teams must inform the user or client of the great difference between uHSNs and mainstream social networks. Another very interesting emerging issue, already mentioned briefly, is the fact that the professionals discovered and enjoyed unexpected uses of the uHSN outside of work that allowed them to increase their social cohesion and integrate new partners. --- DISCUSSION The new paradigm of the uHSN offers a viable and open alternative for managing the support of patients and their wider context, and it offers great development opportunities. The uHSN is thematic (related to a specific disease); private and secure (restricted to the use of patients, relatives, and health professionals); has specific objectives (around patient support); is small in size (from tens to hundreds of users); is supported by local therapeutic associations; is capable of integrating innovative services (connection with sensors or interoperability with hospital information systems, among others); is based on predefined relationships governed by Network Groups; and is designed through the "Community" methodology. The research carried out has shown the capacity and future projection of the uHSN, especially to connect the social and health worlds; to allow remote rehabilitation; to improve the efficiency of professionals; to strengthen or expand the patient's social environment; to improve the quality and immediacy of information; and to promote social and emotional support among actors. In addition, the uHSN overcomes the main limitations of the social health networks described in the literature: quality, reliability, confidentiality, and privacy. Given the heterogeneity of the scenarios, assessment methodologies, and user profiles involved, it is complicated to address in a single paper all the results, implications, and interrelationships the evaluation has revealed.
To ease this task, the discussion is organized thematically, differentiating between technical and methodological arguments, and covering the six areas recommended in the literature: privacy, security and transparency, validity assessment, design methodologies, system ecology, QoS, and technology enhancement. Technical implications: privacy, security and transparency, system ecology, QoS, and technology enhancement. In this context, some key questions are: Why a uHSN? What is its added value? Why not capitalize on established OSNs? The results obtained from the proposed assessments answer these questions and point to the following strong points of uHSNs: · Intensive possibilities for singularizing the flow and structure of social interactions, virtualizing and empowering patients' existing offline social networks, and creating new ones. The uHSN is sufficiently flexible to allow the creation of as many spaces as needed for every specific service requirement, confining the interactions related to that service to such a space and to the users accessing it. This led to a high level of connections and interaction ratios, as reflected in the results. · Predefined, known user roles and a sole administrator ensure the principles of therapeutic care: privacy, confidentiality, integrity, transparency, and provision of information. · As the uHSN has a controlled and known number of users and a univocal objective, it is easy for the institution to ensure information quality and moderation; that is, it does not constitute a great burden of extra work for the institution. · Personalized attention is highly valued, even indispensable, for users. The uHSN contributes to complementing the virtual and real worlds with 24/7 attention from therapists.
The results distinguished this aspect through the high degree of access to the uHSN: for example, just before the centers opened to the public and just before lunchtime, as shown in Figure 7. · The open, up-to-date technology used allows the inclusion of innovative services, enhancing the uHSN's possibilities. We specifically implemented a responsive theme and a medication-effect tracking service that serves as the start-up page when a patient accesses the tool from a mobile device. Furthermore, we developed a module for receiving data and accessing third-party data services. However, achieving these strong points demands discussion of the following aspects: · Design flexibility is a key response to an interactive and dynamic conceptualization of needs. · Integrating new virtual social existences with existing social organizations is essential. Neither disease type nor patient age is a decisive factor; the assessments determined that the most important factor is how new virtual networks are created, taking into account preliminary and "real" social networks. Groups and institutions managing their activities without OSNs build their own background, which should be considered an entity with its own particularities and requirements. For example, fostering social relations is easier in small-scale health care associations than in hospitals or other large institutions. Moreover, introducing other forms of interaction and communication, such as those performed in the uHSN as a complement to face-to-face therapy, as demonstrated in Figures 5 to 7, changes the entropic organization of social relations, adding new uses and forms of relationships. In conclusion, the assessment confirms that the format of the OSN is very important, and this includes how the OSN design is focused, how the user is analyzed in context, and how the OSN concept is understood.
· Individualizing and personalizing processes are essential in prospective and transfer terms. Although these processes are complex, the focus groups revealed the uHSN as an analyzer of organizational dynamics, questioning current practices and defining a turning point that implies not-well-defined risks. Currently, care providers usually spend a significant amount of time manually viewing and writing notes, some of which are never read; [31] uHSNs can make this easier, e.g., notes available online in blog format allow commenting and facilitate reading during downtime. However, the uHSN requires professionals to adopt a new concept of time scheduling, demanding new health content and other forms of social mediation adapted to new health care realities. --- Methodological implications: validity assessment, design methodologies, and system ecology From a methodological perspective, and as a research strategy involving both health communities, CBPR proved essential for meeting many key features of the uHSN: · Using iterative cycles of evolution, each with design, implementation, and evaluation phases, allowed the final system to evolve considerably and target real user needs. Beyond greater product maturity, the last iteration included 60% more functionality than the first. · As expected, [32] participatory and action-oriented methodologies allowed patient-centered participatory solution design, which truly fitted what users needed, as demonstrated in the results obtained from the surveys, interviews, and, mainly, the focus groups. · uHSN principles, like CBPR, should acknowledge a community as an entity and need to build on the strengths and resources within the community. [33] The qualitative evaluation showed that having a local entity providing existing real-world support enabled real involvement of end users, who considered the system theirs and not a third-party product.
The qualitative interpretation was consistent with the quantitative results, which showed high activity from the first day. Additionally, it allowed current procedures to be complemented and strengthened, easing platform adoption. · We used specific design methodologies to foster co-learning and capacity building among all project partners. [34] They allowed the creation of a common object world, [35] with unified objectives among all stakeholders; the empathy and commitment of the technical staff to the project; and better connections among needs, design, and implementation; in summary, a better final product. Our methodology also emphasizes the continuous evaluation of the collaborative process throughout the development of an intervention. Qualitative and quantitative evaluation of the participatory design process helps establish the principles and best practices for developing community-based systems for online support. [36][37][38] Both considerations reinforce the validity assessment. CBPR and engineering design facilitate intersubjective analysis in tandem, exploring the meaning and significance of people, user profiles, and cultural healthcare constructions. Beyond controlled trials as an exclusive methodological approach, our vision is sensitive to interpersonal relationships, interpersonal communication, behavior changes toward healthy lifestyles, and the impact of any change in professional intervention. Immersion in context makes it possible to understand local and singular situations in terms of result credibility, dependency, and consistency. [39] --- CONCLUSIONS AND FURTHER RESEARCH This paper contributes the design, development, and assessment of a new concept, the uHSN, defining two interaction areas and a new transverse concept of "network space segments" that provides timely interaction among all involved profiles and guarantees qualitative relationships.
As we have demonstrated, the uHSN overcomes the main limitations of traditional HSNs in the main areas recommended in the literature: privacy, security, transparency, system ecology, QoS, and technology enhancement. The research carried out with the proposed methodology contributes a complete, open, and modular platform that demonstrates its viability for use in all types of work areas; it also allows the scientific community to replicate the obtained results in very diverse environments with multidisciplinary professionals, and it works in scenarios with ecosystems of heterogeneous user profiles. From a methodological perspective, combining CBPR and engineering design methodologies proved useful in health projects. The proposed assessment processes were applied to a social-based solution for supporting patients with chronic disease in two real-life health scenarios: a Parkinson disease patient association and a Stroke rehabilitation service in a hospital. As main conclusions, the qualitative and quantitative findings demonstrate the following key points: · User acceptance of the uHSN, confirming not only the viability, replicability, and future projection of uHSNs for connecting the health and social worlds, but also the enhanced management of user profiles. · Improved productivity, by optimizing efficiency and efficacy and supporting distance rehabilitation, even with smart devices. · QoS enhancement, by guaranteeing privacy, confidentiality, integrity, transparency, and the provision of truthful information to all user profiles. · Fostered social relations, by expanding users' social capital, improving the quality and immediacy of information, and enhancing perceived peer/social/emotional support. As further research, it is necessary to work on transferring the uHSN to each health community and conducting an internal follow-up to assess its future sustainability and to continuously improve the platform.
Some alternatives are: new collaborations between health communities and companies or universities, integrating third-party systems for importing and creating new therapeutic content, and attracting funds and grants for developing related products. Looking ahead, the scientific contributions of the present paper are the first step not only in customizing health solutions that empower patients, their families, and healthcare professionals, but also in transferring this new paradigm to other professional and social environments to create new opportunities.
Objective: To contribute the design, development, and assessment of a new concept, Micro ad hoc Health Social Networks (uHSN), creating a social-based solution for supporting patients with chronic disease. Design: After in-depth fieldwork and intensive co-design over a 4-year project following Community-Based Participatory Research (CBPR), this paper contributes a new paradigm of uHSN, defining two interaction areas (the "backstage", the sphere invisible to the final user, where the processes that build services take place; and the "onstage", the visible part that includes the patients and relatives), and describes a new transversal concept, "network space segments," which provides timely interaction among all involved profiles and guarantees qualitative relationships. This proposal is applicable to any service design project and to all types of work areas; in the present work, it served as a social-based solution for supporting patients with chronic disease in two real-life health scenarios: a Parkinson disease patient association and a Stroke rehabilitation service in a hospital. These two scenarios shared the following main features: thematic (related to the specific disease); private and secure (only for the patient, relatives, healthcare professionals, therapists, and carers); with defined specific objectives (around patient support); small in size (from tens to hundreds of users); able to integrate innovative services (e.g., connection to a hospital information system or to health sensors); supported by local therapeutic associations; and clustered with preconfigured relationships among users based on network groups. Measurements: Using a mixed qualitative and quantitative approach over 6 months, the performance of the uHSN was assessed in the two environments: a hospital rehabilitation unit working with Stroke patients, and a Parkinson disease association providing physiotherapy, occupational therapy, psychological support, speech therapy, and social services.
We describe the proposed methods for evaluating the uHSN quantitatively and qualitatively, and how the scientific community can replicate and/or integrate this contribution in its research. The uHSN overcomes the main limitations of traditional HSNs in the main areas recommended in the literature: privacy, security, transparency, system ecology, Quality of Service (QoS), and technology enhancement. The qualitative and quantitative research demonstrated its viability and replicability in four key points: user acceptance, productivity improvement, QoS enhancement, and the fostering of social relations. It also meets the expectation of connecting the health and social worlds, supporting distance rehabilitation, improving professionals' efficiency, expanding users' social capital, improving information quality and immediacy, and enhancing perceived peer/social/emotional support. The scientific contributions of the present paper are the first step not only in customizing health solutions that empower patients, their families, and healthcare professionals, but also in transferring this new paradigm to other scientific, professional, and social environments to create new opportunities.
Background Conspiracy theories emerge as attempts to explain the eventual causes of substantial social, political, or health-related events and circumstances, claiming to identify hidden patterns and the culprits behind events, often promoted by influencers [1]. These theories try to render the inexplicable explicable and often appeal to people who are uncomfortable with uncertainty and randomness, especially during frightening times. Previous studies have demonstrated that during societal crises, conspiracy theories spread like wildfire [2,3]. The conspiracy realm appears to be an empty shell of sorts that aggregates ready-made narratives. Of note, a conspiracy theory is distinct from a rumor, which is a story of the unknown with suspect validity, as well as from an authentic conspiracy, which is an actual causal chain of events [4]. Conspiracy theories derive their power essentially from their believers and supporters [5]. Thus, the powerful and influential actors called conspirators are not necessarily individuals who hold real sociopolitical authority; they can even be powerless people, such as ethnic minorities. Apart from the identity of the conspirators, malicious intent is veiled behind these theories [6]. Conspiracy theories encompass sets of ready-made narratives that can be deployed in different events and situations, which is why intersecting and repetitive statements are noticed. Several reasons could explain people's reliance on and adherence to conspiracy theories [1,[7][8][9]. People may engage with conspiracy theories when epistemic, existential, and social needs are not fulfilled [3]. Epistemic need refers to people's motivation to maintain certainty, consistency, and accuracy in their understanding of the world [10,11].
The term "existential need" describes people's need to feel safe, secure, and in control of their environment [12], while social need refers to people's drive to uphold a favorable and positive social image of themselves and the community to which they belong [13]. Notably, conspiracy theories pervade several fields. For example, in the political world, the incorporation of conspiracy theories into propaganda is frequently used. The medical field is no exception, and belief in conspiratorial ideas is widespread [14]. Of note, the COVID-19 era created a sort of conspiracy playbook [15] whose ingredients can be effectively and easily applied to other infectious diseases. This playbook is intended to spread misinformation and doubt, delegitimize public health officials and institutions, and stoke fear and vaccine hesitancy [16]. The ongoing multicountry outbreak of monkeypox (MPX) in non-endemic countries brought into focus the issue of conspiracy theories regarding emerging virus infections [17], and COVID-19 has found a new contemporary companion in the health-related conspiracy theory realm. Following the COVID-19 playbook, MPX conspiracy theories started spreading online almost as soon as cases began to appear outside of sub-Saharan Africa [18][19][20], and social media outlets became hotbeds of misinformation. Although several prevalent MPX-related conspiracy theories are constantly evolving, an evident cross-pollination between diverse conspiracy theories was revealed [21]. This common phenomenon is owing to a self-perpetuating network of beliefs, in which narratives often overlap and copy each other [22]. Notably, it is anticipated that these narratives could ultimately mature into a single overarching conspiracy theory [19]. Along with the rise of MPX cases in different countries, the scourge of disinformation about them has grown as well [23].
However, conspiratorial beliefs regarding MPX are not a novel phenomenon, as similar concerns have been reported previously in regions where the virus is endemic [24]. For example, a previous study in the Republic of the Congo reported the endorsement of false notions, including beliefs that the virus was deliberately introduced into the area and even disbelief in the existence of the disease [24]. Currently, the main MPX conspiracies focus on the timing of the outbreak, the virus's real origin, and the usual "who profits from it?" question. One conspiracy theory concerned an alleged prophecy of the MPX outbreak by The Simpsons, a 33-year-old cartoon. Before the MPX outbreak, this cartoon was claimed to have foreseen other events, such as the COVID-19 pandemic [25]. Tweets and Facebook posts showing misleading images went viral [26]. Similar to COVID-19, MPX was regarded as a hoax created by a global cabal. It was also alleged that MPX could have political roots and that it could be human-made and deliberately released [27]. Other rumors that lacked any evidence connected the emergence of MPX to COVID-19 vaccines [28]. In addition, MPX is being weaponized to attack Black and lesbian, gay, bisexual, transgender, queer, and ally (LGBTQ+) communities across the world [29]. It should be noted that racial, homophobic, and transphobic tinges have always backed these kinds of conspiracy theories [30]. As in previous disease outbreaks, conspiracy theories have also focused on pharmaceutical companies' role in exaggerating the severity of MPX for financial and political gain and in marketing MPX vaccines [18]. However, the normalization and mainstreaming of conspiracy theories represent a big concern [31]. This trend is not going away easily and will likely continue to evolve, driven by several factors, such as the rise of social media and growing media polarization [32,33].
In addition, these theories are very useful politically because they are cheap and effective weapons. Of note, conspiracy beliefs can have significant consequences for the prevention, treatment, and aftermath of disease outbreaks [34,35], such as decreasing compliance with disease-prevention measures [36,37] and increasing people's reluctance toward vaccination [37,38]. To date, no study in Lebanon has tackled this subject, despite the widespread conspiracy beliefs in the country during the COVID-19 pandemic and the vaccine hesitancy encountered during the national roll-out of the COVID-19 vaccination plan. Therefore, it is important to explore the extent of belief in emerging virus conspiracy theories among the Lebanese population. Understanding conspiracy beliefs and their underlying mechanisms is of great interest because of their relationship with non-preventive behaviors, refusal of vaccination, and attitudes toward government. Thus, even comparatively small numbers of individuals who endorse conspiracy beliefs could undermine health-related efforts. This study provides an overall picture of the popularity of conspiracy beliefs about emerging diseases among the Lebanese population and offers a unique perspective on their associated factors, to orient interventions that aim to reduce people's reliance on conspiracy theories and, therefore, limit their harmful effects. The overarching objective of this study was to assess the extent of endorsement of conspiracy beliefs regarding emerging diseases, with a special focus on MPX, and to identify its associated factors. --- Materials and methods --- Study design As part of a larger project focusing on the knowledge, attitudes, and beliefs of the Lebanese population toward MPX, a web-based cross-sectional survey was conducted during the first 2 weeks of August 2022 among Lebanese adults.
All Lebanese adults aged 18 years or above, able to read and understand Arabic, and currently living in Lebanon were eligible to participate in this study. Given its online nature, this study excluded adults who lacked internet literacy and those who did not have access to internet service at the time of the study. In addition, non-Lebanese adults, Lebanese adults living outside Lebanon, and those who refused to participate were also excluded. A convenience sampling method was used to recruit participants from all Lebanese governorates. --- Questionnaire development A self-administered questionnaire was initially developed in English. The questionnaire was designed to cover MPX knowledge among Lebanese adults as well as their attitudes toward precautionary measures, country preparedness, and their beliefs regarding conspiracy theories. The internal consistency reliability of the English version of the knowledge, attitude, country preparedness, and conspiracy belief scores was estimated using Cronbach's alpha, where a value of α ≥ 0.70 was considered satisfactory [39]. To assess content validity and to confirm whether the tool adequately comprises all the items necessary to cover the study objective, an expert panel composed of eight members was appointed [40]. Experts were defined as individuals who had a good understanding of MPX and were aware of MPX-related conspiracy theories. The experts assessed the relevancy (or representativeness) and clarity of the items for measuring the construct operationally defined by these items [41]. Based on the content validity index calculated both at the item level and at the scale level, the panel of experts judged that the questionnaire had good content validity [42]. Following standard translation guidelines [43], the original English draft of the questionnaire was translated and adapted to Arabic. Any suggested change was resolved by consensus.
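The internal-consistency check described here (Cronbach's alpha with a 0.70 threshold) can be reproduced with a short routine. The formula is the standard one, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the response matrix below is invented for illustration and is not the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item columns (one list of scores per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance; the n-vs-n-1 choice cancels in the ratio
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical responses: 3 items x 5 respondents on a 1-3 Likert scale.
responses = [
    [1, 2, 3, 3, 2],
    [1, 3, 3, 2, 2],
    [2, 2, 3, 3, 1],
]
alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}", "(satisfactory)" if alpha >= 0.70 else "(needs review)")
```

In practice a stats package would be used, but the hand-rolled version makes the α ≥ 0.70 decision rule explicit.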
The reliability of the Arabic version of the questionnaire was also confirmed. A pre-test of the questionnaire was performed among 30 Lebanese adults from different Lebanese governorates to ensure the readability, clarity, and comprehensibility of the questions as well as the survey flow. Based on the feedback of the pre-test participants, minor changes were made to the questionnaire, including the replacement of some ambiguous words. The average time for completing the survey was 15 min. The questionnaire consisted of three main sections and open-ended questions: • The baseline characteristics section included sociodemographic variables (age, gender, marital status, urbanicity, education level, and occupation). Participants were also asked to rank their health and economic status. They were also queried about whether they had been diagnosed with MPX and whether they knew someone infected with MPX. • The attitude section comprised the following three subsections: a. Attitude toward MPX conspiracy beliefs: The items used in a study by Freeman et al. [44] on coronavirus conspiracy beliefs were adopted and extended to cover current MPX-related conspiracy theories and to assess the participants' attitude toward conspiracy explanations of emerging virus infections, particularly MPX. The assessment was performed using 16 items with a 3-point Likert scale of possible responses (disagree [1], neutral [2], agree [3]). As previously mentioned, the content validity of the scale was assessed, and the internal consistency of the score was ensured by calculating a Cronbach's alpha value. The following items comprised the emerging virus infections conspiracy scale adapted to the current MPX outbreak, as well as four additional items.
"I am skeptical about the official explanation regarding the cause of MPX emergence", "I do not trust the information about viruses, including MPX, from scientific experts", "Most viruses, including MPX, are man-made", "The spread of viruses including MPX is a deliberate attempt to reduce the size of the global population", "The spread of viruses including MPX is a deliberate attempt by governments to gain political control", "The spread of viruses including MPX is a deliberate attempt by global companies to take control, including pharmaceutical companies manufacturing vaccines", "The spread of MPX is a deliberate attempt to attack African people and to enhance discrimination", "The spread of MPX is a deliberate discriminatory attempt to attack LGBTQ+ people", "The control measures, including lockdowns, in response to emerging infection are aimed at mass surveillance and at controlling every aspect of our lives", "The control measures in response to emerging infection are aimed at mass surveillance and at destabilizing the economy for financial gain", "The control measures, including lockdown, are a way to terrify, isolate, and demoralize society as a whole in order to reshape it to fit specific interests", "Viruses including MPX are biological weapons manufactured by the superpowers to take global control", "Viruses including MPX were a plot by globalists to destroy religion by banning gatherings", "The mainstream media is deliberately feeding us misinformation about the virus and lockdown", "The COVID-19 vaccination is linked to the MPX outbreak", and "The Microsoft co-founder has a role in the outbreak". All attributed values were summed, and the overall score ranged from 16 to 48 points. Higher scores pointed to a higher level of conspiracy beliefs regarding MPX virus emergence and the subsequent intervention measures. Of note, the conspiracy belief scale was categorized into two levels, low and high, based on the scale median. --- b.
Attitude toward the country's preparedness and readiness to respond to a potential MPX outbreak: this subsection included 16 items covering the adequacy of the activities taken by Lebanon to prepare for a potential MPX outbreak, as well as participants' confidence and trust in the ability of the country to deal with such an outbreak. Of note, these questions were developed based on the current Lebanese situation and took the country's multilayered crises into consideration. Responses to the attitude questions were ranked on a 3-point Likert agreement scale ranging from "1" for "disagree" to "3" for "agree". A score of 1 point was given when the respondent selected the "agree" option, while "disagree" or "neutral" responses were given 0 points. Like the knowledge score, the attitude score in each domain was categorized using the original Bloom's cutoff point, as positive if the score was 60-100% and negative if the score was less than 60%. c. Lebanese adults' attitudes toward the effectiveness of the recommended prevention measures against MPX: this subsection comprised the following items: "Isolation is an effective technique to prevent the spread of MPX", "Regular hand hygiene, physical distancing, and facemask use could protect people from catching MPX", "Keeping up with the information regarding the government's call for MPX preventive efforts is important for the community", and "People with MPX who isolate themselves show that they have a responsibility in preventing the transmission of MPX". Similar to the previous subsection, responses were given on a 3-point Likert scale, and the score in each domain was categorized using the original Bloom's cutoff point. d. Knowledge about MPX: this section consisted of 55 items covering different aspects of MPX knowledge.
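The two scoring rules used in these sections, Bloom's 60% cutoff for the binary-scored attitude domains and the median split applied to the 16-48 conspiracy total, can be sketched as follows. All responses are hypothetical, and assigning scores equal to the median to the "low" category is an assumption the paper does not spell out:

```python
# Scoring sketch: Bloom's 60% cutoff and a median split (hypothetical data).

def bloom_category(score, max_score, cutoff=0.60):
    """'positive' if the score reaches 60% of the maximum, else 'negative'."""
    return "positive" if score / max_score >= cutoff else "negative"

def median_split(totals):
    """Label each total 'high' if it exceeds the sample median, else 'low'."""
    s = sorted(totals)
    n = len(s)
    med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return med, ["high" if t > med else "low" for t in totals]

# One respondent's 16 attitude items: 1 point per "agree", else 0.
answers = ["agree"] * 11 + ["neutral"] * 3 + ["disagree"] * 2
attitude = bloom_category(sum(a == "agree" for a in answers), 16)
print(attitude)  # 11/16 ~ 69% -> "positive"

# Five respondents' conspiracy totals (sum of 16 items scored 1-3).
med, labels = median_split([20, 34, 41, 30, 36])
print(med, labels)  # 34 ['low', 'low', 'high', 'low', 'high']
```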
A detailed description of this section has already been provided in another manuscript focusing on MPX knowledge among the Lebanese population. Each question was answered on a true/false basis with an additional "I don't know" option. A value of 1 was attributed to a correct response and a value of 0 to an incorrect or "I don't know" response. All attributed values were summed, and the overall score ranged from 0 to 55 points. Using a modified Bloom's cutoff point, the overall knowledge level was categorized as good if the score was ≥ 60% and poor if the score was less than 60%. Participants were also asked about their sources of information regarding MPX. --- Sample size calculation The required sample size was calculated using the Raosoft sample size calculator. As no previous study in Lebanon had examined the population's belief in MPX conspiracy theories, a conservative estimate of 50% was used. Based on a 95% confidence level and an absolute error of 5%, a minimum sample size of 383 was required. A larger sample was collected to reduce errors related to the sampling technique and to increase the study power; as a rough estimate, the calculated sample size was doubled, leading to a final sample of 793 participants. --- Ethical considerations Informed electronic consent was obtained from each participant before enrollment in the study. The research protocol was reviewed and approved by the ethics committee at Rafic Hariri University Hospital. All methods were performed following the ethical standards laid down in the Declaration of Helsinki and its later amendments. Participants were reassured that participation was voluntary and that they were free to withdraw at any time. In addition, information was gathered anonymously and handled confidentially. --- Statistical analysis The data, generated in an Excel spreadsheet, were transferred to IBM SPSS® version 24.0 for analysis.
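The sample-size figure quoted above can be approximately reproduced with the standard single-proportion formula that calculators such as Raosoft implement; the small difference from the reported 383 depends on the population size and rounding the calculator applies:

```python
# Sketch of the sample-size arithmetic: single-proportion formula with
# p = 0.5, 95% confidence (z = 1.96), and a 5% absolute error.
import math

def sample_size(p=0.5, z=1.96, e=0.05):
    """Minimum n for estimating a proportion p within +/- e."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

n = sample_size()
print(n)  # 385 before any finite-population correction
```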
Descriptive statistics were reported as frequencies with percentages for categorical variables and as means with standard deviations for quantitative variables. Responses to all questions were mandatory; therefore, there were no missing data to impute. The normality of the distribution of the knowledge scale, the conspiracy scale, and each attitude scale was confirmed by calculating skewness and kurtosis values, which were all lower than 1 [45]. The chi-squared test was used to assess associations between categorical variables. All variables that showed a p value < 0.2 in the bivariate analysis were included in the multivariable analysis as independent variables. Binary logistic regression using the stepwise method was conducted to identify the correlates of the conspiracy belief scale, after checking for the absence of multicollinearity. P values less than 0.05 were considered statistically significant for two-sided tests. --- Results --- Baseline characteristics of the participants A total of 793 Lebanese adults from all Lebanese governorates completed this study. Most of the participants were female, and nearly half of them were married and aged between 18 and 30 years. Around three quarters of them had a university education or above and were living in urban areas. More than 70% of them worked outside the healthcare sector. Of note, the majority of surveyed adults reported good health status and were not suffering from chronic diseases or immunodeficiency. In terms of the economic situation, more than three quarters ranked their current economic situation as low. --- Attitudes of the Lebanese adults Conspiracy beliefs toward MPX After checking skewness and kurtosis, the EVICS score was found to be normally distributed. The MPX conspiracy scale ranged between 16 and 48, with a mean of 32.67 and a median of 34.
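The skewness/kurtosis screen referenced above (absolute values below 1 taken as approximately normal) can be sketched in pure Python; the scale scores below are hypothetical:

```python
# Normality screen via moment-based skewness and excess kurtosis.
# A scale is treated as roughly normal when both are below 1 in
# absolute value, as in the paper. Scores are hypothetical.

def moments(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n   # population variance
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3
    return skew, excess_kurtosis

scores = [28, 30, 31, 32, 33, 33, 34, 35, 36, 38]
skew, kurt = moments(scores)
print(abs(skew) < 1 and abs(kurt) < 1)   # True -> treat as roughly normal
```

Note that SPSS reports bias-corrected sample skewness and kurtosis, which differ slightly from these population moments for small samples; the decision rule is the same.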
Regarding the conspiracy scale items holding the highest level of agreement, 59.6% of the study respondents agreed that the spread of the MPX virus is a deliberate attempt to reduce the size of the global population. Similarly, 56.6% of them endorsed the idea that the spread of MPX is a deliberate attempt by authorities to gain political control, or by global companies, including pharmaceutical companies, to take control for financial gain and the marketing of MPX vaccines. Around half of the Lebanese adults found that most emerging viruses, including MPX, are planned and man-made. Of note, 34.7% of Lebanese adults were skeptical about the official explanation regarding the cause of MPX emergence, and 33.0% of them did not trust the information given by scientific experts about MPX. As for considering viruses including MPX as biological weapons manufactured by the superpowers to take global control, only 20.9% of the study respondents agreed with this claim. Other MPX conspiracy theories, such as those tinged with a discriminatory background, were less widely believed by the Lebanese population. For example, a limited proportion of Lebanese believed that MPX is a deliberate discriminatory attempt to attack African people or LGBTQ+ people. Remarkably, only 2.1% of respondents linked the COVID-19 vaccine to the emergence of MPX. --- Attitudes toward the preparedness and readiness of the country Briefly, 4.4% of the surveyed adults thought the country had adequately scaled up its preparedness and response plan for MPX, and only 10.3% of them believed in the aptitude of the Lebanese population and the health authorities to control a potential outbreak locally. Around three quarters of them believed that MPX would add a burden to the Lebanese healthcare system, which already suffers from longstanding weakness, and that this would give the MPX virus a chance to become entrenched.
Furthermore, around half of them thought that Lebanon would be unable to cope with a potential MPX outbreak because of its current economic crisis. They also alleged that employees' strikes would delay the detection of suspected MPX cases and that allowing people to travel to countries with MPX epidemics is dangerous. Around half of them thought that the Lebanese response to MPX was sluggish and timid and that obstacles to good preparedness were systemic, existing at every level of government. In terms of MPX surveillance, half of the surveyed adults assumed that surveillance is spotty, and 35% of them believed that MPX is currently spreading in Lebanon and that the real number of cases is likely to be much higher than the official counts. As for laboratory testing capacity, only 18.5% agreed that Lebanon has the laboratory capacity to rapidly detect and gauge the extent of the outbreak. In terms of MPX awareness, less than 5% of participants thought that the MOPH had well informed the public regarding MPX, its risk factors, and its specific precautionary measures. Accordingly, 67.2% of them assumed that Lebanese people lacked knowledge about MPX and needed to learn more. --- Attitudes toward the effectiveness of the precautionary measures As shown in Fig. 2, 68.3% of participants agreed that keeping updated on the recommendations issued by the government regarding MPX preventive measures is essential for combating the disease. In addition, more than half of them believed that isolation is an effective measure to stop the spread of MPX, and that regular hand hygiene, physical distancing, and facemask use could protect people from being infected with MPX. Of note, 48.3% of them thought that people with MPX who isolate themselves show that they have a responsibility in preventing the transmission of MPX. --- Overall knowledge level of MPX among the Lebanese population The majority of the Lebanese population had a poor level of knowledge regarding MPX.
--- Attitude scores As shown in Fig. 3, the majority of surveyed adults had high conspiracy beliefs toward MPX, and more than three quarters of them exhibited a negative attitude toward the government's preparedness for a potential MPX outbreak. On the other hand, the majority of them revealed a positive attitude toward MPX precautionary measures. --- Relationship between conspiracy beliefs, knowledge, attitude toward government, and attitude toward precautionary measures A significant difference in conspiracy beliefs was found between participants with a poor level of knowledge regarding MPX and those with good knowledge: poor knowledge was significantly associated with higher conspiracy beliefs about MPX. A significant difference in the level of endorsement of conspiracy theories was also found between adults with a positive attitude and those with a negative attitude; a positive attitude toward precautionary measures or government preparedness was associated with lower conspiracy beliefs. --- Relationship between conspiracy beliefs and sociodemographic variables Table 3 shows the relationship between the conspiracy belief score and the sociodemographics of respondents. Gender, age group, marital status, education, health status, and economic situation all had a significant relationship with MPX conspiracy beliefs. Age was positively associated with the MPX conspiracy belief score, with most participants aged 50 and above scoring high on conspiracy beliefs. A gender difference was also revealed, as females showed lower MPX conspiracy beliefs than male participants. A high level of MPX conspiracy beliefs was more likely to be found among divorced/widowed participants than among married and single participants.
A significantly lower level of MPX conspiracy beliefs was found among participants with higher education levels than among those with a secondary level of education or below. No difference was found in terms of urbanicity or occupation. There was also a significant association between the MPX conspiracy score and both health status and economic situation: participants with lower economic backgrounds and with fair health status reported a higher level of conspiracy beliefs. As shown in Fig. 4, a significant difference was also revealed between governorates of residence, with participants living in Bekaa showing a higher level of MPX conspiracy beliefs. --- Relationship between conspiracy beliefs and sources of information A significant difference in MPX conspiracy beliefs was revealed according to the sources used to get information about MPX. Respondents who used social media reported higher conspiracy beliefs than those who did not use this source. Those relying on health authorities, health professionals, and health websites reported lower conspiracy beliefs than surveyed adults who did not use these sources. As for traditional media and family or friends, no significant difference in MPX conspiracy beliefs was revealed between users of these sources and non-users. --- Factors associated with MPX conspiracy beliefs A high level of conspiracy beliefs was negatively associated with female gender, geographical area, and good health status. Female participants were less likely to strongly embrace conspiracy beliefs than males. --- Discussion Conspiracy theories go viral in times of societal crisis [2,14], and belief in health conspiracy theories is prevalent [14]. The COVID-19 playbook developed during the pandemic has served as primary material for any infectious disease conspiracy theory, and now it is the turn of MPX to face the same destiny as COVID-19.
Based on data from a survey of the Lebanese population, this study is the first to explore the extent of belief in emerging virus conspiracy theories, with a special focus on MPX, among the Lebanese population and to examine how MPX knowledge, sociodemographic variables, and sources of information correlate with conspiracy beliefs. It provides a backdrop of conspiracy beliefs in Lebanon and could help the country prepare for a potential outbreak and combat the concomitant infodemic by reducing people's reliance on conspiracy theories, thus limiting their harmful consequences, especially since the public seems vulnerable to social media outlets afflicted by a scourge of misinformation. Of note, the results of this study, conducted at the national level, corroborate and extend those of existing research at the international level. --- Main findings A high level of conspiracy beliefs regarding emerging virus infections, including MPX, was found among 59.1% of the surveyed adults. The extent to which people embrace these theories varied significantly across geographical regions and sociodemographic characteristics, as well as with knowledge levels and attitudes toward the government and precautionary measures. In addition, social media was implicated in increasing conspiracy beliefs among the population. The wide prevalence of such beliefs regarding MPX among the Lebanese population is expected, especially in the era of pandemics. A previous study showed that believing in health-related conspiracy theories is universal [14]. In addition, emerging diseases, including COVID-19 and MPX, can be an almost ideal breeding ground for these beliefs [20]. Our results were consistent with the findings of a study conducted among Jordanian university students, who also exhibited a high level of conspiracy beliefs despite their higher educational level [46].
One possible explanation for the wide endorsement of conspiracy theories among Lebanese people could be their inability to gain control over the multilayered crises the country is facing in the real world. Previous research reported that conspiracy thinking is common especially during times of economic, political, and social crisis [47]. Hence, the overlapping emergencies in the country, including the ongoing economic and political crises as well as the strained public health system, could explain to a certain extent the temptation of Lebanese adults to fall back on conspiracy theories to rationalize the unexpected. For that reason, they may try to construct an illusion of control to adjust to their stressful life events [48]. In addition to the lack of a sense of control, the information and knowledge vacuum regarding MPX revealed in a previous study could trigger such conspiracy thinking. Humprecht et al. theorized that a country's resilience to conspiracy theories depends on several media-system, political, and economic indicators [49]. Finally, this inclination among Lebanese adults to believe in conspiracy theories and misinformation should not be marginalized or regarded as a fringe phenomenon with a minor impact on real-world engagement, as several studies conducted across different countries demonstrate the detrimental consequences of these beliefs on the self, others, and society at large [34,50,51]. These consequences essentially affect the prevention, treatment, and aftermath of disease outbreaks [34,51], such as compliance with disease-prevention measures [36,52] and vaccine hesitancy [37,38]. Regarding the most strongly endorsed conspiracy theories, participants particularly embraced those linking the virus to a deliberate attempt to reduce the size of the global population, to gain political control, or to secure pharmaceutical companies' financial gain, in addition to the man-made origin of MPX.
As in previous disease outbreaks, conspiracy theories focus on pharmaceutical companies' role in exaggerating the severity of MPX for financial and political gains [18] and for the marketing of MPX vaccines. It was also alleged that the virus was man-made and deliberately released. Of note, these conspiracy theories were copied from the COVID-19 playbook, which is not surprising, since the re-emergence of MPX coincided with the COVID-19 pandemic and both are viral infections. Findings from this study also indicated significantly low agreement about the government's preparedness and response toward a potential MPX outbreak, and this negative attitude was found to be linked to a higher level of conspiracy beliefs. These findings can be understood in light of the current Lebanese situation, as the country navigates dark times and the population struggles amid economic collapse and political instability [53]; emerging diseases have added fuel to the fire. In addition, the country faces a growing shortage of medical supplies and essential medicines, leaving the most vulnerable people at risk [54]. Therefore, one possible explanation could be the disgust of the Lebanese population toward the political system, which increases epistemic mistrust toward government preparedness and the tendency to believe in conspiracy theories. Our results were consistent with the findings of a previous study on the association between disease-related conspiracy theories and lower levels of trust in governmental and health institutions [55]. The unveiled negative attitude toward government preparedness highlights the important role of the government and health authorities in communicating risk and in involving the mass media to shape the community's beliefs about the government's actions and to boost people's trust in the adequacy of these measures.
On the other hand, this study revealed a positive attitude toward the effectiveness of precautionary measures among the majority of Lebanese adults. Such a finding could be attributed to the good level of knowledge in this domain found by Youssef et al. in a recent study conducted among the Lebanese population [56]. Of note, earlier studies [57,58] suggested that the success of national response strategies in fighting emerging diseases, and the extent of a community's compliance with preventive measures [59], depend on that community's attitudes toward the importance of those measures. In other words, these attitudes play a key role in inducing people's self-protective behaviors [60,61]. Furthermore, the study findings meaningfully extend previous research on the link between conspiracy theories and attitudes toward precautionary measures in the context of emerging diseases: a high level of conspiracy beliefs was found among people who exhibited a negative attitude toward precautionary measures. These results were in line with a previous study which found that disease-related conspiracy beliefs are associated with less willingness to follow restrictive measures to inhibit the further spread of disease [52]. In terms of sociodemographics, some interesting findings came to light. First, age was significantly associated with conspiracy beliefs: older respondents believed more strongly in these narratives than the younger generations, whereas other studies have emphasized that young people are more inclined to embrace such beliefs [62,63]. Older Lebanese individuals live with virtually no national welfare system and are stranded amid their country's worsening economic catastrophe. In their prime, they survived years of civil war, economic crises, and bouts of instability, and many of them now live in poverty as a result of one of the world's greatest financial crises of the past 150 years.
Therefore, older people who feel powerless may be more prone to believe emerging disease-related conspiracy theories. Although older generations tend to consume slightly less social media, a study showed that people affected by historical traumas tend to interpret current events using conspiratorial frameworks developed during the traumatic event, a "traumatic rift" [64]. Older adults therefore represent an important target group to address specifically when planning information or prevention campaigns, using easily accessible and understandable information about the outbreak. Second, gender was a significant factor in believing in conspiracy theories: this study showed that males were more likely to believe in conspiracy theories than females. However, no clear pattern of gender differences in endorsing conspiracy theories has emerged from previous studies, and their results are mixed. For example, some studies reported that women are more likely to consider COVID-19 conspiracy theories [65][66][67], others found no gender differences [44,68], while some found that men are more likely to endorse COVID-19 conspiracy theories [69], which is in line with our results. Interestingly, marital status was significantly associated with the level of emerging disease conspiracy beliefs. Although some studies did not find any association between marital status and conspiracy beliefs [70], this study showed that divorced/widowed adults were more prone to believe in conspiracy theories than married or single adults. One possible explanation could be that beliefs in conspiracy theories are remnants of human adaptation to historical traumas, and that a high level of belief in these theories is related to personality characteristics, antecedents, and a sense of powerlessness [71,72]. Divorce can be psychologically traumatic because, if unexpected, the individual may feel shocked and powerless.
Hence, a divorcee may also feel pain, confusion, and deep emotional scarring. As for the widowed, the death of a partner constitutes a traumatic event in itself. In addition, loneliness and the inability to rely on a partner could also increase the embrace of conspiracy beliefs: a study showed that people who were less secure and more avoidant were among the individuals endorsing the conspiracy item [73]. Given these results, it may be worth exploring further factors driving higher conspiracy beliefs in this group. One peculiar finding was that adults living in the Bekaa governorate exhibited a higher level of conspiracy beliefs than participants from other governorates. It should be noted that geographical social structures that shape citizens' feelings of vulnerability and powerlessness can predict conspiracy beliefs [74]. In general, people from Bekaa are considered conservative, and studies have shown that conservative people are more likely to endorse conspiracy theories [75]. In addition, belief in conspiracy theories is highly sensitive to social context; a deeper understanding of Bekaa's environment and of the drivers of this higher level of conspiracy beliefs among its people is recommended. Inconsistent with previous research [1,46,76,77], educational attainment was not found to be associated with the extent of endorsement of conspiracy beliefs among the Lebanese population. Adults with good health status were found to be less likely to embrace emerging disease conspiracy beliefs, which could be explained by the fact that this group feels safer and does not consider itself at high risk of the disease. Regarding the economic situation, this study showed that participants in a low economic situation were more likely to embrace conspiracy beliefs; previous studies have revealed relationships between lower income [78] and a higher endorsement of conspiracy theories.
In addition, people who perceive their economic situation as deteriorating tend to view the alleged perpetrators as collectively hoarding resources. In terms of knowledge level, participants with low knowledge levels were found more likely to endorse conspiracy beliefs. In line with our findings, a study assessing knowledge of MPX viral infection among the general population in Saudi Arabia found that myth believers had low MPX knowledge levels [79]. Likewise, a study conducted by Sallam et al. [46] showed that a high MPX knowledge score was associated with a lower embrace of conspiracy beliefs. One possible explanation is that knowledge fosters logical, analytic thinking and provides rational explanations that counter conspiracy theories. In line with the literature, this study showed that the use of social media to get information about MPX was associated with a greater level of conspiracy beliefs. Several studies have reported that conspiracy thinking is associated with a tendency to acquire information through digital media, including the internet and social media [49,80,81]. This was an anticipated and unsurprising phenomenon, as digital media spread misinformation and panic at the same velocity [82] as important late-breaking data. Social media has therefore been considered a breeding ground for misinformation such as conspiracy theories [80], which are repeated and perpetuated, keeping public health constantly in the crosshairs. However, the negative side of social media is accompanied by a potential positive impact; hence, it is crucial to improve our understanding of the flow of information and to do more to protect people from harmful content related to outbreaks. --- Limitations The results of the study should be interpreted in light of several limitations. The cross-sectional design of the study does not allow us to infer causality; hence, its findings should be interpreted as correlational.
Selection bias is possible because of the convenience sampling technique used to collect data, which limits the generalizability of the findings; however, our sample was large enough to decrease, to some extent, bias related to the sampling technique and to increase the study power. Some drawbacks related to the online nature of the study should also be acknowledged, such as the difficulty of obtaining a truly random sample, as participation was limited to those with internet service, active users, and those available at the time the researchers posted the instrument and started data collection. In addition, the survey may only have been completed by those who were digitally literate and sufficiently interested in the topic to take the time and trouble to respond. Finally, this survey was distributed through social media platforms; therefore, there is no way of identifying and describing the population that could have accessed and responded to the survey. --- Implications While the world is fighting emerging diseases, it is also combating an infodemic in which falsehoods tend to spread faster than truths and evidence. Therefore, conspiracy beliefs should receive considerable attention during health crises. The present findings add to the body of research on conspiracy beliefs by assessing the level of conspiracy beliefs and their associated factors. Our findings can also be helpful for better planning for a potential MPX outbreak, especially since cases of MPX are still scarce in the community. Interventions emphasizing and boosting analytic thinking, through the provision of logical and rational arguments against specific conspiracy theories, should be developed by policy-makers to reduce the appeal of these theories among Lebanese adults. In addition, enhancing feelings of trust, security, and confidence among the public is also recommended.
As we prepare for MPX, it is also important to deepen our understanding of how people think, feel, and behave during disease outbreaks. Since there is still much unexplored territory in the psychology of conspiracy theories, future studies are suggested to explore the potential impacts of these theories on health behaviors that prevent the spread of infectious diseases, as well as the human tendency to believe conspiracy theories. Furthermore, longitudinal studies assessing conspiracy beliefs at multiple timepoints are of great interest. Finally, disgust could be associated with people's perception of moral violation in the system and make people suspect that the government is not telling them the whole story about MPX; therefore, an extensive examination of the role of disgust toward the political system in conspiratorial tendencies among the Lebanese population is recommended. --- Conclusion Considering the high level of conspiracy beliefs toward emerging diseases found among the Lebanese population is a vital step in helping the country better prepare for an outbreak. Policy-makers should be vigilant and take this phenomenon seriously. Finding ways to reduce people's reliance on conspiracy theories is, therefore, imperative. Future studies exploring the potential impacts of conspiracy theories on health behaviors are recommended. --- Competing interests The authors declare that they have no competing interests. ---
The non-endemic multicountry outbreak of monkeypox (MPX) has emphasized the issue of conspiracy theories that go viral in times of societal crisis. Now it is the turn of MPX to join COVID-19 in the conspiracy theory realm. Social media outlets were flooded by a scourge of misinformation as soon as MPX cases began to appear, with evident cross-pollination between diverse conspiracy theories. Given the adverse consequences of conspiracy beliefs, this study aimed to assess the extent of endorsement of MPX conspiracy beliefs among the Lebanese population and to identify its associated factors. Methods Using a convenience sampling technique, a web-based cross-sectional survey was conducted among Lebanese adults. Data were collected using an Arabic self-reported questionnaire. Multivariable logistic regression was performed to identify the factors associated with the MPX conspiracy beliefs scale. Results Conspiracy beliefs regarding emerging viruses, including MPX, were detected among 59.1% of Lebanese adults. Participants particularly endorsed the conspiracy theories linking the virus to a deliberate attempt to reduce the size of the global population (59.6%), to gain political control (56.6%), or to secure financial gain for pharmaceutical companies (39.3%), in addition to the man-made origin of MPX (47.5%). Remarkably, the majority of surveyed adults exhibited a negative attitude toward the government's preparedness for a potential MPX outbreak. However, a positive attitude was revealed toward the effectiveness of precautionary measures (69.6%). Female participants and those in good health were less likely to exhibit a higher level of conspiracy beliefs. Conversely, divorced or widowed adults, those with a low economic situation, a poor knowledge level, and a negative attitude toward either the government or precautionary measures were more likely to exhibit a higher level of conspiracy beliefs.
Notably, participants relying on social media to get information about MPX were also more likely to have a higher level of conspiracy beliefs than their counterparts. Conclusion The widespread endorsement of conspiracy beliefs regarding MPX among the Lebanese population urges policymakers to find ways to reduce people's reliance on these theories. Future studies exploring the harmful impacts of conspiracy beliefs on health behaviors are recommended.
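The multivariable logistic regression named in the abstract can be sketched in a few lines. The example below is a hypothetical illustration only: it fits the model by plain gradient ascent on synthetic data, with predictor names and effect directions chosen to mirror the reported findings (female sex and good health protective; low economic situation a risk factor). None of the coefficients or data correspond to the study's actual dataset.

```python
import numpy as np

# Hypothetical sketch of a multivariable logistic regression, of the kind used
# to identify factors associated with high MPX conspiracy beliefs.
# All data below are synthetic; only the effect directions mirror the abstract.
rng = np.random.default_rng(0)
n = 5000
female = rng.integers(0, 2, n)       # 1 = female (reported as protective)
good_health = rng.integers(0, 2, n)  # 1 = good health status (protective)
low_income = rng.integers(0, 2, n)   # 1 = low economic situation (risk factor)
X = np.column_stack([np.ones(n), female, good_health, low_income])

true_beta = np.array([0.2, -0.6, -0.5, 0.8])  # assumed values, not the paper's
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)               # 1 = high level of conspiracy beliefs

# Fit by gradient ascent on the log-likelihood (no external stats library).
beta = np.zeros(X.shape[1])
for _ in range(2000):
    pred = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - pred) / n

odds_ratios = dict(zip(["female", "good_health", "low_income"], np.exp(beta[1:])))
print({k: round(float(v), 2) for k, v in odds_ratios.items()})
```

Odds ratios below 1 (female, good health) indicate lower odds of high conspiracy beliefs, and above 1 (low income) higher odds, matching the direction of the reported associations. A real analysis would use a dedicated routine (e.g., statsmodels' `Logit`) to also obtain standard errors and p-values.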
INTRODUCTION On November 30, 2022, OpenAI released a demo of ChatGPT, a chatbot powered by a large language model. The chatbot's impressive ability to converse and provide information immediately drew international attention, attracting over one million users in just a few days [102]. While LLMs had been under active development for several years, and build on technologies that have been researched and used for decades, ChatGPT's release was heralded as a disruptive moment in which the general population, as well as many technologists, became aware of a significant leap in AI's capabilities. Since then, the rapid uptake of generative AI systems such as ChatGPT [121], Bard [2], DALL•E [120], Imagen [128], and Midjourney [106] has been accompanied by striking narratives. These include discussion of its increased ability to perform complex tasks and predictions about how generative AI will disrupt knowledge industries. These narratives imagine generative AI as a resource that can automate much of the knowledge work currently done by humans, thus having detrimental effects on labor, such as eliminating significant numbers of jobs across multiple industries [150]. However, while initial evidence points to productivity gains when generative AI is used for particular tasks [28,156], much remains unknown regarding its future impact. This unique moment in the uptake of generative AI offers a timely opportunity to deeply consider expectations of its future use. To better understand how people anticipate generative AI may affect knowledge work in the future, we conducted participatory research workshops with knowledge workers from seven industries. 2.3.1 Generative AI Impacts on Worker Tasks. HCI scholars are beginning to examine the impacts of generative AI on different kinds of work tasks. A dominant thread of research focuses on different writing tasks [63,130,138,142,162,164], including marketing slogans [35] and storytelling [131]. Generative AI systems can significantly influence what topics users write about and how they are framed [78].
For instance, when people use a generative AI writing assistant, they write more frequently about topics suggested by the system [123]. Writing suggestions made by generative AI systems also influence the tone and sentiment of communications [10,18], including length and how generic the text is [11]. However, user experience of generative AI systems is not simply passive, but also involves cognitive work to negotiate machine-in-the-loop writing approaches [42,142]. As creative work is often process-focused rather than outcome-focused, gaps remain in our understanding of the impacts of generative AI in explicitly commodified labor contexts. An emerging thread of HCI scholarship has examined potential impacts of generative AI on workers in professionalized settings, including how it may improve knowledge workers' task efficiency [7,167]. A limited body of research has examined impacts on specific professional sectors, notably creative [44,68,77] and educational [5,85,100] domains. While some research suggests that, overall, creative professionals are not yet worried about generative AI-related job displacement, they flagged concerns related to worsening work quality, an erosion of the creative process, and training-data copyright [77]. Within the education domain, research has focused on how generative AI can support classroom teaching in terms of preparing course materials [100,135] and student learning outcomes [85]. However, there remains a dearth of research on generative AI's impacts on professional industries, particularly outside of creative contexts. --- Sensationalized Generative AI Narratives The rapid uptake of generative AI systems has been accompanied by a Sensationalized Narrative about how it will disrupt society and work. Building on previous AI narratives, it contains dramatic or even hyperbolic claims about topics such as AI sentience and labor displacement.
Leading figures in AI have framed expectations in evocative terms, for example likening AI to "new electricity" to emphasize the expectation that it will disrupt most or all industries [114]. The media, futurists, and others have also explored generative AI's potential, pondering for example whether AI could "replace humans" [98]. Exaggerated claims about machine intelligence envision a global underclass of human workers supplanted by generative AI [80], with one headline musing: "Could ChatGPT Write My Book and Feed My Kids?" [73]. While such narratives may draw on claims made by AI developers to predict a worst-case scenario for knowledge work, they often incorporate evidence in misleading ways. Moreover, they rarely engage the perspectives of knowledge workers with expertise in their field. While some argue generative AI is at the peak of inflated expectations [45,154], sensationalized AI narratives can narrow and close public debate [84]. In response to the hype cycle [81], there have been calls for more measured narratives that create space for publics to meaningfully consider and grapple with the labor questions of generative AI [16,140]. This includes consideration of the specific ways generative AI may interact with different kinds of professional work [82] and also broader knowledge industry dynamics. --- Public Perception of AI Because the public's awareness of generative AI is quite recent, little research has been done on public perception of it. However, a fair amount of research has been done on public perception of AI more generally, much of it survey-based [8,14,20,31,52,60,83,97,112,115,125,136,147,163,166]. Respondents typically expect AI will have a significant impact on the future, and often anticipate that beneficial effects are possible, with the most favorable impressions in emerging and/or Asian markets and more negative impressions in Western countries such as the US [8,52,60,97,112,115,147,163].
At the same time, AI is neither interpreted as exclusively beneficial nor exclusively disadvantageous, and public response often indicates contradictory emotions [20,87,88,109]. Job loss, increased social isolation, privacy, and other social topics have been highlighted as key concerns [9, 52, 86-88, 125, 143]. In fact, a recent Pew survey found that, among the 37% of US respondents who indicated they were "more concerned than excited" about the increased presence of AI in daily life, about one in five explicitly mentioned job loss as the main reason for their concern [125]. --- METHODOLOGY To explore the expected impact of generative AI applications in knowledge industries, we conducted participatory research workshops with 54 participants from seven knowledge industries in three US cities. While our institution does not have an IRB, we adhere to similarly strict standards. Our research objectives were to learn more about the following: • How do knowledge workers expect generative AI will affect their industries? • How do knowledge workers view generative AI in relationship to other changes they anticipate in their industry? --- Workshops Participatory workshops have a long methodological history in HCI, used frequently in participatory design engagements [70,94,133,149], living labs [46], hackerspaces [58], and more. The participatory workshop as a research site and method has roots in participatory action research [37]. Inspired by participatory action research, we used participatory workshops to engage with specific communities of practice representing knowledge industries. Rather than focusing on a design process or future technological possibilities, our participatory approach centered on the elicitation of nuanced contextual information about each industry, participant perspectives, and generation of use cases for generative AI.
This approach was grounded and situated in the participants' expertise and experiences, and supported by conversation with the researchers as experts familiar with technological capabilities and possibilities of generative AI. We thus structured our workshops to include participant-directed educational activities and a mix of probes and provocations to scaffold generative discussion. We held one workshop per industry in July 2023 in Columbus, Ohio; New York City, New York; and Oakland, California. These cities were selected as centers of activity for the respective industries, as well as to represent a socio-geographic range. Each workshop was held in person for three hours at a third-party facility. Three to four researchers from Google and Gemic moderated each workshop to facilitate and share information about generative AI. Participants and researchers sat together around a large table to facilitate group discussion. Two videographers also attended each workshop, and participants were aware that several researchers and staff were in an observation room or viewing remotely. Participants were aware of Google's involvement in the study. We followed COVID-19 precautions, and refreshments were available throughout the workshops. Before the workshops began, we asked participants to draw a map of their industry as a short pre-work activity (Appendix: Figures 1 and 2). Workshops ran as shown in Table 1. We began with introductions and expectations for the workshop, and then invited participants to share any prior experiences or impressions of generative AI. We then gave a presentation to provide participants with a common working understanding of generative AI to ground the subsequent activities, allowing a generous amount of time to answer questions.
The presentation included a brief overview of how generative AI models learn and generate content, how they differ from other AI models, and key characteristics and risks of generative AI. After a meal break, we introduced participants to an artifact that we designed specifically for the workshop: a large, physical card that we term a change card, which encourages participants to reflect on important changes that could happen in their industry in the next one to two years; changes did not need to be related to generative AI. Each participant spent about 10 minutes filling out their individual change card. We then facilitated a group discussion in which participants shared one or more of their change cards and responded to each other's ideas. Following another short break, we invited participants to write down the broad contours of a policy to guide the use of generative AI in their industry. Participants were again given about 10 minutes to work individually before taking turns presenting their policies in a facilitated discussion. We followed the discussion with an exercise in which participants voted on which aspects of generative AI they find most helpful and most detrimental for their industry. After a brief discussion of this voting activity, in the final minutes of the workshop, participants were encouraged to speak about the potential impact of generative AI outside of their work. Throughout the workshops, we took care to encourage collaborative interpretation, problem-solving, and discussion among participants and moderators, and to make space for all participants to share their ideas and opinions. --- Analysis All sessions were recorded and transcribed verbatim with an automated speech-to-text service, and we then manually corrected the transcripts against the original recordings. All artifacts were collected and archived. We analyzed data from the corrected transcripts and artifacts inductively.
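As an aside on the transcript-correction step just described: the gap between an automated transcript and its human-corrected version is commonly quantified as word error rate (WER), a word-level edit distance. The sketch below is illustrative only, with invented example sentences; it is not part of the study's procedure or data.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits turning hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented example: a manually corrected line vs. the raw speech-to-text output.
corrected = "generative AI should assist employees but not do their work"
automated = "generated AI should assist employees but not to their work"
print(word_error_rate(corrected, automated))  # 2 errors over 10 words -> 0.2
```

A lower WER means the automated transcript needed less manual correction; libraries such as jiwer implement the same metric for larger corpora.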
Drawing on reflexive thematic analysis approaches [22,23], four authors reviewed transcripts in multiple configurations of paired and independent review, and one of these authors conducted comparisons with corresponding artifacts. These four authors engaged in deep and prolonged data immersion and discussion, independent open coding followed by memo writing to generate themes [19], and multiple rounds of collaborative discussion to compare their interpretations of the data reflexively and finalize themes. --- Limitations Several limitations of our study methodology should be considered when interpreting this work. First, it carries with it the standard issues associated with qualitative methodologies and group interviews. We conducted the research in only three US cities with only one group per industry, and our small sample was not statistically representative of the roles or demographics of the professional fields we explored. Our findings should be viewed as a deep exploration of our participants' perspectives on their industries, but should not be taken as generalizing to their industries as a whole. Second, our choice of teaching content and activities, while appropriate to our research objectives, may have influenced participants, although we tried to minimize any effects through participant-led discussion. Finally, while we confirmed our understanding of participants' comments during the sessions, and one or more of the authors have experience in most of the industries represented, our interpretations may lack context or nuance that would have been more readily available to members of the same professional categories. --- FINDINGS In this section, we present our findings. We begin by describing participants' expectations of generative AI's impact, including a dominant narrative that emerged across the groups.
We then turn to four current social forces that shaped our participants' perspectives on how generative AI will affect their industries: deskilling, dehumanization, disconnection, and disinformation. Finally, we describe some of the unique perspectives within particular knowledge industries. --- Expected Impact of Generative AI In this subsection, we give an overview of participants' perspectives on how generative AI may impact their industries. (We note their perspectives were informed by content in the workshop as well as their prior exposure. Almost all participants had heard of generative AI or a specific generative AI system, e.g., in the press or from colleagues, friends, or family. Many participants also had at least light experience using an app, especially ChatGPT or Midjourney, for either personal or professional purposes.) We begin by briefly summarizing the participants' dominant narrative regarding generative AI, which we term the Effort-Saving Tool Narrative. For ease of comparison, we also briefly summarize the Transformative Narrative and the Sensationalized Narrative introduced in Sections 2.3 and 2.4, respectively. We introduce these narratives to situate participant data against extant meaning-making discourses, in alignment with reflexive thematic analysis [24, p.211]. In the remainder of this subsection, we describe the Effort-Saving Tool Narrative in more detail. "Within law firms there's some work product that you produce that takes a lot of thought and decision making and some creativity to really draft something that is unique in the way that you put it together. There's other..." Effort-Saving Tool Narrative - Participants largely envision that it will be realistic and desirable to use generative AI as a tool to perform menial work, subject to human review. Further, they believe that existing guardrails in their industries can be leveraged and augmented to perform such review.
They expect certain roles will be impacted, but for most industries they do not anticipate substantial transformation or elimination of a wide range of jobs. In most cases, participants did not anticipate change as broad as that predicted in the Transformative Narrative or the Sensationalized Narrative. Transformative Narrative - Typically shared in technical reports or peer-reviewed papers by think tanks, consulting firms, or academic groups, and echoed in some rigorous news reports, this narrative takes an analytic approach to arguing that generative AI will have broad, substantial impact across industries, jobs, and tasks. Work in this vein often includes projections, outlining a range of possibilities, but usually including an exceedingly high upper bound for such estimates. While this narrative is not as catastrophic or exaggerated as the Sensationalized Narrative, it usually predicts transformative change. Sensationalized Narrative - A predominant discourse in social media and many news articles, this narrative makes dramatic or even hyperbolic claims about how generative AI will change or replace human labor in the future. It often draws comparisons between generative AI and previous historic innovations or even "the big bang". In some cases this narrative originates from leading figures in AI [101] who are arguably framing evocative messages for the lay public. However, it often misrepresents expert opinion or incorporates evidence from sources such as the Transformative Narrative in misleading ways, and emphasizes anxiety-inducing worst-case scenarios [118]. Participants were typically enthusiastic about offloading menial tasks to generative AI, both because they were tedious and because they felt the time saved would allow them to focus on more meaningful, human aspects of their work.
For example, in the mental health group, participants were attracted to the idea of generative AI taking over rote work, such as note-taking or patient intake, thereby increasing their efficiency and freeing up more time to focus on interpersonal work with their clients. "I wish AI could help us with note-taking in any way. That's the part of our job that I hate the most. It's so laborious... it takes up so much of our time and if there's anything that could support us with that, that would be amazing. I don't know how it would do that, but if it could, I would cry for joy." -M1 Further, some greatly valued the ability to scale and handle high volumes of work. For example, B6 had recently sent out 350,000 emails over a three-month period, and imagined generative AI could increase consistency and speed. 4.1.2 Generative AI is a useful tool: "Maybe a helpmate but certainly not in control". Consistent with their orientation to the use of generative AI for busywork, participants dominantly oriented to generative AI as a tool for human workers. For example, some participants specified that generative "AI tools should assist" employees, but not "do" their work. Many emphasized generative AI should not go beyond tool status and perform certain kinds of knowledge work, e.g., generative AI should not be used for decision-making, setting strategic direction, or forming human connections. "I think to use Gen AI as a tool to assist anyone in the legal system is not necessarily a bad thing so long as it remains just that - a tool." -L5 "For tedious tasks, and time consuming aspects of the business, allow yourself to lean on an AI for support. But always push yourself and your team to be the creative ones." -A1 4.1.3 Generative AI requires human review: "You're still gonna need someone to follow up and review it". Participants in all groups were concerned about generative AI's potential to make mistakes or produce undesirable output.
For example, participants in the advertising and business groups shared concern that generated content might violate brand standards or copyright, and lawyers spoke of the need to attest to the accuracy of legal documents. "I don't trust it yet because in law of course we gotta go to deep research to make certain that the sources are correct before we would ever use it." -L2 "You can't trust that it's accurate. So [that makes] me hesitant about using it going forward for anything that I take seriously." -L1 One participant recounted a news story in which an attorney submitted a legal brief created with ChatGPT, which included "hallucinated" cases: L6: I would've won so many cases if I could have done that... L7: That's right. Exactly. And then this guy, the attorney that did it, he's kind of... a national joke. Accordingly, the overwhelming sentiment in all groups was that humans would need to check most or all of generative AI's output to ensure its quality. In some fields, participants stipulated that this check must be performed by a qualified professional. "The AI would produce something, and then we would say yay or nay." -J5 "[Generative AI code] should never be something that goes to production without human testing and review." -S7 "Generative AI can assist attorneys... but all its work must be proofed by an individual licensed to practice law." -L7 "A lot of this problematic stuff that we're hypothesizing is coming out of this idea that AI could be this standalone [advertising] team doing every single thing from start to finish and releasing this ad without consent of anyone or something like that. Which I think is still pretty sci-fi. There will be, I think, a big chunk of those jobs that just kind of go away. Like the simple headshot photographers or the person who does passport photos, things like that. I don't know. At this point it seems a little bleak and ambiguous, but I could see it happening...
I feel like the working jobs, the creative jobs, the smaller ones will go away and then it'll just be more top heavy companies where they're overseeing the technology. A5: Isn't that just evolution? We went from no machines to machinery and people lost millions of jobs. Is it just not a part of evolving? I feel like there probably are still some human aspects that could go into AI... Is there a human there to say, 'Oh, this isn't right.'? Maybe like quality control. Some also emphasized that knowledge workers need to adapt and "reformat their skillset" in order to stay relevant and not get left behind. Many felt that even if workers in entry-level or similar roles in their industry were affected, they personally were somewhat unlikely to experience significant impact, feeling confident, for example, that generative AI is unlikely to be skilled enough to replace human professionals within the timeframe of their own careers. While some did acknowledge the potential for industry-wide shifts, particularly in software development, advertising, or business, sentiment such as the following was generally the exception rather than the rule: "As a communicator, it's a little intimidating how good the bot writers can be, and I know our Bangalore teams are looking at hiring bots or putting them in place rather than hiring humans. So, there's an excitement factor in terms of cost savings as an executive, but also an intimidation factor as a communicator to see that a whole industry of journalists or people who major in [communications] could be replaced to a degree. So, I think specific jobs, organizations, entire industries, the world will be affected by that." -B4 "Writers could start to become obsolete in journalism [because of generative AI]... We could potentially be pushed to the back burner and not relied upon whatsoever to produce content. To be frank, I'm very afraid and believe that I may have to change career paths soon.
" -J5 Further, participants in most industries appeared to feel other forces are more disruptive than generative AI. We discuss some of these in Section 4.2 and we discuss industry-specific expectations further in Section 4.3. At the same time, participants were somewhat uncertain about both their own projections as well as broader narratives. To bolster their reasoning, they sometimes drew analogies to other resources such as the internet, Google Search, or industry-specific databases, such as LexisNexis in the legal profession. "I see certain things as tools that replaced old tools. This is not really a new tool. I think it's just an enhancement of something we already have. For example... Lexis. " -L7 Relative to the Sensationalized Narrative or even the more moderate Transformative Narrative, participants tended to take an even more tempered view of likely changes in their respective industries. While some participants were aware of media claims, they approached them with skepticism. In some cases, they literally discounted these narratives by proposing adjustments to reported statistics. At the same time, they were uncertain, and sometimes found the claims worrying even when not fully convinced. "I've seen it time and time again where everybody [in media companies] runs to the same goal post. Like the pivot to video is the most famous one. It's exhausting as a journalist and we watch all these people get laid off and then a year later they say, 'Never mind, it wasn't the answer we thought it was, back to where we started. ' <J2 interjects: [Remember] podcasting? <laugh» Yeah, exactly. We've all seen it. So I don't know how ChatGPT shakes out in this whole picture, but I do think it's probably not as insane as some people are making it in terms of the tidal changes that it'll make to our industry. But it is probably like 60% of the way there. 
" -J3 L4: Goldman Sachs says that AI will take over three hundred million jobs away in the coming years.7 L7: Three hundred million? L4: Three hundred million. Moderator: Do you find that plausible? L7: So all of the US will be laid off. We asked participants to describe important changes coming in their industry over the next one to two years. While some of these changes were driven by generative AI, many were not. For example, mental health professionals expect broader legalization of psychedelics will be transformative in their field. In some cases, participants expected that generative AI would interact with other changes to bring about particular impacts on their industry. Most prominently, four existing global and national forces framed participants expectations of generative AI's impact. Specifically, they anticipated generative AI would amplify the following social forces [39,159] that shape how knowledge workers do their work in their industries: deskilling, dehumanization, disconnection, and disinformation. --- L4: --- Deskilling: The "Uberfication" of Knowledge Work. Independent of generative AI, several forces are actively shifting revenue and employment in certain knowledge industries. For example, the emergence of remote services, such as BetterHelp and LegalZoom, was accelerated on both the supply and demand side by COVID-19, as both providers and clients often desired virtual sessions. Such services provide clients easier access to mental health, legal, or other advice. While these services may reach a larger number of people at a more affordable price, participants argued they provide lower quality service and undermine demand for highly trained professionals. M6 described how telehealth tech startups pose a greater employment threat than generative AI: "Things like BetterHelp are the biggest threat to our job security/livelihood. Because I think it's an Uberfication of mental health care. 
One of the things that BetterHelp really advertises is you can text with your therapist, but their actual guidelines are you will get a text a day from your therapist and they have like 24 hours to respond. So people who are using [BetterHelp] are actually not that happy with it." -M6 The "Uberfication" of work refers to a broader shift in the political economy of how work is "arranged through the use of digital technologies...to create 'platform capitalism'" [141, p. 61]. A key element of these labor conditions is supplanting employees with precarious self-employed workers [64]. Participants, especially lawyers and journalists, reflected on a significant shift towards contract, freelance, or "permalance" positions. Driven by bursty demand for expertise as well as cost-saving measures, these positions are precarious and do not offer the same benefits as full-time employment [160]. Generative AI may interact with these extant labor trends to further deskill industries or reduce opportunities for highly trained professionals. For example, generative AI could propose advice that less trained human workers could pass on to clients in cheaper online services. Some participants also speculated that non-permanent employment would be easier to replace with generative AI than full-time positions, since few employment protections would be in place. Participants also drew connections between the gig economy, generative AI, and white collar work. J2 has worked as a labor journalist, previously writing about the impact of technology on blue-collar workers. With the rise of generative AI over the past year, his concerns have become more immediate and personal. "As a worker myself... I wasn't too worried about [the digitization of the workspace] until now because, you know, they're digitizing a factory or whatever... now it's actually starting to impact my industry.
And all those questions that were arising about factory workers or Uber drivers or whatever, they're starting to be asked about us [journalists] now, to apply to us as well." -J2 Participants discussed how these forces can affect their wages and opportunities. For example, M6 discussed how these changes could reduce his wages or even undermine his investment in a specialized career, while also reducing quality of service: "I'm up to my neck in student loans still. Part of me is like, 'Well, it's great if BetterHelp can be affordable for people.' But what about my job security, my ability to pay off my student loans? ... And then the idea of generative AI doing chat... I do have concerns that this could be used in a way that would undermine our job security, but also I think actually not provide the level of care for the consumer as well." -M6 S7 shared similar concerns that generative AI might make her hard-won skills less valuable and set back her career: "I didn't really know that I as a girl with a non-traditional background could be an engineer. And I also saw it as a way to make a lot more money than I was making. It was definitely a way for me to double, triple my salary and have access to a life that I could only dream of. And so it is kind of a bummer, even though it feels like a very first world problem, to be like, 'Oh, now I'm back to potentially in a few years not having a very valuable skillset or a skillset that everyone can have'... It is a little disheartening to be like, 'Oh, finally ahead. I'm finally at this place I wanted it to be,' and now I have to play catch up again." -S7 Participants also expressed concern that, given its fitness for menial tasks, generative AI might eliminate many entry-level positions and therefore remove pathways into more senior roles. L2: Bypass the expense of... A person who's not been involved in litigation has no idea all the things we have to do to protect them... AI can't do that...
These issues around deskilling, menial work, the gig economy, and more widespread job loss all arise in the context of broader economic uncertainty in the US. Participants, at times, speculated on how these changes to jobs could potentially be addressed through social welfare approaches like universal basic income, but were not confident governments or employers would meaningfully address these labor changes, or other ethical issues related to generative AI, due to their position operating within a capitalist environment: S7: I just feel like under capitalism there can be no good AI... I'm still figuring out how I truly feel about AI. I don't think it's inherently evil or bad, but I think when you're talking about using it at a company or corporation, the people at the top are always concerned about making a profit and cutting out jobs and whatnot. So I just am like, the end stage of this, no matter how good it can be, it just always feels really icky and bad. S5: Ideally it's like the Star Trek universe. S7: Yeah. S5: I'd be into that <S7 laughs>, nobody has to work. We're all like happy. S7: That would be great. But then there's no one at the top getting all the profit. S5: Yeah. --- Dehumanization: "Whose job will it be to find out how to incorporate human nature into the AI?". Replacing human workers with algorithms raises concerns about dehumanization in which the task loses a characteristic of how humans interact with each other [59], becoming more sterile and impersonal [139]. Many participants expressed concern that use of generative AI might in various ways lead to a loss of humanity. One issue was that they felt generative AI does not have the capacity to perform interpersonal work. For example, business communications professionals spoke of the importance of personal touch and authentic human communication, and mental health professionals voiced that generative AI cannot establish human-to-human rapport, which is required for therapy to be effective.
Participants also spoke of how the public's aesthetic standards could shift over time to less natural or less human content. "I believe when AI created content begin to seep into our feeds, we will eventually accept its strange aesthetic as normal. This could create an entire new aesthetic of digital media as we know it. I'm concerned. " -J8 Participants also feel joy in performing tasks themselves, an emotional experience not provided by using generative AI. The introduction of algorithmic technologies into organizations in dehumanizing ways erodes workers' sense of autonomy [113,137] and overall job satisfaction [92]. Participants echoed these insights in the context of the introduction of generative AI into their industries. "From the photography viewpoint for me, they're replacing actual picture taking with just generating in AI, and the whole reason I like photography is I like holding a camera. I like framing it. I like the interaction with the person and all that. And generative AI just completely gets rid of all that. The human connection... " -J6 "There are folks who get their joy and their sense of meaning from writing code and that's kind of their thing... these are people that spent years mastering this trade. And I think that's where it gets sad. " -S2 Moderator: So should [an undergraduate] go get a computer science degree where they're going to do a lot of math and code, or should they go to some prompt engineering boot camp? ... S2: Well, it's the same question in the context of art school. Should someone go and learn how to paint and all the intricacies involved there? Or is it futile because a computer can spit out that painting for you? I don't think there's a right answer. ... S1: I don't think it's a bad time at all for a computer science degree yet. At least for our generation or in my opinion even the next one. ... S2: If someone wants to go to school for computer science, because...
learning and tinkering is fun, they should do it and don't let the robots stop them. Participants also spoke of the value of human production and creativity. Their comments were reminiscent of the Arts & Crafts movement, which challenged the integrity of mass produced objects, associating them with dehumanization and a decline in production standards [122]. The Arts & Crafts movement valued artisanal production and human craft, even though it was less efficient than the mass production that became common in the industrial age. Similarly, participants spoke about the value of making things by hand, and the high quality and meaning of human-created objects. "I think that there's potential for real things outside of screens, like actual photos in an album. Like who makes those anymore? But I think things like that would become almost like sacred and similar to... the dumb phone movement straying away from smartphones and screens. I think that there's gonna be at some point a push towards that. Like the things you could physically touch and make with your hands... Wanting the personal touch of knowing it took time and intention and purpose and thoughtfulness to create something. I think there's real beauty in that. And I guess I don't really see the beauty in AI. " -A1 "I think a lot of skilled labor and artisan style jobs are going to come back into fashion and it'll be very hard for AI to replace them. My boyfriend went to furniture school recently and so he's a carpenter and a fine furniture maker. And I was just kind of musing about how right now I'm the breadwinner, but I think soon he might be the breadwinner because there's something about something that's made by hand, something that is of quality. We live in this fast fashion age where things are just mass produced. And I think there's a real desire for something that is made with care that will last, that is creative... It takes a lot of personal skill and a lot of physical labor to make a table...
and I have a hard time thinking that generative AI is really gonna be able to replace that. " -S7 Some participants emphasized that people must retain critical thinking skills as well as the ability to do the type of work that is being offloaded onto generative AI, and that people must not become lazy. "I hope my daughter never discovers [ChatGPT]... I don't want it to take away creativity. I want my daughter to think of her own thoughts. Like when she's doing her studies. That's really important to me. " -L3 E6: It's kind of like a calculator, right? They have to learn how to do the math first before they're allowed to just do it all using the calculator. ... E7: You get this Chat thing and all this AI [that] just makes kids' lives easier. And people are like, 'Well what's wrong with that?' There's a lot wrong with that <E2 laughs>. "I also worry about the generational effects of generative AI. Because I think at the very base, it's another way for people to use their brains less. " -J8 "I think you need to actively not be lazy, because it's so helpful. You need to say, 'Okay, I'm not gonna just let this responsibility fall onto this machine. I'm gonna still really play the biggest role here... '" -A1 At the same time, some recognized a delicate line between being lazy and being efficient, and therefore saving time for more meaningful activities. A7: I assume I'd be pretty lazy right now if I was in high school and I would just try to take advantage of [generative AI] as much as I could... You can literally not learn for the rest of your life if you don't want to now. ... A6: Is it lazy or is it clever? Are you learning the long way or are you streamlining the process? Several participants offered a literal mechanism for retaining humanity in the use of generative AI, proposing a specific quota, such as "at least 80% of words, photos, everything on our site is created by human minds" or "60% [of the job] needs to be conducted by humans.
" 4.2.3 Disconnection: "You have to be there". Feelings of disconnection can manifest in knowledge work, e.g., new tech-mediated working arrangements can increase feelings of worker isolation [152] and broader social trends in perceived isolation can exert external pressures on knowledge work practice [55]. Participants spoke about increased social disconnection, such as the so-called "loneliness epidemic" [117] stemming in part from COVID-19 and the corresponding social isolation and increase in remote/virtual work and schooling, as well as other factors, such as escalating phone and social media use or even addiction. They were concerned that generative AI would contribute to further disconnection from physical and social experiences. For example, mental health professionals worried that therapeutic uses of generative AI could have the opposite of intended effects and exacerbate loneliness, even as patients turned to it for comfort: "I hear people talking about the uses of AI [for mental health] in the chatbot model, and to a great extent, I'm wildly against it. Because I think that a big part of the reason we have the increase in people needing mental health services is because of disconnection. And I think that while it might really have helped your client in that moment to have a bot to chat to, it's still not a replacement for human interaction. And I think that that disconnection is a lot of what is driving this [mental health] crisis. " -M6 Our participants also described how good journalism requires being "on the ground" and how the extant rise in freelance and remote work threatens the quality of news production, a trend that could be further exacerbated by the use of generative AI. J8 characterized the future of journalism with generative AI as "just more recycled and disconnected from reality, allowing people more to be separated from what they're doing instead of fully immersed in it. 
" Participants also expressed concern about content becoming increasingly disconnected from reality over time as new generative AIs are trained on layer upon layer of recycled, increasingly non-human content. "As journalists, when we cover things, we are currently on the ground doing reporting, talking to human beings and things like that. And more and more it's becoming like we're talking to people over Zoom or we're not being flown out to places to cover things anymore and we're hiring freelancers... We get a bunch of footage from overseas and then we have to make something from it because we've hired freelancers there instead of "I think it's a lot of dumbing down and recycling that's happening. I'm worried about the long-term effects of recycling things that have been recycled already with text and images and what that will do to ideas. " -J8 4.2.4 Disinformation: "I think they call them hallucinations... it puts it out so authoritatively". Disinformation is the intentional spread of false, inaccurate, or misleading information designed to intentionally cause public harm [74], and its production is often motivated by "ideology, money, and/or status and attention" [103, p. 27]. Participants expressed concern about the role generative AI may play in disinformation, and more broadly in the production of low quality content, particularly in the broader context of US discussion of polarization, culture wars, and media bias. One concern was that since generative AI typically incorporates internet data in its training, its quality would be undermined by existing low-quality content on the internet: "It can't just pull it from the internet as it is right now because there's so much misinformation pre-AI. " -J7 "I [am] really concerned about AI learning things all over the internet, just chaotic. But if there's an AI that collects everything from [an authoritative source]... I'll be really excited about it. " -A4 "Can you tell me where all of these images came from? 
Can you tell me where you got this person's blog post? Who is that person that wrote it? A lot of stuff on the internet, especially with Reddit and things, are anonymous and that's the good and bad thing about them. And so as you put them out into the world, into this product that everyone can use, how do you even begin to really make sure it's unbiased?" -S7 Participants further expressed concern that generative AI would itself proliferate, or would be leveraged to produce, additional low-quality content. They were worried about misinformation and disinformation currently in the media and online, and saw generative AI as a tool that could make it even easier and faster to produce harmful content. They anticipate this will have significant negative social and political repercussions. Participants were also concerned that media upheaval, caused by wide-ranging factors such as economic or political instability, is currently driving news aggregation and media concentration, thereby eroding the quality of information available to the public and "dumbing down", homogenizing, or politicizing content. They were concerned generative AI could exacerbate these issues. J2 spoke of media trends that started in Europe: "[A change] that is not caused by generative AI, but that will be made worse probably by it is the further media concentration. That's caused by billionaires buying media left and right, agglomerating them and altering and then sometimes increasing their editorial lines, to their benefit... [as] a tool for [their] political agenda... I believe generative AI might make it even worse because all of a sudden these people will be able to buy and concentrate media. If anyone has any ethics issues, well, you know, the door's right there. And generative AI can do it for even cheaper anyway... just buy a newspaper, fire the entire newsroom... we'll just run it by generative AI.
" -J2 --- Industry Perspectives In the previous sections, we drew themes across the industries we studied. In this section, we provide additional detail for each industry. To illustrate the most salient points and capture the unique character of each discussion, we created composites consistent with the content, language, and tone of responses we received from each group [40,161], shown in Table 3. These composites illustrate how themes discussed in previous sections play out across industries; for example, which roles participants think are most likely to be affected in their industry. --- DISCUSSION In this section, we build on our participants' insights to surface key HCI research questions at the intersection of generative AI and knowledge work. Rather than focusing on current user practice or design recommendations for early versions of these systems, we seek to frame larger questions about how generative AI's impact may be understood and shaped. Some of this work falls squarely in the purview of HCI, but much of it likely requires broader collaboration with other academic disciplines as well as stakeholders such as knowledge workers, policy makers, civil society, and more. --- Research Challenges: Human-in-the-Loop Participants overwhelmingly favored a human-in-the-loop approach as a necessary and sufficient remediation for many problems with generative AI. For example, they expected that in their industries, human reviewers could check all of generative AI's output and correct any inaccuracies or other quality issues. Legal scholars report that regulators have a similar inclination to address a "panoply" of concerns about AI with a "slap a human in it" approach [41]. However, the HITL literature points out serious, unsolved practical problems that were not apparent to participants. 
For example, humans make many errors when reviewing algorithmic output and often override algorithms in detrimental ways; therefore, human oversight policies can provide a false sense of security rather than improving outcomes overall [29,66]. Further, research has shown that effectively configuring human-AI coordination is extremely difficult, so handoffs are often poorly designed and yield harmful results [41,53,67]. Additionally, scholars argue that HITL approaches disproportionately hold humans accountable, even when in practice they have very little influence or do not have appropriate skills or time to review, leaving reviewers to shoulder the blame for technical or structural failures [13,41,48,53,67,155]. This suggests two main research questions: #1 How do we raise awareness of the limitations of HITL? We believe it is important to exercise caution when applying HITL solutions. Those who are making decisions about mitigations for generative AI should be aware of the current limitations of HITL so they are not overly optimistic about its potential to remediate problems. #2 How do we make better HITL systems? Despite its current limitations, many opportunities exist to improve HITL's effectiveness. The advent of generative AI particularly highlights the need for HITL approaches that will work well at scale for review of generated text, images, and video. Also, cognitive forcing interventions can engage analytical rather than heuristic thinking [29]. --- Mental Health (deskilling, dehumanization, disconnection) We work on one of society's most important problems: mental health is a national crisis, exacerbated by loneliness. There is a huge shortage of qualified mental health professionals, and therapy requires human-to-human connection which a computer cannot provide, so we feel our jobs are extremely secure against generative AI. However, we worry about telehealth services that provide low quality service but still undermine our job security.
Generative AI may be a useful tool for administrative work, simulated talk-therapy training, or as a stopgap when people can't immediately access a human therapist. --- Education (human review, dehumanization, disconnection) We don't expect generative AI to affect our work deeply. Students already have many ways of cheating, and generative AI is just a newer, better way that we deal with by reviewing their work. We're worried that generative AI may make students lazy if it's not used well. We're also sensitive to overuse of technology and social isolation because of remote schooling during the pandemic. We're in the middle of restructuring much of our curriculum due to standards-based grading. This gives us more latitude to design new activities, so maybe there are opportunities to work generative AI into our lessons. --- Law (menial tasks, tool, human review) Generative AI is a tool that can perform formulaic legal tasks like drafting or research, provided its work is reviewed by qualified attorneys, replacing entry-level and support positions and contributing to job loss. We've heard press stories about false citations, which makes us wary of using it, even with human oversight. Generative AI is unlikely to be skilled enough to replace human professionals, and it should never make decisions, or act as lawyer, judge, or jury. However, we anticipate clients will begin to use generative AI tools to do some legal work for themselves, with poor results. --- Journalism (deskilling, dehumanization, disinformation) We work in a noble profession and we are already embattled on many fronts, facing misinformation, news aggregators, decreased revenue streams, and more. Generative AI will exacerbate ongoing precarious/freelance employment, poor wages, job loss, and the erosion of journalism as a field. Generative AI makes it easier for low-quality providers to produce mis/disinformation, with harmful effects on society.
Generative AI may replace some of the work we most enjoy, like writing or photojournalism. --- Business Communications (tool, human review, need to adapt) Generative AI can do a lot of our work, and it will inevitably be adopted in our industry. Companies and employees need to adapt, learn, and "grow with AI" in order to stay relevant and protect job security. Much of our writing is formulaic, and generative AI is a great tool to produce early drafts of routine, high-volume, and/or disposable communications like email notifications. Generative AI will need a lot of human oversight and guidance, especially to make sure it meets compliance and legal standards and is consistent with our brand voice. We can leverage our existing approval structures to check its output. --- Advertising (tool, human review, lose certain jobs) Generative AI is well suited to many tasks in the advertising industry. Sadly, certain jobs will go away, most immediately those related to product photography and video production, with layout and copywriting under heavy threat. However, creative oversight will remain in the human purview, and generative AI will be an exciting brainstorming tool. A human check will be required for most generative AI output, especially to make sure it doesn't violate brand standards or copyright. In fact, new teams may be created to do these checks. Our industry promotes and embraces change, but we also value human craft and we often challenge the quality of digitally produced content and experiences. --- Software Development (need to adapt, deskilling, dehumanization) Generative AI is likely to drive change across most aspects of software development. This includes, for example, the automation of menial and boring tasks, the elimination of many entry-level roles, and the ability of AI to reshape low-code technology solutions and even write production-ready code within serious software companies.
We feel uncertainty about the future and pressure to adapt to new advances in generative AI. Some of us worry that the time and education we invested in a lucrative career might be nullified, setting us back to where we started. Some people get into software engineering for the money, but some of us are motivated by our love of tinkering and problem solving, and generative AI may take over a lot of the work that brings us joy. Table 3. Composites illustrating the most salient points of participants' expectations regarding the impact of generative AI on their industry, with industries ordered top to bottom by least to greatest expected impact. Under the name of each industry we list the top three most salient themes. Neither the top three themes nor the composites comprehensively represent all points, e.g., although deskilling was discussed extensively in Law, it was not one of the top three. Responsible AI product assessments often center on anticipating harms from launching a product [21,25] through consideration of how its contextual use may engender harms to users and communities [6]. Many responsible AI interventions orient towards developer-facing interventions, such as developing AI principles [57], educating practitioners to foster ethical awareness [93], and moderating generative AI systems through training data mitigations, in-model controls, and safety classifiers that gate outputs [69]. However, limited attention has focused on assessing communities' desired forms and uses of AI. Meaningfully engaging communities of practice [158] in exploration of these questions, such as through community-based participatory research [37,71], can inform development of practices that scaffold community engagement into responsible AI practice. Similarly, in a labor context, complicated questions arise regarding how to assess the impact of AI systems that may reduce or drastically change specific jobs.
While companies may often consider the needs of an overall business, how should they consider the needs of individual workers, particularly those whose jobs may be cut or changed, and are there ways to directly engage them for input before developing these AI solutions? And how can we educate or support decision-makers in engaging stakeholders? --- CONCLUSIONS The historical context in which technology appears influences how it is ultimately adopted and used. Generative AI has become more available and visible during the confluence of a global pandemic, economic uncertainty, and more. Many knowledge workers in our study situated generative AI in this context, highlighting generative AI as exacerbating the following four social forces: deskilling, dehumanization, disconnection, and disinformation. In other words, rather than seeing generative AI as an independent disruptor of their work or their industry, they positioned generative AI as extending and exacerbating existing forces. This framing raises important new questions and opportunities for exploring and shaping the future impact of generative AI on knowledge workers and their industries. --- B INTRODUCTION TO GENERATIVE AI A key component of our workshops was offering participants a foundation for thinking about generative AI. To this end, early in each workshop we led an education section about 40 minutes in length. We began with a 20-minute presentation covering: • a shared definition of AI • a very condensed history of AI and generative AI, focusing on key concepts like the early aims in developing AI • a brief, non-technical explanation of what has changed recently with transformer models and LLMs • 15 key concepts regarding characteristics, benefits, and risks of generative AI systems, to refer to throughout the workshop Participants were encouraged to ask questions at any point during the presentation, and then we spent an additional 20 minutes on further questions and discussion. 
In each workshop, the presentation and Q&A were led by one of two authors, both of whom are researchers who work in AI. We lightly customized materials to each group's industry. The definition we provided is: "Artificial Intelligence is the ability of a computer or a machine to think or learn, " and we provided additional color on how we think about the terms: "computer or machine, " "think, " and "learn. " The 15 concepts we shared include the following:
• Bias - Generative AI tools may reflect social biases that are present in their training data
• Bland - Generative AI often generates "flat" or generic text, unless explicitly directed to do otherwise
• Brainstorming - Generative AI tools can create outlines, lists, drafts, possible solutions, and more
• Emergent Properties - Generative AI models may seem to possess abilities they were not designed to have
• Falsehoods - Generative AI can fabricate information or sources, or get facts wrong, yet seem confident and compelling
• Grammatical - Content generated by text-based Generative AI tools can be well-written, using good syntax and avoiding typos
• Identifies Tacit Structure - Generative AI can uncover steps and processes which were not previously articulated
• Memorization/Privacy Breaches - Generative AI may generate content identical to its training data
• Mimicry - Generative AI can be asked to mimic genre, tone, phrasing, visual style, or more
• Non-Deterministic - Generative AI models can give responses which are variable, not consistent. This means that when users input the same or similar prompts, the system may not respond in the same way
• Provenance Is Unclear - Generative AI tools may not be able to reliably trace specific content back to a direct source in training data
• Remixes - Generative AI always generates content based on its training data. It can recombine data in unique ways, but is limited to re-mixing training data
• Safety Not Guaranteed - Generative AI tools may have built-in safety systems to attempt to prevent certain types of content or topics, but these are not infallible
• Scale / Speed - Generative AI, like other AI and ML systems, is able to consider large amounts of data and handle many tasks, over and over again, extremely quickly
• Tweakable - Through "prompt engineering, " Generative AI tools can often be influenced to generate content in a certain way
--- Reviewers could receive specialized training in critically reviewing generative AI's output; these review skills might ultimately be at least as valuable as other skills like prompt engineering. Overall, while improved solutions are unlikely to alleviate all concerns with HITL [67], additional research and innovative design and development could lead to substantially better outcomes. --- Research Challenges: Knowledge Worker Expectations of Impact We observed a significant gap between participants' expectations of how generative AI might change their field versus broader narratives of disruption offered by the media, technologists, and academics; participants generally took a more limited view of potential impact. This suggests the following research questions: #3 Why do some knowledge workers feel they will be largely unaffected by generative AI? Are there certain workers who feel more immune to changes from generative AI? Is this due to a failure of imagination? Perhaps developing and sharing industry-specific demos would be helpful. --- Research Challenges: Deskilling As technologies shift how work is performed, the necessary skill sets of existing and new professions also shift [1], creating a paradox of concurrent unemployment and labor shortages [12]. Consistent with this, participants expressed concern that generative AI might negatively affect the value and development of their skills.
This suggests the following research questions: #5 What training would benefit knowledge workers, to help them reskill for likely changes? How can we help knowledge workers adapt to the possibility of wide-ranging transformation when there is not yet clarity? Beyond prompt engineering, training for human review of AI output seems promising, as does development of critical thinking skills and the ability to manage complex, higher-level use cases. Additionally, conventional national policy responses to labor market changes often rely on traditional, secondary education [36], which is often inadequate against the pace of technological change [145]. Reskilling within individual organizations is a more cost-effective and worker-centric strategy to evolve with technological changes in knowledge work [75]. Future work could explore how to re- or up-skill workers in ways that minimize precarity. #6 How can we scaffold people into higher level positions, if entry-level pathways disappear? Who might be most disadvantaged by the potential erosion of entry-level positions? And in industries that rely on the development of foundational skills, how might the most talented experts arise if they do not work through entry-level tasks? What systems might support human development of necessary skills, even if generative AI can perform those tasks? --- Research Challenges: Dehumanization As a research community primarily studying the interface between humans and computer systems, understanding where these systems are uncomfortably and unacceptably encroaching on people's inherent sense of humanity should be a key research agenda. To explore dehumanization, we suggest two avenues of inquiry to address the serious concerns raised by our participants: #7 How can we learn about and protect tasks that bring people joy and meaning, but that generative AI can do as well as or better than humans?
We should understand not just which tasks can effectively be replaced by AI systems (as many economists focus on), but also which tasks inherently bring joy and reinforce humanity. By beginning to document these tasks that are meaningful but in some cases can be done as well, or even at higher quality or more efficiently, by AI systems, we can begin to design ways to protect these tasks, or at least make reasoned decisions about when to automate them. This may involve HCI interventions to design systems that highlight or reserve certain work or decisions for humans, or broader questions of regulation to protect certain kinds of work across society. While many utopian visions involve humans who are freed from work by AI and take up new, creative, artistic endeavors, losing the craft, joy, and humanity that exists in current work and problem-solving may have significant downsides. #8 How can we promote critical thinking and also prevent people from becoming lazy? Separate from the joy and meaning inherent in certain tasks, our participants also found innate value in performing reasoning and critical thinking. Participants were concerned that AI could reduce or remove tasks that challenge humans or force them to solve problems, thereby leading to a lazy society. While considerable work within the HCI community has explored how human and AI systems can collaborate for higher quality and more creative outcomes [34,76,99,146,153], this question encourages research into how these collaborations can be designed to support knowledge workers [63], as well as the design of systems that directly engage people in problem solving rather than simply providing answers. --- Research Challenges: Guardrails Responsible AI has gained significant traction, from published principles to more actionable and/or measurable strategies.
While some frequent issues, like explainability and fairness, move closer to having concrete remediations in system design, other broad social issues require additional work beyond engineering and system design, including harms assessments, stakeholder engagement, and an increased focus on community outcomes. Considering the social factors prioritized by participants in our study, current responsibility metrics are most relevant to disinformation, and have less to offer regarding deskilling, dehumanization, and disconnection. This leads us to the following questions:

#9 How can we design responsible AI and governance approaches to generative AI that embrace complex global dynamics? How can responsible AI more fully and holistically consider impact and harm? Findings from our study highlight the ways in which the impacts of generative AI must be considered with respect to the social forces that intersect with the conditions of its development, deployment, and use. Thus, an approach to harm or impact analysis that stops at model evaluations will be insufficient.

Table 5. Details about the location and participant makeup of each industry group. Gender was self-reported by participants from a range of options, including the option to self-describe. Recruitment was limited to participants who had been in their industry for two or more years.

--- C ARTIFACTS
Generative AI is expected to have transformative effects in multiple knowledge industries. To better understand how knowledge workers expect generative AI to affect their industries in the future, we conducted participatory research workshops for seven different industries, with a total of 54 participants across three US cities. We describe participants' expectations of generative AI's impact, including a dominant narrative that cut across the groups' discourse: participants largely envision generative AI as a tool to perform menial work, under human review. Participants do not generally anticipate the disruptive changes to knowledge industries currently projected in common media and academic narratives. Participants do, however, envision that generative AI may amplify four social forces currently shaping their industries: deskilling, dehumanization, disconnection, and disinformation. We describe these forces, and then provide additional detail regarding attitudes in specific knowledge industries. We conclude with a discussion of implications and research challenges for the HCI community.
--- Introduction

Social capital is increasingly being recognised as important for health and mental well-being. It is also increasingly being articulated as a useful concept for social work, particularly as it could assist in the development of new social interventions, which may support an individual's recovery from a mental health problem. Defined by Lin and others as the resources that are embedded within social networks, social capital can lead to greater occupational prestige, income and political influence when mobilised. This conception is an extension of social network theory and emphasises the importance of network members' resources, such as wealth, power and status, to an individual. It differs from communitarian notions of social capital, in that its benefits accrue to individuals rather than to groups. Lin suggested that individuals can anticipate returns from their investment in social relationships through four mechanisms, which may improve their mental health. First, the provision of expert information from network members about the most effective interventions, health behaviours or employment opportunities, for example, can promote recovery. Second, the power and authority of network members may exert a similar influence on health that individually possessed power and social ordering has on exposure and vulnerability to health risks. Alternatively, the material resources of network members, such as cheap loans to an individual with mental health problems whose own resources are depleted through unemployment or long-term sick leave, could help alleviate debt or provide new opportunities. Third, network members' resources may act as social credentials and, in this manner, could directly intervene in health and social care. Fourth, network members' resources can reinforce an individual's identification with a group and help maintain subjective social status, which may help promote mental health.
Social capital is unequally distributed within societies, by age, gender and health status. Furthermore, people experiencing long-term mental health problems or short-term psychological distress have access to less social capital than the general population. Inequalities in access to social capital arise through social network attrition associated with the 'pushing and pulling' away of friends and family members during the onset of severe mental health problems. Impaired social functioning due to symptoms of mental health problems, social rejection and discrimination, and strategies people employ to cope with stigma have all been shown to reduce network size. Additionally, an inverse correlation has been found between access to social capital and experiences of discrimination. Although some social care workers help people build relationships and strengthen their connections with their local community, this is afforded a low priority by many. There is good evidence that positive and supportive social relationships are associated with well-being and that practitioners should aim to enhance individuals' networks to provide this. However, there is limited evidence about social interventions which assist people with mental health problems to enhance their social networks. Informed by the Medical Research Council guidance on developing and evaluating complex interventions, this study is the first stage in the development of an intervention to enhance social networks. This study aimed to understand how practitioners help people recovering from psychosis to develop their social networks. In particular, we aimed to investigate how workers created new opportunities for social engagement; discussed individuals' concerns about creating and maintaining social relationships; supported the development of social relationships with resourceful people; and supported the process of investing in, and utilising, social capital held within networks.
--- Method

--- Design and setting

We conducted a qualitative study of practice in six health and social care agencies in England using combinative ethnographic methods. This involved collecting data on a range of social network enhancement activities in six diverse contexts to allow us to generate general principles of practice through comparisons of individual experiences:

• Agency A was an inner-city third sector project, which connected people with mental health problems with others with shared interests;
• Agency B was an inner-city social enterprise, which helped young people with psychosis to develop new relationships through playing sport;
• Agency C was a National Health Service (NHS) early intervention in psychosis service in a mixed catchment area of countryside and large towns;
• Agency D was a large third sector housing support agency with many services across both inner and outer city locations;
• Agency E was an inner-city NHS social inclusion team supporting the recovery of people with psychosis;
• Agency F was an NHS early intervention in psychosis service covering a large rural area.

Agencies were selected purposively to provide diverse practice contexts, which is an important consideration in social capital research. They all appeared to be engaged in social network enhancement activities on the basis of prior discussions with practitioners and managers. Agencies were selected from the NHS, voluntary and third sectors in rural and urban areas because it was anticipated that the context for practice played a significant role in shaping the relationships between practitioners and service users, and influencing the ability of workers to support individuals with their social networks.

--- Sample

Sample selection was guided by social capital theory.
We provided managers from each agency with a briefing paper containing information about the study, the definition of social capital used in the study and case studies which appeared to exemplify social network development. They initially identified a small number of workers whom they considered to be adept in the practice domains we aimed to explore. As familiarisation with the agencies and workers grew, the researcher recruited subsequent workers to participate in the study. The inclusion criterion was any worker in the participating teams or agencies who appeared skilled in connecting service users with other people. Managers did not place any restrictions on access to practitioners, which enabled us to purposively select workers in both data collection phases until data saturation was achieved in each of the practice areas. As researchers led the selection of participants, there was very little self-selection of practitioners into the study. We recruited a sample of people recovering from an episode of psychosis who were in their first or second engagement with mental health services to participate in the study. The only selection criterion we used was for participants to be receiving a service from the agency. Participants were largely between 16 and 35 years of age, as this was a typical inclusion criterion for early intervention in psychosis services, but the sample was not restricted by age or diagnosis. In the social inclusion projects, we also included people with other mental health problems, and those above the age of 35, to obtain their experience of innovative practice that may not have been captured elsewhere in the study. Initially, staff members gave information about the study to service users who may have been interested in participating.
However, as the researchers became more familiar with the services and their users, they were able to purposively select participants to obtain a diversity of experiences and perspectives on the research questions. Staff members allowed researchers full access to service users and played only a limited gatekeeping role in the NHS teams, where they advised on who might lack capacity to participate because of experiencing an acute episode of psychosis. Although it is possible that staff may have influenced sample selection, researchers were able to mitigate any bias this may have caused by recruiting directly from among service users. Sample sizes were guided by theoretical saturation across the practice domains of the study. Analysis was iterative and ongoing throughout the study, and we continued to recruit both workers and service users until themes were saturated and no new themes emerged in subsequent data collection. Recruitment was spread across the participating agencies in both data collection phases so that the samples were not disproportionately drawn from one agency. Although no target sample sizes were set, we ensured that participants broadly reflected the socio-demographic characteristics of workers and service users in the agencies they were recruited from.

--- Data collection

The fieldwork was conducted from November 2010 to March 2012 in two phases using a sequential iteration method, by which the researcher spent time within different teams, sites and settings in distinct phases of the data collection process. Ethnographic field methods of semi-structured interviews, unstructured interviews, non-participant observation, participant observation and informal discussions were used, with participants providing informed consent at each stage. Information sheets about the study were provided to practitioners and service users, with easy-read versions provided to participants with limited ability to understand written English.
Potential participants were provided with an opportunity to ask researchers questions about the study before giving their consent to participate. Participants' consent was sought for participation in each discrete element of the study. For example, if participants provided data in an interview, continued consent was not assumed and was separately obtained for involvement in subsequent observations. When the researcher participated in or observed groups or meetings, all service users and staff received prior notification that the researcher would like to use the opportunity for data collection and were asked for their informed consent for the researcher to participate. On occasions when staff or service users were not comfortable with a researcher observing a group or meeting, the observation did not go ahead, to minimise disruption. The first phase of the fieldwork began with the researcher interviewing workers using a semi-structured schedule to explore how they supported people to develop their social networks. Workers were asked to describe how they developed working relationships with service users, what they understood by resourcefulness and what they did to connect service users with resourceful people. Additionally, they were asked to provide examples from their recent practice of when they had successfully supported an individual to develop new, or maintain existing, social connections, and to discuss the barriers both they and service users faced during this process. Interviews typically lasted between 30 and 90 minutes. This was followed by observations of their practice to explore the extent to which their practice related to their prior descriptions. This involved becoming part of groups and undertaking activities, and shadowing workers to observe their interactions with individuals.
Researchers observed the 'soft skills' involved in nurturing social connections; the activities which workers used as the context for connecting people; and the support provided by workers to maintain existing relationships. Participant observations typically occurred within an agency or another community location where groups were facilitated. Non-participant observations typically occurred in service users' homes, within an agency meeting room or in a community location, according to where the worker and service user met. Observations ranged in duration from brief encounters of less than 30 minutes to spending a whole day with a worker or a group. As the focus of observations was on how workers developed inter-personal relationships between individuals or within groups, the intensity of the observation varied according to its context. The researcher developed strong relationships with the agencies and workers to permit access to multiple opportunities for observations, which allowed us to experience practice in as many situations as possible. This included regular groups based on arts and crafts, peer support or sport, for example, and one-off events such as concerts or exhibitions in which service users participated alongside other members of the local community. These strong relationships also allowed the triangulation of data from multiple sources, enhancing the reliability of our findings. The observations involved service users who met the inclusion criteria for the study, with their informed consent. Following the observation, each service user was invited to participate in a brief semi-structured interview to explore their experiences of this practice. Service users were asked to discuss their perceptions of the effectiveness of their worker in supporting them with their social connections and the barriers they faced in connecting with other people. Interviews typically lasted between 30 and 60 minutes. The sequence of worker interview → observation →
service user interview was not strictly followed to permit the pragmatic inclusion of data collection opportunities as they arose. During each 6-month phase of data collection, researchers spent time within agencies to understand the context of workers' practice and the structural constraints and enablers of social connections. This varied from a day a week to a whole week at a time, depending on opportunities for data collection. Field notes were made throughout this process, and informal conversations were documented to record interactions and ways of working that may be important for social network enhancement interventions. This process enabled researchers to develop a 'thick description' of workers' practice in their respective contexts. A second 6-month data collection phase was conducted following analysis of phase 1 data to ensure that data saturation was achieved. Phase 2 data collection focused on confirming emergent themes from phase 1; answering research questions which phase 1 data collection did not fully address; and obtaining a more in-depth understanding of the context of each agency. Similar data collection methods were used, but theories about intervention development from phase 1 were taken into the phase 2 field work to avoid repetition and allowed us to work towards data saturation. Additionally, individuals and workers with different points of view were sought to ensure that diverse opinions were represented. While phase 2 predominantly included new workers and service users, we interviewed eight workers and two service users who participated in phase 1 approximately 6 months following their first interview. This semi-structured interview focused on how the worker's practice had influenced the service user's ability to make and maintain relationships, and enhance their access to social capital. --- Data analysis Interviews were audio-recorded and transcribed in full for analysis. 
Contemporaneous field notes were made by the researcher throughout the data collection phases, particularly the participant and nonparticipant observations, to capture practice contexts. Data were analysed as an iterative process throughout data collection using the constant comparative method in grounded theory . Data were analysed at the individual level of service users and workers as the study aimed to explore individual practice. We explored the perspectives of worker-service user dyads where possible, although this did not extend to a full social network approach to analysis. NVivo v.10 was used to assist tasks of coding, retrieving and comparing data. Analysis involved a detailed reading and re-reading of the text to identify initial themes in NVivo, which were refined through comparisons of text subsumed under each thematic category. This initial process was undertaken by two researchers working independently to enhance reliability and minimise the potential for bias. An initial coding framework was developed in NVivo and used as a tool to guide questions within subsequent fieldwork. It was used as a lens through which to observe the agencies and individuals that they worked with. Coding of subsequent field work within NVivo proceeded along two different dimensions. First, topics were assigned to the following themes: 'agency', 'worker', 'individual' or 'practice'. 'Practice' referred to actions taken by workers which appeared to help an individual enhance their social networks. Practice is conceptualised here as the exercise of a profession or what practitioners do in the course of performing their duties. In contrast, 'worker' encompassed the personal qualities of a worker, or examples of practice that were specific to one worker in particular. 'Agency' referred to the organisation as a whole, as well as other organisations that participants had experienced in the past. The 'individual' domain included text about the users of the service. 
Text referring to more than one theme was coded multiple times. As the focus of this study was on professional practice, we did not separately code the social processes of forming and maintaining social relationships which, although important, required a separate analysis. Second, a grounded approach was used to code data by subject in more detail. Data were analysed sequentially from each agency involved in the study, first from the perspective of workers and then from each agency's service users. At this point, the coding framework in NVivo was refined to combine similar nodes, reducing the node count from over 100 codes to 13 overarching themes. Triangulation was achieved in the analysis through comparison of worker and service user perspectives within each agency, and between the two phases of data collection. Data from phase 2 interviews with the 10 people who had been interviewed in phase 1 were integrated into the analysis to explore the outcomes of workers' practice and triangulate findings between data collection phases. Finally, six transcripts were picked at random and coded according to this new coding framework in NVivo, to ensure that no themes had been missed. No new themes emerged from the data, so it was assumed that saturation had been reached. The analysis was largely conducted by the field worker, although the principal investigator was involved throughout in discussing coding structures and thematic categories to further enhance reliability. Ethical approval for the study was provided by the NW London NHS Research Ethics Committee 2.

--- Results

In total, 73 workers and volunteers participated in the study. The gender, ethnicity and occupational profiles of the sample were typical of the mental health workforce in England, although social care staff were overrepresented because of the social interventions being studied.
Also, there was a higher proportion of people aged under 40 in the sample, possibly because those working in early intervention in psychosis services are typically younger. A total of 51 service users participated in the study. The predominance of men, younger age groups and an overrepresentation of black ethnic groups in our sample was typical of the epidemiology of psychosis in England and the target population for this study. The presentation of findings starts with the skills and attitudes of the worker, which appear necessary for social network enhancement activities, and is followed by an examination of the processes associated with connecting people. The role of the agency is considered separately as it provides the context for the practice explored in this paper. Finally, the barriers to social network development are explored. Findings are presented as the shared perspectives and experiences of service users and workers, although differing opinions are highlighted where relevant.

--- Worker skills, attitudes and roles

The attitude of the worker towards social network enhancement appeared central to the likely success of interventions. As in models of recovery wherein practitioners can play a key role in fostering hope, workers with a 'can-do attitude' appeared to be more effective. This attribute encompassed the capacity to do things quickly, flexibly, confidently and enthusiastically, and to enjoy the job. Relationship-building skills appeared to be equally important. These included having a sense of humour and being friendly. Body language and communication skills were also key, as were patience, time to be supportive and reflective, and self-awareness and insight. A person-centred approach and the centrality of the service user appeared to be important, as intervention success appeared to depend on listening and responding to what an individual was interested in and wanted to achieve, rather than being designed by the service.
However, positive attitudes occasionally needed to be more actively conveyed to an individual to promote that person's confidence, responsibility, hope and empowerment: 'Even if the situation might be quite dire and quite difficult, it's to always find something positive or something that they can aim for, you can aim for together, to give somebody hope'. An absence of these 'soft skills' in our data was notable, suggesting a potential training need for workers. Although context-dependent, clarity about professional boundaries and roles appeared important, allowing workers to act within them accordingly and not feel too constrained by them. However, a clear difference emerged between the NHS and third sector agencies on this point, with a greater blurring of roles in the latter. More equal relationships between a worker and a service user appeared to be important in supporting someone to develop their network. Respect, a shared sense of identity, honesty and trust, and empathy from shared lived experience appeared important elements of equal relationships. In NHS mental health services, however, where professionals frequently held considerable power, including the means to compulsorily detain individuals in hospital, this was often problematic.

--- Processes involved in connecting people

The exposure of a service user to new ideas appeared to be a key element in the process of identifying opportunities for connecting people and developing social networks. To do this effectively, a worker needed to be continuously thinking about potential opportunities as a component of everyday practice, and actively identifying opportunities when they arose. For a service user to be more likely to engage with these new ideas, it appeared to help if a worker shared their personal experiences and resources. Once a relationship had been formed and new ideas discussed, the worker and service user set goals together.
Successful goals appeared to be tangible and realistic, articulated in clear steps which did not overwhelm the individual. The creation of new networks and relationships in the course of attaining these goals provided the context for the creation of social capital. Workers assisted by introducing service users either to new people with similar interests within the agency or to resourceful people outside the agency. Both service users and workers identified that workers needed to develop new contacts to help facilitate this process. Workers identified new contacts by word of mouth, as well as by networking both within and outside their organisation. Engagement with local communities appeared to be at the heart of this process. This active process corresponded with the less tangible, attitudinal element of finding out about new ideas during their daily work from colleagues and their networks. When fostering and nurturing enthusiasm for the process appeared problematic, engaging in activities appeared to be an effective approach to connecting people. The activity was either provided by the agency or was one the worker and the service user decided upon together. However, the motivation to attend an activity, group or scheme was important, as was the self-awareness and existing knowledge of a service user, which appeared to increase self-confidence. Building a service user's skills and providing them with the opportunity to use or share them appeared effective tools in connecting people. Additionally, practical support from the worker, such as help with a CV or job applications, or with managing personal finances, was perceived by service users as important in helping them to achieve their goals. Attending activities or interviews together and introducing a service user to a new environment were also seen as potentially useful, as this gave them confidence to try new things.
By gaining new skills and confidence facilitated by workers and the agency, and taking responsibility for working towards their goals, service users who tried new activities formed new social ties in their local community or community of interest. Graded exposure techniques were used quite frequently. Service users who were provided with flexible ongoing support reported feeling more secure than if they were left to attend new activities alone, particularly those who lacked in confidence or were fearful of discrimination because of their mental health problem. --- Role of the agency The extent to which the agency engaged with its local community appeared to influence its ability to develop service users' personal networks. Health and social care services facilitated bonding social capital by linking homogeneous individuals in shared activities, particularly when they provided a nurturing, friendly environment, which did not feel too 'clinical'. Arguably, more importantly, however, they supported the formation of bridging social capital by introducing individuals to training, employment or other opportunities, which connected them to heterogeneous others. Additionally, strong connections within an agency appeared to be associated with the sharing of information and fostering of external connections . Third sector agencies embedded within local communities which used non-stigmatised locations appeared more successful in facilitating social connections, although this needs to be empirically verified. Many participants discussed, or were observed developing, relationships with other users of the service they were receiving. Friendships and relationships formed naturally and we observed only a few instances when workers intervened to support the development of an individual's social skills to facilitate this process. Teamwork, social networking and undertaking shared activities based on shared experiences aided the formation of relationships. 
Some individuals found that these 'safe' interactions within the agency helped their confidence in forming other relationships externally. However, several service users described the agency as a place which they went to for its facilities and to see their worker, rather than as a place to make friends. Although we observed the development of social connections within agencies, this paper did not specifically explore the role of peer support.

--- Barriers to network development

Barriers to developing social networks appeared prominent throughout the fieldwork and could be categorised as attitudinal or contextual. Stigma of mental health problems and negative attitudes of others were prevalent barriers. Contextual barriers appeared to be shared by both workers and service users, mostly characterised by a lack of resources such as money, transport, knowledge, time or support. Barriers were not perceived as insurmountable by the more positive workers, and service users who acknowledged them explored ways around them and sought to overcome them. Although barriers prevented individuals moving continuously forward in a linear journey, many achieved their goals by other means or took longer than originally planned.

--- Discussion

This is the first study to explore the processes involved in health and social care agencies which are supporting people recovering from an episode of acute mental illness to develop their social networks, although its findings mirror ethnographic studies of similar contexts. Our findings suggest that shifting the focus of clinicians away from deficits in the social functioning of people with psychosis to identifying assets and shared interests among them encourages social engagement. This study supports Perry's findings that changes in social environments impact the social networks of people with severe mental health problems.
It appears that providing meaningful opportunities for isolated people to meet others who share their interests can support the development of their social networks and improve their access to social capital. This study is limited to the experience of practice within six agencies. It is possible that other agencies were working in other potentially more effective ways, but we were unable to capture this within this study. It is also possible that our observations and interview questions were unduly shaped by our preconceptions of what social network enhancement practice might look like. The initial nomination of workers by managers may have skewed the sample, but the inclusion of a large number of people in this qualitative study helped to counter sampling bias. However, the possibility remains that some perspectives were not adequately captured. For example, a larger number of workers participated than service users, which may have biased our findings towards the perspectives of the former. People who identified themselves as carers or family members of the service users were largely absent from the sample. This was not intentional, but was possibly caused by our recruitment strategy, which focused on agencies and not families. Also, many of the study participants did not have carers or family members involved in their care or support, although the importance of family members was emphasised by those who did. Our approach to data collection provided us with simultaneous insights into multiple practice contexts, which enabled us to test emerging theories in different sites. However, the study's focus on a small number of agencies and its iterative nature possibly made it difficult to directly compare practice between the participating agencies. A survey approach would have obtained data from a larger number of sites facilitating comparison between them, although it would have failed to capture rich contextual data. 
A further limitation of our methodology was a bias towards activities which may help generate social capital rather than those which may assist in its mobilisation. This may have resulted from a focus on the development of social connections rather than the utilisation of those connections, as workers appeared more familiar with the former than the latter. Additionally, as the focus of this study was on professional practice, we did not separately analyse the social processes of forming and maintaining social relationships, which, although important, require separate consideration. Agency A showed just how well connected a group of workers could be. Its relatively small number of staff had a near-encyclopaedic knowledge of resources within the local community, achieved very simply by asking, enquiring and talking to people, and by inviting people from the 'outside world' in for tea and to use the centre. All of this prevented the agency from fostering a 'them and us' mentality. This study found that agencies with stronger and more numerous connections to other community projects and networks appeared better able to connect service users with local opportunities. However, inter-agency working was not a significant focus of this study and further research is required to investigate its effectiveness in supporting the development of service users' networks. --- Conclusions While the data in this study cannot be generalised to the whole population of people recovering from an episode of psychosis, our sample of workers and service users recruited from multiple contexts has helped us to identify practice components which appear effective. The modelling of these components will assist us in developing an intervention framework which can be used to support workers in improving outcomes for service users.
Intervention modelling will also permit replication of good practice and provide the basis for subsequent evaluation to help develop an evidence base for this neglected aspect of health and social care practice.
• Social capital, held within networks, is recognised as being important to mental health.
• People with mental health problems have access to less social capital than the general population.
• There are no practice models or frameworks to assist practitioners to help people develop their social networks and enhance their access to social capital.
• Health and social care agencies can help people enhance their access to social capital via social networks.
• Workers can help service users enhance their social connections by supporting them to engage in new activities within their communities.
• A person-centred approach which builds on service users' strengths appears to be most effective.
around 140,000, as a result of direct and indirect effects. On the fourth anniversary of the bombing, the architect Tange Kenzo was appointed to carry out his plan for the Hiroshima Peace Memorial Park (HPMP) in the Nakajima district, close to the explosion's hypocentre. Dedicated to the memories of the victims and survivors of the bomb, this park -home to iconic landmarks such as the ruins of the A-bomb Dome and the Hiroshima Peace Memorial Museum- has become a universal symbol of peace for which the city is known today, drawing over a million visitors each year. Despite this, when Tange asked himself "what crosses people's minds when they stand in the park?" he answered that "it might vary from individual to individual". 75 years later, Tange's question lives on, especially as fears of a new nuclear escalation loom once again. This article presents a detailed case study of experiencing and interpreting the Hiroshima memorial. Based on a previous line of research, our fieldwork at the HPMP sets out to elaborate Tange's answer by asking: How does the HPMP's design feed into the way in which people experience the site? What elements stand out and why? What meaning-making processes do these elements elicit during the visit? These are relevant questions after 75 years marked by an overly idealistic symbolism of peace that has rendered a monolithic character for the city of Hiroshima, thus silencing a number of controversies and contradictions about the way the A-bomb has been officially remembered. Whereas previous research on memorials has mainly focused either on architectural features or on observations of people on-site, our focus is on the situated, ongoing interaction between individuals and memorial sites from the participant's own perspective. In line with new methodological approaches, such as sensory and video ethnography, we are interested in studying visitors' contextualized meanings and feelings at these sites, including their atmospheric qualities.
To this end, we propose an innovative methodology based on the use of subjective cameras, particularly tailored to study the possibilities offered by different memorials. In what follows, we first outline some key features of the Hiroshima Peace Memorial Park, controversies surrounding it, and our theoretical framework for understanding visitors' meaning-making. Second, we provide an in-depth case study of a person's trajectory of experience through the memorial. This analysis will focus on visual perception of borders and intersections on the one hand, and conceptual distinctions on the other. --- Politics of the Hiroshima Peace Memorial Park: A Brief History of Ambivalence Memorial sites involve a process of symbol formation aimed at commemorating collective events, including loss and trauma. Conceived as spaces of shared memory, memorials provide a physical site to express and emotionally connect with collective loss, which helps individuals and societies to reinterpret the past and, in so doing, construct new orientations to the future. As cultural artefacts, memorials have undergone multiple changes throughout history. Traumatic events in the 20th century -such as the two World Wars, Auschwitz, or Hiroshima and Nagasaki- disrupted the functionality of traditional memorials, typically characterised by a vertical and affirmative style that features conventional symbols and figurative representations of heroes and martyrs. The forms in which collective loss had so far been represented and socially remembered were called into question and became insufficient to memorialize events felt to be unrepresentable and unthinkable. In the absence of a narrative capable of conferring a clear and specific meaning on the past, memorials in the second half of the last century increasingly turned to an abstract, non-representational and non-figurative style. This 'counter' memorial form tends to invite people to actively search for their own meaning of the site.
Bull & Hansen associate the abstractness of counter memorials -such as Maya Lin's famous Vietnam Veterans Memorial in Washington D.C.- with what they call the cosmopolitan mode of remembering, as opposed to the antagonistic mode of remembering typical of traditional memorials. Whereas the latter tends to represent the past in terms of moral categories -'good' vs. 'evil'- applied to specific groups, the abstract style of the former, "with its emphasis on the unknowability and the unspeakability of traumatic events", aims to transcend historical particularism with its focus on human suffering. As such, moral categories no longer refer to concrete groups but to universal values, such as peace vs. war. However, according to these authors, the victim-centred approach of counter memorials tends to hide the memory politics of the commemorated event under a seeming social consensus. In the case of the HPMP, the promotion of peace as a universal value, and as a response to the global threat to human civilization posed by nuclear weapons, is meant to transcend political and geographical frontiers. This position, referred to as nuclear universalism by Yoneyama, implies, according to this author, remembering Hiroshima's bombing "from the transcendent and anonymous position of humanity". This mode of remembering is eloquently illustrated by the controversy over the epitaph etched on the Cenotaph situated at the heart of the HPMP. Covered by an arch-shaped monument representing a shelter for the victims' souls, the epitaph reads in its official English translation, "Let all the souls here rest in peace, for we shall not repeat the evil". However, in the original Japanese, the second sentence lacks a grammatical subject, thus leaving responsibility for the evil to a subject-less humanity.
This controversy around the epitaph shows how, although the original purpose behind the HPMP was to create a consensual urban space around the notion of peace, "it eventually became a site for conflict, an ambivalent place". As Yoneyama points out, this ambivalence can be found behind the tendency to conflate two seemingly opposing signs, "A-bomb" and "Peace" -a tendency apparently encouraged by the Allied, then occupying, forces in Japan. Thus, "the park is named Peace Memorial Park, rather than Atom Bomb Memorial Park". According to this author, the notion of peace in this context mainly refers to post-war recovery, a period associated with a bright future of progress or, as Schäfer puts it, a period "defined in accordance to what it was not", that is, in opposition to a past of war and destruction. In this sense, in Schäfer's words, "probably no war memory reflects the dualism of present and past, peace and war so fiercely as the memories of the first atomic bombing -Hiroshima, 6 August 1945". Another example of this dualism and ambivalent tension can be found in Hiroshima's post-war image as a peace-loving and victimized city vis-à-vis its prior role as a flourishing military centre in Imperial Japan and, more particularly, its contribution to the war effort during the colonial expansion in Asia and World War II. The tension between contrasting opposites that cuts through the A-bomb's memory politics materially translates into the spatial politics of Hiroshima's urban renewal. According to Yoneyama, this dualism is mainly present in the way "different urban topographies […] are defined by dissonant temporalities", whereby the city's dark past of war and death associated with the HPMP stands in stark contrast with a bright, cheerful and weightless celebration of progress.
As the author goes on to say, "a large part of the production of Hiroshima's "bright" new memory-scape involved the clearing away of physical reminders of the war and atomic destruction". This generated a debate over the fate of the ruins and those architectural remains that withstood the A-bomb: whether to demolish them in the interest of the city's economic recovery or to preserve them, without any utilitarian function, as relics of a painful past. Unlike buildings such as the Nippon Bank Building or the Red Cross Hospital, rehabilitated after the war, the iconic Atom Bomb Dome became a musealised object, which stands today as a material example of the duality that looms behind the memories of the A-bomb in Hiroshima. Visible from the memorial's Cenotaph and with a central role in Tange's design for the HPMP, the skeletal remains of the former Industry Promotion Hall stand in stark contrast to the modern city background. However, in addition to bearing a fragment of a past fraught with death and destruction -which contrasts with Hiroshima's post-war rebirth- the ruins of the A-Bomb Dome also bear witness to the city's pre-war times by showing a fragment of "a quintessential sign of Japan's early-twentieth-century imperial modernity". Tensions between striving for future recovery and the unbearable weight of the past are always present in debates around memorials or the remains left after collective traumas, just as memory politics are inevitably tied to the dialectics of remembering and forgetting. In the case of the A-bomb memories, the apparent consensus behind the HPMP as a universal symbol of peace seems to obscure a certain underlying tension, in what Yoneyama deems an effort at taming the city's memory-scape or repressing some of its painful memories.
These tensions looming behind the representation of A-bomb memories in the HPMP -tensions between past and future, war and peace, death and life, creation and destruction, perpetrators and victims, memory and forgetting- can be understood through the notion of themata. Themata are mutually interdependent antinomies that have been thematised through history. For instance, the oppositional pair of yin/yang is used to account for opposing and interdependent forces in Chinese cosmology. The concept of themata originally comes from the philosophy of science, where Holton used it to look at the basic distinctions, such as continuity/discontinuity, out of which scientific theories are constructed -for example, contemporary theories of atoms draw on the same themata that the pre-Socratic philosopher Democritus used in Ancient Greece. From the framework of social representations theory, themata help to understand the dialogical dynamics of common sense thinking and its embeddedness in history. Thus, due to a crisis or an unexpected event, some implicit dichotomies in our common sense become themata by being problematised and exposed to social attention and public debate. For instance, the notions of justice/injustice are dialogically discussed and reconstructed in the context of the Israeli-Palestinian conflict, just as concepts of life/death are within organ donation and transplantation debates, to mention but a few examples. This approach can be applied to understand the tensions that characterise the way in which the A-bomb has been socially remembered, debated, and represented in post-war Hiroshima. Following Moscovici, we could say that themata act as conceptual coat hangers that provide socially generated ways of understanding the A-bomb and its memory.
In this way, individuals may go from implicitly using these antinomies -thus embracing themata in their discourse without being fully aware- to problematising and reflecting on them, thereby expressing their "effort to understand and appropriate meaning". As we will show in the study that follows, experiencing the HPMP gives rise to an interpretation of the site in terms of a constant tension between opposites, thus exposing some of the tensions behind the monolithic idea of peace this memorial officially represents. --- Methodological Approach Studying how people experience memorials requires going from traditional monomodal approaches -based on verbal data detached from contextualised activity- to processual approaches capable of capturing individuals' multi-modal forms of experience and meaning-making as part of a wider set of movements and interactions in space. One of the most recent additions to this area has been the use of subjective cameras (subcams), which record individuals' ongoing experience from a first-person perspective, in both video and audio. Subcams, in combination with interviews, offer one of the most contextualised, socio-material, holistic, multisensory, and process-focused data collection devices currently available. In the particular context of the fieldwork at the HPMP, the use of the subjective camera was combined with a post-visit interview in which the subcam footage was utilized as a video-elicitation tool, in line with other video ethnographic approaches. The fieldwork was conducted on 3 December 2021 on the occasion of a research stay by the first author at Kyushu University, hosted by Prof. Minami Hirofumi and funded by the Japan Society for the Promotion of Science. Drawing on previous works by the first and the third authors, the study was planned in collaboration with Prof.
Minami, with the second author -a student of the master's degree in Kansei Science at the School of Integrated Frontier Sciences, Kyushu University- assuming the role of single participant and interviewee, and the first author the role of interviewer. At the time of the study, the second author was completing his master's thesis under Prof. Minami's supervision on the topic of Genius Loci -i.e., the spirit of a place according to the ancient Roman tradition. Due to the common interest of the first and second authors in studying the atmosphere of places, Prof. Minami suggested the latter as a possible participant in the study. The only instructions given to the second author were to walk freely through the memorial alone with the subcam, thus giving him total autonomy to experience the place. For instance, he was also free to interact with other visitors, which the subcam would have audio-recorded, although he did not talk to anyone during his visit. After the visit, the resulting video recordings were replayed to the second author (hereafter SA) in a post-visit interview conducted by the first author (hereafter FA) at an off-site location. While watching the subcam video of his visit, the SA was able to comment on the experience by reflecting on his affective engagement with the environment and the meanings and associations afforded by some of its elements. As became evident in our previous fieldwork conducted at the Memorial of the Murdered Jews of Europe in Berlin, the post-visit interview is a necessary complement to subcam data because not everything in a person's visual field is registered or actively attended to as a meaningful component of their experience. Furthermore, in human perception, what becomes a focal point of experience is often symbolically elaborated in reference to the cultural world to which the person belongs as well as their personal history.
This signals a shift from direct to indirect perception of the environment, expressed in the participant's associations and reflections about the site during the interview. The post-walk interview was transcribed and thematically coded by the FA. Themes associated with different strings of antinomies regarding the HPMP quickly emerged as key to the SA's experience of the memorial. For each quotation regarding each antinomy, we identified the corresponding moment in the subcam video and took a screenshot of it. Screenshots from the subcam video will be shown to illustrate features of the memorial from the SA's standpoint, and thereby highlight contextual and experiential qualities powerfully captured through visual methods. Both when planning the research and after the data collection at the HPMP, specific ethical challenges posed by wearable cameras were discussed and addressed, particularly those involving minimising the scale and scope of data featuring third parties, and promoting the participant's control over visual data, including access to or withdrawal of the data as part of the ongoing process of consent. Furthermore, in line with what Sumartojo & Pink highlight in their studies, the fact that the FA also visited the memorial provided him with a basis upon which to empathetically discuss the experience with the SA. --- A Case Study of Visitor Experience in Four Phases On December 3, 2021, the FA and the SA -accompanied by Prof. Minami Hirofumi from Kyushu University- took a Shinkansen from Hakata train station in Fukuoka bound for Hiroshima. After checking in at a hotel nearby, we took the Peace Boulevard to start the visit to the HPMP from the southern side. This decision was advised by Professor Minami, as both the FA and the SA had already visited the memorial some years ago starting from the north side, where the A-bomb Dome is located. Once we crossed the bridge over the Motoyasu river, the SA began his visit alone, equipped with the subcam glasses.
The analysis that follows focuses on three moments of the visit around which the subsequent post-visit interview revolved: the memorial entrance, the Cenotaph and the A-Dome. In the final subsection we include the analysis of the SA's visit to the Hiroshima National Peace Memorial Hall for the Atomic Bomb Victims, which took place the following day without the subcam glasses. 1) Entering the memorial: a transition from the mundane to the sacred. When the FA and the SA parted ways upon crossing the bridge, the latter initiated the visit by deciding not to go straight to the memorial's main entrance -where the Gate of Peace is. He took, instead, the path along the riverside bordering the memorial. While watching the subcam video featuring the tree-lined path along the river, the SA explains his decision to delay the encounter with the memorial's central area: "Coming to the park from this way, mmm … it is quiet at first. So… your emotions have time to attune to the site, instead of going directly to the memorial's centre. In this way you are more in control… you do the visit at your own pace". He then goes on to compare entering the memorial with entering a sacred space, for which one needs to go through a series of rituals to purify oneself in order to be prepared both mentally and emotionally: "When you go to a Japanese shrine you have to wash your hands […] It is like going from the everyday life world to a religious or sacred area". In her Purity and Danger, Douglas argues that purification rites signal a border. Uncleanliness is a relative notion: shoes on a clean carpet are dirty but not outside. The ritual of taking off shoes to enter the house is indicative of moving through a border. As we can observe at this initial stage of the visit, the location of the memorial at the crossroads of two rivers creates a kind of border zone separating the city's everyday life from a different area encapsulated within the HPMP.
As a result, the path along the river is perceived as an intersection point between these two spaces, as a liminal zone in which to perform certain rites of passage to transit between two worlds charged with different spiritual and affective atmospheres. The perception of this border zone -which points to the city's different "urban topographies", as noted by Yoneyama- marks an inside and an outside, thereby prompting the SA to interpret his experience through the antinomy sacred/mundane, which confers a value distinction between the inside and the outside. However, this value distinction between the inside of the HPMP's area and the outside is reversed with the use of another pair of opposites, which comes up immediately after in the interview. These antinomies emerge when the SA explains why he did not want to turn his gaze to the ruins of the A-Dome building -whose presence can be sensed in the background of the image- while walking along the river. In his words we can see a clear opposition between war, as a man-made artificial thing represented by the image of the A-bomb Dome building, and the natural surroundings of the riverside path. A few minutes later, the video shows the SA stopping on the way and turning in the direction of the central area of the memorial. In watching these images, he comments: "Here I thought it was time to put the real thing in front of me". 2) 'Like a time machine': between past and future, death and life. While watching the video of himself heading towards the area where the Peace Memorial Museum is located, the SA comments on the sense of anticipation he was feeling at the upcoming appearance of the Cenotaph in the central part of the HPMP. This sense of expectation points to the emergent and flowing nature of atmospheres as a sensory quality of experience. As Sumartojo & Pink point out, the emergence of atmospheres "entails both a mode of experiencing the present moment, and an anticipatory mode relating to what might come next and the feelings that this might involve".
In the SA's own words: "I know that I will see the monument and the A-dome very soon, so I am very excited". Some seconds later, he describes his first reaction upon seeing the Cenotaph and the ruins of the A-Dome looming in the background of the image: "When I turn right, at this moment I see the…". The video shows how he stands contemplating the Cenotaph, with the A-Dome in the background, from below the Peace Memorial Museum for almost two minutes. While watching this perspective on the screen, the SA highlights a sense of convergence between the trees and the Cenotaph pointing towards the A-Dome. He says: "This moment is the most touchable one, when I see the combination of the monument and the dome". As we will see, this convergence will lead the Cenotaph to be perceived as an intersection point between different opposing meanings. This connection with the site is further reinforced by the happy atmosphere he experiences there due to the presence of people enjoying the park and children offering flowers at the Cenotaph, something the SA associates with life, hope and the future. In his own words: "The contrast between the history of 70 years ago and the presence of the children at this moment is striking […] You see a lot of children here offering flowers to pray for that people [who died]. I feel it is a nice moment because history and children make a very interesting combination. Children are a symbol of life, and, on the other side, you have the dead people. […] I also see a lot of people walking their dog or just sitting and talking with friends even though this is a site with a very sad history. But still, people use this space to chat. This creates a happy atmosphere". This fragment shows how people co-create atmospheres through their actions.
In this case, we can see how the atmosphere co-created by children offering flowers at the Cenotaph leads the SA to interpret the site through a string of connected antinomies, opposing life -symbolically represented by the children- to the dead remembered through the monument, and the place's sad past to the happy atmosphere the SA is experiencing in the present. Interestingly, there is a moment in the subcam video where the SA appears taking a picture of the children around the Cenotaph in an attempt to capture and make sense of that perceived dichotomy between the memorial, representing the dead and the past, and the children, representing life and the future. In the SA's words: "I wanted to put the children and the monument in the same photo, combining these two elements [children and Cenotaph]". Capturing these antinomies in the picture becomes a meaning-making resource, as it enables the scaffolding of different reflections anchored in the SA's situated experience. Echoing the social debate around the HPMP alluded to in a previous section, the SA argues: "I don't think the monument should just tell people the sad things, but I think that… the monument should give people the hope for the future". However, this tension between the hope for the future and the sadness of the past is reframed and given a personal meaning by turning it into an imaginary intergenerational dialogue. This interpretation seems to be elicited by the very disposition of the arch-shaped Cenotaph framing the A-Dome in the background, thereby acting as a symbolic and special intersection point between the past and the present, between the victims and the participant. As we can see in the following excerpt: I felt that... if you stand in front of the monument [points to the Cenotaph] and the A-dome, is like you are talking to your grandfather or your grandmother, listening to stories from them.
It is like being with your family. This imaginary dialogue was further supported by some sensory elements of the memorial, such as the fire of the Peace Flame. Symbolising the sea of fire that the city became after the bomb, the constant movement and regeneration of the flames leads this element to be perceived as one embracing both life and death, somehow signalling the present absence of the A-bomb victims. In watching the subcam video showing the Peace Flame framed by the Cenotaph, the SA comments: "Especially when I see the fire, which is constantly moving… I felt like something was alive. So, I imagined there were thousands of people just right there telling something to me, as if they were my grandpa or my grandma […] I got very moved by it". Finally, while watching the video featuring the A-Dome in the background framed by the Cenotaph, the SA highlights once more the convergence between the Cenotaph and the A-Dome forming a powerful symbolic axis. It is precisely in re-experiencing this axial view through the video that the SA resorts to a metaphorisation of the place. In the next excerpt we can see how the analogy between the place and a time machine summarises, to some extent, the SA's experience of the memorial as a site that affords transiting between opposites, between the present and the past, between life and death. Here, it is important to highlight the situated emergence of this metaphor, triggered by the memorial layout and the movements of the SA in the memorial space. The very position and perspective from which the SA is looking at the A-Dome through the arch-shaped form of the Cenotaph seems to create an intersection point affording a visual connection and a connection of ideas. The monument [the Cenotaph] is like an arch, so you can see through this monument and look at the dome.
From this perspective, I can feel a gap in time; that side [pointing to the background of the image where the A-dome rears its head] is the past and this side [pointing to the image featuring the Cenotaph in the foreground] is the present. The interview goes on as the subcam video progresses, showing images of the SA walking along the Peace Pond in the direction of the Children's Peace Monument without paying much attention to it. After leaving that monument behind, he makes his way towards the twisted Peace Clock Tower, which marks the exact time when the bomb was dropped on August 6, thereby bringing the tension between the present and the past -made present through the hands of the clock- to the fore once again. From there, the video shows the SA crossing the Aioi Bridge, taken as the aiming point for the A-bomb due to its easily recognisable T-shaped structure; the bridge managed to stand after the explosion and, following some repair works, remained in service until being replaced by a new replica in 1983. The A-Dome then makes its appearance on screen. Watching the sunset light cast on the A-Dome, the SA refers to the sacred atmosphere conveyed by these ruins, thus picking up again the duality between the sacred meaning of the HPMP and the mundane life outside the memorial. To this distinction the SA adds the stark contrast generated by "the building in ruins surrounded by more modern architecture". This is a contrast -between the old and the new, between the city's past and its modern present- that, in the words of the SA, makes the A-Dome "stand out more from its surroundings". Furthermore, the objectification of the past through the A-Dome's ruins creates a particular atmosphere that feeds into the SA's imagination when he begins to talk about the people working in the former Industry Promotion Hall: "The people in the building back in 1945 maybe they were busy going about their business and had no time to escape and then vanished in one second".
As Beckstead points out, the past evoked by ruins affords individuals "to be affectively drawn into the setting in the here-and-now and to go beyond it and to imagine what life was like for those who lived, worked, loved in, around, or near the building that has become a ruin". Drawing on Niels Bohr's observations during his visit to Hamlet's castle in Kronborg, Denmark, Beckstead stresses that ruins are more than material objects composed of stones; they are places saturated with meaning. This assertion is behind the way in which the SA reflects on the meaning of the A-Dome while watching this building through the subcam video. At first, he remarks that "the building is not as strong as when seen from the Cenotaph perspective. When seen from that perspective you see a monument, but here I… just see a building in its ruins". However, as he watches the recorded images of the ruins in more detail (Fig. 7: screenshot from the subcam video when the SA began to engage in a more detailed description of the A-Dome's material aspects), he starts paying attention to the building's specific features: "You can see how the windows are all blown out and the iron has been twisted, so you can feel the strong wind that came after the bomb […] It's like a sculpture shaped by the wind of the A-bomb […] This makes this place to have a very special atmosphere". And from that point, he begins to reflect on the complexity of the building's entrails: "Looking at the A-Dome from far away it becomes a symbol, but if you come closer, it gains a more concrete meaning, it becomes something more fragmented and complex. It is not just one thing, one symbol, but a lot of things […] It acquires a more three-dimensional nature, containing a lot of information: the twisted pillars, the windows blown out, …". As we can observe, the evocative power of the ruins invites the SA to engage in a meaning-making process as the interview develops.
Guided first by the abstract meaning traditionally ascribed to the A-Dome as a symbol of peace, a closer look at the building -this time through the recorded images of the subcam video- offers the SA the opportunity to reflect on the richness of its materiality. This case is an illustrative example of schematisation and pleromatisation, two complementary processes mediating aesthetic perception. According to Valsiner, schematisation works by reducing the complexity of experience to abstract categories and symbols, thus leading to meaning fixation in the flow of experience. Conversely, "the homogenising role of language symbols is counter-acted by the heterogenising role of [pleromata]", namely "hyper-rich depictions of reality that stand for some other realities". Through pleromatisation new meanings can be created beyond the categorising function of language symbols, something that might be emerging in the post-visit interview. Thus, the detailed description of the A-Dome seems to contribute to the emergence of meanings beyond its official meaning as a symbol of peace. More specifically, in focusing on certain aspects of its materiality, the building gains complexity as an intersection point between the idea of peace -officially ascribed to the memorial- and the sense of destruction conveyed through the twisted pillars, the windows blown out, and so on.
4) Afterward: from the weight of the past to the lightness of the future
The next day, the FA and the SA went to the Hiroshima National Peace Memorial Hall for the Atomic Bomb Victims, designed by Kenzo Tange and opened in 2002. We spent some time at the Hall of Remembrance, situated on the lower floor. This circular hall features a 360-degree panoramic picture, taken right after the bombing from the hypocentre, made of 140,000 tiles representing the estimated number of victims who had died by the end of 1945.
After the visit, carried out without wearing the subcam glasses, the SA wrote down the following impressions as part of his Master's thesis: "Everything was solemn and peaceful, and my steps were heavy. In addition, the entire site was built underground. In my mind's eye I realised how deep the history of this place is buried. There was a tension in the air that contrasted with the atmosphere outside. Seeing the names of the destroyed towns inscribed on the walls gave me a heavy feeling. I was stuck in the underground experience for the rest of the day, even when I walked out of the venue. It was at this point that my view of the city changed again, and I felt as if the entire city had also become heavier. It was then that the sound of a song on the radio, praying for peace, jumped into my ears. I felt the city being reborn, which lifted my spirit at the end of the visit […] So this is a city full of hope, not only full of sad memories. That is what the park transmits to me". This excerpt sums up some of the aspects pervading the SA's experience of the HPMP seen so far. Once more we can see an experience shaped by a constant duality between opposites, between the sad memories of the past and the hope for the future epitomised by the voices of children singing. In this case, we can observe how this experience is manifested through different sensory feelings and embodied metaphors afforded in turn by the very design and materiality of the site. Thus, the sad memories of the A-bomb victims are associated with "heavy feelings" experienced while being "underground", visiting the "buried history" of the city. Conversely, the SA's spirit is "lifted" as soon as he reaches the ground level of the park, where he feels the city being reborn, a sensory feeling enhanced by recorded music playing nearby.
Here we can see an experience similar to the inside/outside transition sensed at the beginning of the visit, although this time in the opposite direction, going from the inside of the Memorial Hall to a re-encounter with the city.
--- Discussion: Materiality, Movement, and Meaning
Starting from the possibilities and constraints afforded by memorials' material and symbolic dimensions, the focus of this single-case study has been to analyse how the HPMP is experienced and, more specifically, how different elements of this memorial afford the emergence of affective atmospheres and meaning-making processes as the participant moves along and engages with the site. Such a goal could not have been attained exclusively at the level of discourse, through a language-based, mono-modal approach. As Drozdzewski & Birdsall note, mobility is an enabling methodology in that it makes something apprehensible about memorials that is only possible through movement. In that regard, the use of the subcam, in combination with the post-walk playback interview, has allowed the SA to access his multi-sensory and situated experience while jointly reflecting with the FA on the footage recorded during his visit. As seen in previous studies, this method endows participants with agency when it comes to relating their personal associations, affects and meanings to specific elements of their visit as they watch the visual material recorded at the site. It is worth noting that, unlike methodologies used in other studies, such as go-along interviews, the post-visit interview implies investigating something that took place in the past -even if the immediate past- thus giving the participant a new opportunity to reinterpret his experience. In this case, the post-visit interview enabled both the researcher and the participant to dig into the meaning-making processes associated with different aspects of the memorial as they were shown in the video.
As seen, this meaning-making process is to a large degree structured around a string of antinomies which, paraphrasing Moscovici, served as conceptual coat hangers providing socially generated ways of experiencing and interpreting the site. Importantly, these antinomies emerged during the interview from the viewing of the subcam video recorded during the visit, and are therefore linked both to the physical experience of the memorial and to its subsequent visualisation by the SA. More precisely, these antinomies arose when the video showed images that the SA associated with experiences of transition between border zones, or experiences of intersection between elements to which he attributed opposing meanings. This occurred at the four points of the visit on which we have focused our analysis.
1. The memorial entrance: The presence of the river generated a feeling of crossing a border between two opposed atmospheres, making the initial stretch of the visit be perceived as a liminal zone between the two. This appears to have called for a meaning-making process whereby the notion of a sacred place, applied to the memorial, was set against the idea of the mundane, everyday life of the city. Being by the river also generated a dichotomy between war, as a man-made artificial thing associated with the memorial, and nature, implicitly associated with peace.
2. The Cenotaph: The presence of children contributed to creating a happy atmosphere, leading the SA to contrast the ideas of life, hope and future with the deaths of the past associated with the Cenotaph. The intersection between these dichotomous elements is stressed and objectified through a picture the SA took in front of the monument. This dichotomy is later recreated through the axial view of the Cenotaph in the foreground and the A-Dome behind, and expressed through the metaphor of the place as a time machine, conveying the sense of intersection between past and present.
3.
The A-Dome: The dichotomous experience of time shows up again when, once close to the A-Dome, the SA highlights the stark contrast between the old ruins and the modern background of the city. This dichotomy between old and new is complemented by the notion of sacredness attributed to the A-Dome, pointing again to an implicit border between the memorial and its mundane surroundings. Finally, the closer look at the A-Dome contributed to the intersection of two contrasting views on the building, seen as a symbol of peace and as material evidence of war.
4. The Memorial Hall: The dichotomies of past/future and death/life emerge again with the SA's physical transition from the inside of the Memorial Hall -situated beneath the ground- to the park's ground level, where the sound of recorded music helped generate an uplifting feeling after the sense of heaviness experienced down in the Hall. Similar to the experience at the memorial entrance, there is a sense of crossing a border separating two worlds endowed with completely different meanings, although this time the transition goes from inside to outside the memorial.
As we can see, these dichotomies emerge as a result of the SA's meaning-making effort in his encounters with different situations during his visit, particularly those leading him to perceive the site in terms of border zones or points of intersection. However, following De Paola et al., we should be cautious in regarding all the antinomies found in our analysis as themata. As Liu points out, we should differentiate between themata proper -those "historically embedded presuppositions, culturally shared antinomies, and the deeper logic of social thought"- and their "pragmatic manifestations, or partial reconstructions of the themata in different forms and in the different spheres of everyday life".
As elementary dichotomies underlying the social debate around the A-bomb, we can tentatively infer two themata from the string of antinomies found in the post-visit interview. These two themata correspond to the two dimensions -temporal and spatial- involved in the memory politics around the A-bomb referred to above. The first thema revolves around the problematisation of the past and the future, and more particularly around the tension between a future-oriented recovery and the weight of a painful past in post-war Japan. Cutting through the SA's experience of the memorial -in front of the Cenotaph, the A-Dome, and the Memorial Hall- we find a sense of temporal duality which leads him to oppose the notions of death and sadness to those of life and hope, which he associates with the future. This temporal duality translates into a spatial dichotomy underlying Hiroshima's urban renewal, giving rise to a second thema concerning the tension between the old and the new. This dichotomy -expressed at the beginning of the visit and in front of the A-Dome- is especially linked to the experience of border zones separating two different areas, which the SA associates with a sacred and a mundane world, respectively. Combined, these two themata help us to reflect on the problematisation of time and space in the city of Hiroshima. However, one might ask to what degree the existence of different urban topographies linked to dissonant temporalities, as Yoneyama puts it, helps us to further reflect on the A-bomb or, on the contrary, contributes to limiting the memories and the debate through their spatial and temporal containment within the memorial's boundaries. As Yoneyama warns us, "the containment of memories of destruction obscures other contemporary realities: namely, that the nuclear horror may in fact be present everywhere outside this museumized site, that the world may be thoroughly contaminated by nuclear weapons".
--- Conclusions: Beyond the Hiroshima Peace Memorial Park
Through this article we have aimed to address Tange's question about what crosses people's minds when visiting the HPMP. To be sure, a defining aspect of the HPMP is its location, close to the explosion's hypocentre: when a memorial is constructed at the site of the traumatic events, it is already affectively charged with the site's history. Despite addressing this question through a single-case study involving three non-Japanese researchers, we can conclude by highlighting the enormous power that a place like the HPMP exerts on those who visit it, regardless of their background. As the SA commented at one point during the visit, when sharing his feeling that the memorial was speaking to him through the voices of the victims, "even though I am Chinese -and, you know, Japanese troops attacked China- I feel that these people [the victims] could also be my grandma or grandpa telling me something". As a paradigmatic example of what Bull & Hansen call a cosmopolitan mode of remembering, the HPMP is inextricably linked to a transnational memory discourse anchored in a set of iconic artefacts -such as the A-Dome- for remembering the past vis-à-vis the potential futures we would like to build or avoid as human beings. The global dissemination of images featuring the mushroom cloud, devastated cities, the nuclear reactor, or the radiation symbol has forever changed the way humanity imagines its future. As Jasanoff & Kim examine through the notion of sociotechnical imaginaries, the imagination of possible futures -whether desirable horizons worth attaining or grim scenarios to be avoided- is increasingly shaped by scientific and technological advances, such as the discovery of nuclear energy. Yet imagination about the future is also largely dependent on how we remember the past, and vice versa.
Thus, as Yoneyama rightly notes, the preservation of the A-Dome ruins as an eternal reminder, forever inscribed in the memory of humanity, "can be trusted only if one believes that the present state of things will remain in equilibrium". In that respect, she asks, "Are visitors to the site prompted to wonder about the possibility of future similar destructions?". While these ruins stand as iconic historical evidence of the atomic destruction, Yoneyama warns us that the musealisation and sacralisation of the A-Dome's ruins also contribute to bringing this building "into an ahistorical and almost naturalized past […] derailed from the secular course of history". At the same time, she goes on to say, "the Dome's stark contrast to its background scenery, a magnificently recovered urban space, assures people of today's peaceful, prosperous, and clean world". Agreeing with the author, we think that the remembrance of the horror caused by the A-bomb should be kept present beyond the memorial's spatial and temporal boundaries, in the knowledge that the danger of nuclear destruction still looms all over the world.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
--- Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
---
Hiroshima Peace Memorial Park is widely known as a universal symbol of peace, but there have not been studies of how people actually experience and interpret it. This article presents a detailed case study of a visit to the memorial by using an innovative methodology based on the use of subjective cameras (subcams). Results show that despite the monolithic idea of peace that the memorial officially represents, it is experienced and interpreted in terms of a constant tension which exposes conflicts in post-war Japan memory politics. The dichotomies of war/peace, death/life, past/future, and old/new emerge as part of the participant's encounter with different situations during his visit. This is particularly clear where he perceives border zones and points of intersection. The article concludes by interpreting these dichotomies through the notion of themata, as elementary dichotomies that underlie a social debate around a specific topic. Specifically, two themata are proposed: one revolving around the temporal problematisation of the past and the future in the memory politics of the A-Bomb, and the other revolving around the spatial dichotomy between the old and the new underlying Hiroshima's urban renewal.
Highlights
• A lack of donated organs means 1,000 people in the United Kingdom die each year or are too sick to receive a transplant. Others are forced to lead lives severely compromised by their organ failure and the uncertainty of organ availability.
• Improving the rate of bereaved families' consent could have a significant impact on the lives of many people.
• Research carried out in the UK elicited bereaved families' experiences of organ and tissue donation, and perceived influences on their decision making.
• Temporally interwoven experiences of Past, Present and Future appeared to influence families' decisions to donate the organs of their deceased relative for transplantation.
• The influence of temporality on donation decision-making is worthy of consideration in the planning of future education, policy, practice, and research for improved rates of family consent to donation.
--- Introduction
Currently there are over 7,000 people in the UK on the active transplant list; however, due to a lack of donated organs, 1,000 people die each year or are too sick to receive a transplant [1]. Others will be forced to lead lives severely compromised by their organ failure and the uncertainty of organ availability [2]. In 2008, the UK Government Organ Donation Taskforce [3] recommended reorganisation of donation services, targeted at increasing organ donation by 50% in five years. Despite achievement of this target, further strategic work is essential to achieving improved rates of family consent [1]. Of continuing concern is the proportion of families who refuse to allow their relative's organs to be donated or who overrule their relative's expressed wish to donate [1]. Further improving the rate of family consent could have a significant impact on the lives of many people and yield cost savings to the National Health Service versus alternative medical treatments.
The present rate of family consent to donation in the UK suggests we are missing opportunities to support families in making a potentially life-enhancing decision. This paper reports the findings of exploratory research carried out in the UK to elicit bereaved families' experiences of organ and tissue donation and their perceptions of how these experiences influenced donation decision-making. The study sought to build on previous evidence accrued by the research team: the influences on donation decision-making [4]; the genesis of beliefs people bring to the donation discussion [5]; how people conceptualise the act of donation, e.g. as a 'gift of life' or a 'sacrifice' [6]; the decision-making process and bereavement issues [4]; and any meaning-making of organ donation [6,7]. To set our UK study in the prevailing Western worldview, we undertook an integrative literature review [8]. The review involved thematic network analysis [9], comprising the development of three global themes: The Past, The Present and The Future [8]. These themes provided a concise temporal framework for the analysis and synthesis of new study findings. For the duration of the study, the legislative structure for organ donation in all four countries of the UK was that of a voluntary 'opt-in' system of explicit consent to donation. Family involvement is important to the donation process and is practised for moral, ethical, legal and procedural reasons. However, the role of the family differs according to whether the donation intentions of the deceased are known [10]. Reported outcomes of the donation discussion depict a family decision to: agree or decline consent to donation in situations where there is no indication of the patient's wishes; or support or overrule the expressed wishes of the deceased.
--- Study design
A qualitative, exploratory design was chosen to generate rich, informative data that would lend itself to theoretical propositions as to why bereaved families agree to organ donation from a deceased relative. All permissions for this study were granted. NHS approval was given by the UK Health Department's National Research Ethics Service, West Midlands-Black Country Committee, Reference 11/WM/0313.
--- Objectives
In the case of bereaved families who had donation discussed with them, the specific objectives were to determine:
1. Families' perceptions of how their experiences of organ and tissue donation influenced donation decision-making;
2. Whether families felt their information needs about organ donation and bereavement were met and, if not, what was missing;
3. Families' views regarding any public or private recognition of donors and their families.
--- Participant identification and recruitment
Ten NHS Trusts, representative of five regional organ donation services in the UK, agreed to take part in the study. Meetings with regional and team managers of NHS Blood and Transplant and Specialist Nurses-Organ Donation (SN-ODs) led to the identification of suitable study sites. Geographical spread was deemed to be important due to potential differences in local hospital practices. SN-ODs sent a total of 99 recruitment packs to eligible participants on behalf of the research team. Recruitment was carried out in a serial manner, region by region. Purposive sampling gave preference to the most recently bereaved families. Our eligibility criterion of three to 12 months bereaved at the time of recruitment was consistent with previous work by Sque [11]. Of the 108 families who declined organ donation at the 10 participating NHS Trusts, 14 were asked if they agreed to be contacted about the research, and six families agreed.
One family member did not receive information due to a change of address and the remaining five family members did not respond to our invitation to join the study. Further efforts to access family members who declined donation included ethical approval to extend the number of recruitment sites from 10 to 12. Two NHS Trusts proposed the identification of eligible participants via the SN-OD in association with the Trust bereavement service. This resulted in the implementation of a retrospective recruitment strategy involving the dissemination of 10 recruitment packs to eligible participants. However, we received no family responses to our invitation.
--- Data collection
Semi-structured, audio-recorded interviews offered participants the opportunity to give an account of their experiences. Once the research team received confirmation from a family member that they were willing to join the study, they were contacted by their preferred mode and a convenient date and time for the interview was arranged. Twenty-six interviews were carried out face-to-face and four by telephone. One family member provided a written response to the topics covered in the interview guide, having expressed this preference. Most interviews took place in the home environment. The interviews mostly lasted between one and three hours. On completion of the interview, the researcher arranged a convenient time to telephone the participant to check on any issues the interview may have raised and to answer any questions. Participants were offered written information on avenues for support if they thought it helpful and/or were directed to appropriate professionals to discuss any issues of concern. All participants were sent a personal 'Thank You' letter and offered an executive summary of the investigation.
--- Data analysis
Audio-recordings were transcribed verbatim and checked for accuracy while listening to the audio-recording.
Listening to, and reading, the transcripts facilitated recognition of important ideas and patterns, such as sequencing or repetition of experiences. Transcripts were imported into a qualitative software package for security and to facilitate analyses. Data collection and analysis were carried out iteratively. This entailed reflection on data already collected and the application of emergent ideas to re-focus the interview guide [13]. Qualitative content analysis, involving a directed approach to the interpretation of textual data [14], was the selected method of analysis. This involved a systematic process of applying predetermined codes to the text and categorising the data into themes. The coding framework was based on themes developed from an integrative literature review [8]. Cross-reference was made to the study objectives to ensure the coding framework would support the identification of relevant text. Transcripts were coded as individual units, followed by inter-case analysis. An inductive approach alongside deductive analyses facilitated new insights. Data that did not fit with an existing code were labelled separately and further analysed. This resulted in two new organising themes: Forms of recognition and Perceived outcomes.
--- Findings
Global and organising themes
--- Global theme - The Past
The will of the deceased person
Most participants suggested that they were aware of their relative's wishes regarding donation. Prior knowledge of their relative's desire to donate was mostly confirmed by possession of a donor card and/or evidence of having joined the NHS Organ Donor Register. Deciding to donate when applying for a driving licence or when making a lawful will were other ways in which participants expressed understanding of their relative's wishes: 'I actually think it was the fact that the card existed was the thing that actually clinched it, not any other persuasive arguments.'
Decision making for family members was also supported by a belief that they were acting in accordance with their relative's personality; attributes such as helpful, kind, giving, social, compassionate, and caring. Take, for example, participant 018, who said her daughter '... cared about people. She cared about animals and different things, so why shouldn't she care about ... an opportunity to help somebody else'; and participant 003, who said her partner would have helped anyone in life, so questioned '... why not in death?' Motivation to fulfil the wishes of the deceased relative was a key influence on family members' decision to donate. Many participants acknowledged their deceased relative as the decision-maker and portrayed themselves as the person responsible for fulfilling their wishes. There was also a sense of fait accompli in participant descriptions, attributed to knowing or believing that donation was their relative's choice: 'It's very straightforward. She wished it and we did it. As simple as that.' Family members also demonstrated respect for the wishes of their deceased relative when confronted with their own personal reservations about donation. A mother expressed mixed feelings when approached about donation and initially said no: 'I was shocked when she […]'
--- Predispositions of family members
Participants disclosed a range of pre-conceived attitudes and beliefs about donation. The nature of experience ranged from immediate family situations, through less personal circumstances of knowing a transplant recipient, to professional work. The following extract illustrates the potential for decision making to be influenced by previous experience: 'I didn't really want the eyes to go, but being an ophthalmic nurse, I thought of all the things for me to say no to, that someone might benefit from corneal transplants.'
Most participants gave an indication of their own expressed intention to donate and, in some cases, referred to the affirmative decision of other relatives. While some participants favoured an 'opt-out' scheme, an alternative opinion was that its introduction would '... destroy the transplant world simply because nine out of ten will opt out immediately'.
--- Global Theme - The Present
--- Intra/Interpersonal determinants
For all participants, donation decision making took place in the context of a sudden and unexpected critical illness or event. Participants described how the initial stages of the illness or event unfolded and provided detail of the circumstances surrounding their relative's death. A key experience for many was the sudden onset and absence of any warning signs: 'I had about a minute ... where I sensed something was wrong ... But that's all. There was no warning'. Other participants described their relative's sudden death as 'like a shutter being brought down' or 'like a candle being blown out'. Protecting the deceased person's body was an important issue for many participants. Family members and friends were identified as a major source of support for families during their experience. Decisions about donation were most often made as a family: 'I know myself, my sister and my dad were completely on board with it'; 'we were always a united front'; 'nobody was against the decision'.
--- Comprehending the situation
Most participants reported satisfaction with the information they received about their relative's condition, this being clear, direct, honest and without false hope. Participant descriptions portrayed insights into the criticality of the situation and understanding of the nature of their relative's illness/injury. Use of the terms 'brain dead' or 'brain stem dead' suggested understanding that death had occurred.
A number of participants, however, described their difficulty in equating death with the appearance of their relative: '… They told me the machine was breathing for her, but the machine was breathing for her yesterday, and she's still breathing, and that stupid bit of hope and you think someone made a mistake and she'll be okay and she'll wake up.' Families who agreed to DCD (donation after circulatory death) indicated understanding about the process of treatment withdrawal and appeared satisfied with the information they received about this. Descriptive accounts suggested that treatment withdrawal took place in the environment where their relative was receiving care. An exception to this was patient and family transfer to an anaesthetic room, which was remarked upon as being 'So peaceful, so quiet'. It was apparent that some participants had an awareness of time limits: between treatment withdrawal and death for organ donation to proceed ('it did depend on how long it took the heart to stop beating'), and for saying goodbye to their relative immediately after death ('the moment she died she would be whisked away to theatre'). The significance of informing families about possible timescales was highlighted by a participant who experienced non-proceeding DCD: 'Unfortunately for us [M] didn't die; that sounds terrible again, but she didn't die within the two hours, so they couldn't go ahead with the kidney donation. But because we knew we had that timescale to work within, we knew after two hours that it wasn't going to happen. So yeah from that point of view it was good to know about the timescale.'
--- The donation discussion
Participants' accounts revealed considerable variations in practice regarding the timing of the approach about donation. A participant who was informed about a decision to withdraw life-sustaining treatment together with a request for donation said: 'I thought it was a perfectly sensible thing to do ... I saw no problem with it at all. I think the two things should be integral'.
Alternatively, a participant approached about DCD said: 'I do remember thinking that this was happening all too quickly ... and I think that was part of the grieving process in that: "wait a minute. Hang on a second. She's not dead and we're whipping bits out of her."' Participants' descriptions indicated variable practices regarding the request-approach. A formal approach involving a meeting with the legal next-of-kin and significant family members was the most common method. The professional identity and number of staff present at the time of the request suggested a collaborative approach on seven occasions, i.e. the SN-OD and medical consultant working together. The discussion usually took place in 'a room', although the setting was not always deemed fit for purpose: 'I think we were in an office … which was very cramped and not conducive to that kind of atmosphere.' In contrast, five family members raised the issue of donation themselves. One family suggested the doctor's response was: 'Oh I'm so glad you've brought that up ... It saves the difficult conversation'. Unlike those family members who pre-empted the question, it was apparent that some participants were reliant on the staff to enquire; for example: '... until they asked us, it never occurred to me'. Participants described a range of emotional reactions to the approach, including anticipation: 'I was waiting for this'; shock: 'I was totally shocked. I never expected it because I was quite convinced you see he was going to wake up'; and surprise: 'I just thought it was so quick. One minute she's in hospital the next thing they're asking me for organ transplant'. A perceived lack of prior knowledge and understanding contributed to the reactions of one family: 'If I'd known more about it [donation] then it would have been less of a shock'.
Participants most often recalled being approached by a member of the healthcare team caring for their relative, although participants could not always specify the role characteristics of the staff involved. Alternatively, the question was posed by a member of staff affiliated with organ donation. This latter person was rarely referred to as a SN-OD. On one occasion, the family member thought a counsellor was present, only to realise at a subsequent meeting that this was; 'the donor nurse [SN-OD]'. Personal attributes of the requestor, such as being calm, gentle, neutral, very kind, very nice and polite, were positively remarked upon, and satisfaction with the sensitivity of the approach was expressed. Some participants were sensitive to the feelings of staff involved in the approach to bereaved families about donation: 'I think you've got to be special people to do that sort of thing … I mean you'd have a script I suppose in your head, but still it must be … difficult.' --- Patient and family care Most participants appeared to have a high level of confidence in medical and nursing staff expertise, and were mostly full of praise about the specialist care given to their relative and to themselves. Family satisfaction was reflected in expressions such as; 'I/we couldn' …'. There did not appear to be any uniform standard of provision for relatives of critically ill patients, including facilities for retreat, rest, sleep, hygiene, and refreshments. Practices also varied from hospital to hospital in relation to visitation policies, restrictions on the number of people at the bedside, and car parking concessions. Accommodation for some families was limited to a waiting room that they shared with other relatives, whereas others had access to a private room during the day and/or overnight. For one participant, overnight accommodation involved payment for a room that he was required to share with a stranger. Waiting areas, seating and refreshments were identified as areas for improvement.
However, this did not seem to detract from participants' overall satisfaction with the care they and their relative received. Despite restricted visiting in some hospitals, participants indicated opportunity to spend time with their relative and were keen to point out how staff could be accommodating. One family suggested; 'somewhere private to reflect and grieve' was most helpful during their hospital experience. In contrast, a participant described their experience in the communal waiting room less favourably. Treating the deceased donor with respect and dignity was an important care issue for some families. Knowledge of SN-OD presence during organ retrieval appeared to provide reassurance: 'She said; 'I'll be with him every step of the way, when he goes down for surgery I'm there; I see the surgery right through to the end.' And that was a comfort to know that she was going to be there.' --- Global Theme -The Future --- Hopes and expectations Some family members perceived consent to organ donation as giving meaning to the life and death of their relative. Through donation, participants felt that their relative's death had not been in vain and conversely, their life had not been wasted: '... something positive was going to come out of such a tragic event.' Some families pragmatically accepted the outcome of non-proceeding DCD, whereas others expressed disappointment and deflation: 'It was sort of that feeling that you'd lost the ability to get something from ... It all just seemed completely futile ... No positivity from it at all.' --- Forms of recognition Participants disclosed a range of views when questioned about the acknowledgment of donation. Some saw public recognition as a way of promoting donation and for that reason were supportive of it. Participants who were in favour of public recognition spoke of it being a nice or lovely idea. One participant was keen to point out; 'I haven't done anything. It's not me.
It's Mum that's done it, so the only personal gratification ... A nice honour in Mum's memory really, isn't it?' Another participant spoke of recognition in the context of donation as a personal sacrifice: 'They've given their life up haven't they or you feel that your loved one has given their life up? They've given something back … They should be recognised for that.' Participants identified tributes to their deceased relative outside the context of donation, such as a personalised key ring for family and friends, a commemorative bench and the planting of trees, the development of a webpage, a book of remembrance and a memorial trophy. Aligned with the decision to donate, participants identified forms of public and private recognition, including a memorial book in the hospital that would be open to the public, and a cathedral service for donor families. Many participants discussed and/or shared letters about the outcome of their relative's donation decision, and for some, a letter or card from recipients suggested recognition. --- Perceived outcomes Several families said the decision to donate had helped them in their bereavement, and indicated why. For example, there was evidence of personal gain through: the knowledge that donation had benefitted people; a belief that the deceased person 'lives on'; an opportunity to turn a profoundly negative situation into something positive; personal acceptance of death and bereavement; and a feeling that death was not in vain. No participant regretted the donation decision they made at the time of their relative's death. This was affirmed in statements such as 'it was the right thing to do' or 'the right decision'. There was evidence to suggest that the donation intentions of family members and others who were known to the deceased person had changed because of their experience. One participant explained how he went home that night and at 6am; '...
registered online, including my eyes'; a parent said so many of her daughter's friends had joined the organ donor register; 'oh I've been on there and I've ticked the box'; and a father suggested; 'it's opened everybody's eyes now to the possibility'. --- Discussion This study sought to elicit bereaved families' experiences of organ and tissue donation and their perceptions of how these experiences influenced their donation decision-making. We highlight important findings associated with past, present, and future dimensions of the families' temporal landscape. --- Global Theme -The Past Most families suggested that they were aware of their relative's wishes regarding donation; a known predictor of family consent [15]. Determination to fulfil the wishes of the deceased was apparent when confronted with situations that threatened to overrule the prospect of donation, such as interference by family members, the coroner, or the police. Participants disclosed a range of pre-conceived attitudes and beliefs that had the potential to impact negatively on the donation decision. It was also notable that some families disclosed a lack of knowledge about donation. The reported issues indicated a need for increased public knowledge about the donation process, and for awareness-raising campaigns that overcome the vagueness [16] surrounding donation: which organs and tissues may be offered for donation, the intended outcomes of donation, and the modes of death which permit it to happen. --- Global Theme -The Present In this study, the moment in time when families experienced their relative's critical illness was characterised by fluctuations of hope and despair, in which the option of organ and tissue donation appeared to assist families in their grief. Families appeared intent on turning a profoundly negative situation into something positive, and in doing so, embraced hope at the end of life [12]. Decisions about donation were most often made as a family.
The receipt of clear, direct, and honest information appeared to prepare families for the catastrophic nature of the illness/injury and the reality of impending death. This was an important finding given the potential for non-donation linked to a lack of knowledge and/or understanding about the patient's illness and prognosis [17,18] and false hopes about their recovery [18]. Most families, as also found by Morgan et al. [7], reported satisfaction with the quality of information they received about their relative's critical illness/injury and prognosis. Families' explanations and understanding of the criteria used to confirm death were variable in terms of detail and accuracy; a factor that has been linked to families who decline organ donation [19]. This was most notable in cases of DBD. Families expressed satisfaction with the sensitivity of the approach and the requestor; two important variables that are known to influence the decision to donate [6,20,21]. The facilities within specialist areas were not always deemed to be conducive to the sensitivity of the donation discussion, and a lack of privacy was an issue for some grieving families. The use of a 'questionnaire' or 'checklist' which formed part of the consent process was also distressing for some families, particularly in relation to the itemisation of body parts. In one case, this resulted in the donation of fewer organs and tissues than intended at the outset. Our findings support the view that the donation discussion may be enhanced by improving aspects of family care and provision [21]. Consistent with previous findings [6,22], protecting the deceased person's body was an important issue for potential donor families, including identified perceptions of violation, mutilation, and prolonged suffering [6,23,24]. These concerns were seemingly dealt with by families in our study, as all agreed to donation.
Rationalisation has been identified as a coping mechanism that is helpful to families in receipt of the diagnosis of brain stem death [19]. Secondary analyses of the study findings could help to develop this theory further and ascertain its relevance to circumstances of DCD. Treating the deceased donor with respect and dignity and SN-OD presence during organ retrieval were important care issues that appeared to allay families' anxieties. Personal beliefs, fears and concerns did, however, lead to the non-donation of specific organs and tissues, most notably the eyes, and in two cases, limited donation to transplantation only. Some families explained the non-donation of eyes as being for personal reasons associated with their significance. There was, however, an apparent lack of understanding about removal of the whole eye or the cornea for transplantation. This finding has implications for enhanced information that conveys the precise nature of the eye donation operation. Few families who agreed to DCD gave indication of being present at the time of treatment withdrawal, but most appeared to understand what this entailed. Conversely, knowledge of possible timescales and their implications was variable. Adding to the complexity of DCD is the knowledge that unless cessation of heartbeat occurs by a pre-determined point after treatment withdrawal, donation will not be possible [25]. The study findings suggest the importance of reinforcing this information for families and assessing their need for support, especially in situations of stand-down or when death does not occur within an appropriate timescale for donation to proceed. Families were approached about donation at varying points during their relatives' illness. Consistent with the findings of Siminoff et al. [26], shock or surprise was associated with increased deliberation and the potential to decline donation.
Families most often recalled being approached by a member of the healthcare team caring for their relative or by a member of staff affiliated with organ donation. A collaborative request was less evident. In most cases, the approach involved a formal meeting with the legal next-of-kin. The personal distress associated with a critical, life-threatening event meant that some families overlooked the possibility of donation and were appreciative of staff who brought this to their attention. These findings support proposed action to increase consent to donation through a standard of best practice for the family approach [27] and potential strategies that could improve the deceased organ donation process for families [10]. Timely identification and referral of every potential donor to the SN-OD may also realise an increase in deceased donation through improved collaboration [28,29]. Our findings suggested an association between positive family care experiences and consent to donation. As recommended by the National Institute for Health and Clinical Excellence [30], further research is needed to confirm this assumption. There were many examples of personalised patient and family care that contained the quality hallmarks of compassion, respect, dignity, and skilled communication. Effective communication during the donation process appeared essential to maintaining families' commitment to donation. The concept of 'waiting' was an identified feature of families' experiences along the continuum of care; a contextual factor attributed to non-donation [31]. The length of time it took to donate was distressing for some families, and the need for regular updates from the SN-OD was an identified area for improvement. There did not appear to be any uniform standard of provision for families of critically ill patients, and families perceived a difference in the standard of care delivered in specialist and generalist areas. Visitation policies also varied from hospital to hospital.
Components of care and communication in the post-donation period suggested inconsistent practice. Quality follow-up can contribute to improved understanding, recognition, and reconciliation for donor families [32]. --- Global Theme -The Future Family consent to donation appeared to give meaning to the life and death of the deceased person, and for some families, was associated with a belief that their relative would 'live on' through the recipient. Generally, more participants were against any form of public recognition than in favour of it. Donation was viewed as a selfless act, for which families did not expect acknowledgement. Some families saw public recognition as a way of promoting donation or as a tribute to the deceased and for these reasons were supportive of it. The experience of donation positively influenced the donation intentions of family members and others who were known to the deceased person. Families provided evidence of personal gain through the act of donation. Consistent with previous research [33][34][35], this included perceptions of a positive impact on their grief and bereavement. Recommendations for future research: -A prospective, ethnographic, observational study to further our understanding of the minutiae of the dynamic interaction at the time of the approach and discussion about organ donation. The safe storage of personal and case-related data of families who declined donation would enable the seemingly more favourable method of retrospective recruitment to prevail in future research. An alternative route of access to this coveted population could also be considered, for example, through the study of suddenly bereaved families' experiences of end of life care. The Temporal Framework of Past, Present and Future, we believe, provided a unique lens to the interpretation of bereaved families' experiences of donation.
The findings make an important contribution to the body of knowledge available in the UK at a time of static rates of family consent to donation [38]. --- Conclusion This exploratory research has provided a state-of-the-art temporal understanding of bereaved families' experiences of organ and tissue donation and the perceived influences on their donation decision-making. Improving family consent to donation is essential to ensure that as many people as possible receive the transplant they need. The influence of temporality on donation decision-making is worthy of consideration in the planning of future education, policy, practice, and research. --- Conflicts of interest None. --- -Exploration of staff and family experiences of the DCD pathway to further inform potential donor and family care, and the impact of proceeding and non-proceeding DCD on family grief and bereavement. -Causal research to test for an association between a positive family care experience and consent to donation. --- Critique of the study We have reported the experiences and outcomes for a sample of 31 donor families who gave consent to donation. The design feature of data saturation as an indicator of sampling adequacy was not applied in this study for pragmatic reasons, including pre-determined funding and timescales for completion of the work. The research should therefore be viewed within the constraints of the purposive study sample and size. Participation was voluntary and the methodological constraints arising from self-selection are acknowledged. An acceptance rate of 32% is consistent with bereavement research and this type of participant [11]. Our eligibility criterion of three to 12 months bereaved at the time of recruitment resulted in a mean length of time since the donation event of 7 months. The potential for recall bias is therefore a further limitation of this retrospective study.
An ethical duty of care is paramount in bereavement research, and can reconcile tensions in the study design. Based on available data for recruitment to bereavement research, a sample of 108 families who declined donation during the study period should have supported the recruitment of 30 families, as planned. Challenges associated with the implementation of a prospective recruitment strategy were keenly observed, resulting in an inadequate study sample. We therefore acknowledge that our conclusions may have been different had the study included a comparison group of declining families. Achieving national targets for donation [1] hinges on an understanding of what is driving family refusal, as this remains a key area of organ loss. The evidence base can be strengthened through academic and clinical collaboration [36,37]. However, for this to happen, support for SN-OD involvement in research activity needs to be balanced with service and clinical demands, together with a repertoire of research knowledge and skills to a level that promotes commitment and facilitates engagement.
Purpose: To elicit bereaved families' experiences of organ and tissue donation. A specific objective was to determine families' perceptions of how their experiences influenced donation decision-making. Methods: Retrospective, qualitative interviews were undertaken with 43 participants from 31 donor families to generate rich, informative data. Participant recruitment was via 10 National Health Service Trusts, representative of five regional organ donation services in the UK. Twelve families agreed to donation after brain death (DBD), 18 agreed to donation after circulatory death (DCD), and for one family the donation type was unknown. Participants' responses were contextualised using a temporal framework of 'The Past', which represented families' prior knowledge, experience, attitudes, beliefs, and intentions toward organ donation; 'The Present', which incorporated the moment in time when families experienced the potential for donation; and 'The Future', which corresponded to expectations and outcomes arising from the donation decision. Results: Temporally interwoven experiences appeared to influence families' decisions to donate the organs of their deceased relative for transplantation. The influence of temporality on donation decision-making is worthy of consideration in the planning of future education, policy, practice, and research for improved rates of family consent to donation.
Introduction Across societies, individuals and communities face challenges in terms of maintaining cooperation, deterring free-riding on public goods and ensuring adherence to social norms [1][2][3]. Theoretical models and experiments have shown that punishment via the selective imposition of costs on non-cooperators and norm violators can support the evolution of human cooperation [4][5][6][7][8]. In experimental settings, individuals punish offenders even at a personal cost, though there is substantial cross-cultural variation in punishment norms [2,9,10]. That said, multiple complementary mechanisms have been proposed to explain the evolution of cooperation, including reputation-based indirect reciprocity [11,12] and partner choice [13][14][15]. Experiments have provided evidence for these mechanisms in action, showing that gossip and ostracism can promote cooperation [16][17][18][19][20], perhaps more efficiently than punishment. This review aims to contribute to understanding the unique antecedents and consequences of the various punishment and reputation-based tactics that humans use to intervene against non-cooperators and norm transgressors. Based on prior work [5,25,26], we define punishment as a response to an offence via inflicting some costs on the offender. While punishment might be aimed at changing an offender's behaviour, we do not consider deterrence as a necessary component of its definition. For example, punishers can aim at reducing disadvantageous inequality or creating advantageous inequality without deterrence [30,31], and they can reap reputational benefits independent of any recalibration of offenders' behaviour. Moreover, we use an inclusive definition of punishment that considers a host of tactics used to inflict costs on offenders, some of which require punishers to pay significant short-term costs, while others are less costly.
Based on the costs as well as the benefits of different tactics, we distinguish between two broad categories of punishment: direct punishment, which involves physical and verbal confrontation, and indirect punishment, which involves gossip and ostracism. Punishing via direct confrontation has high costs (in terms of energetic expenditure, an increased risk of retaliation, and negative reputational consequences) but may also produce substantial benefits. Directly confronting offenders is more effective at changing their behaviour in ways that fit punishers' interests [32,33]. In the context of status competition, direct confrontation may also bring some reputational benefits when there is value in building and maintaining a reputation of being a tough bargainer. By contrast, indirect punishment tactics have lower costs [37,38], because they allow punishers to remain anonymous and minimize the risk of retaliation. However, using indirect tactics of punishment is less effective at changing offenders' behaviour, partly because offenders are unable to identify which of their behaviours has evoked punishment. Nevertheless, gossip and ostracism can impose significant reputational and relational costs on offenders [37,39]. That is, offenders who are gossiped about tend to acquire a negative reputation, and thus are less likely to attract potential coalitional partners in future social interactions. Similarly, offenders who are ostracized suffer costs in terms of losing potentially valuable interaction opportunities. Although field studies in small- and large-scale societies point to the key role that indirect, reputation-based tactics like gossip and ostracism play in promoting cooperation or enforcing norms [40][41][42][43], the experimental literature has overwhelmingly focused on direct punishment via economic sanctioning.
This focus can limit the ecological validity of research findings, because it remains unclear which real-world behaviours are captured by standard operationalizations of punishment in laboratory experiments, and many frequent, consequential, but low-cost forms of intervention against offenders are often neglected. Here, we propose a framework that integrates a larger breadth of punishment and reputation-based tactics used to intervene against offences. We suggest that the typology of intervention tactics we use here has the benefit of bridging strands of research on direct confrontation, gossip and ostracism: behavioural phenomena that have often been studied separately. Considering the multiplicity of tactics that humans have available when deciding how to punish, along with the functions they serve and the mechanisms that motivate them, highlights directions for future research on intervention against offences, in the context of partner recalibration and partner choice, within interdependent relationships and social networks, and in daily life settings. --- Common and unique social functions of distinct punishment tactics Theoretical accounts of direct reciprocity, indirect reciprocity and reputation-based partner choice suggest that multiple mechanisms can effectively promote and help sustain human cooperation [27,44,45]. Mapping onto these accounts, empirical work has shown that people use a variety of tactics (direct confrontation, gossip and ostracism) in response to non-cooperation and norm violations in real-world settings [40,43,46]. For example, a study of responses to norm violations in a laboratory setting [47] found that around one-quarter of witnesses directly intervened against a confederate who engaged in theft. A field experiment by Balafoutas et al.
[50] found that a similar proportion of third-party observers directly punished littering in a public space, though this rate of direct punishment dropped substantially when observers could indirectly punish the transgressor by withholding help. Consistently, a recent longitudinal study in daily life [51] showed that people intervene against offences via various tactics, with gossip being the most frequent response, followed by direct confrontation and social avoidance. Together, these findings highlight the importance of studying the use of indirect reputation-based tactics alongside direct punishment tactics, to better understand how people intervene against offences in daily settings and identify which goals punishment achieves. Tactics of direct and indirect punishment are posited to serve similar broad functions: promoting cooperation, competing for resources and/or status, and reducing inequality. Seminal experiments have shown that punishment can be used to promote cooperation [6][7][8], though often at the expense of efficiency. More recent experiments have instead demonstrated that punishment is, in many cases, motivated by revenge, status concerns [55] or aversion to inequality [30,56]. Traditional views of gossip and ostracism have emphasized the dark side of these tactics, seeing them as means to indirectly aggress against peers [37,39], and to impose status costs via reputation manipulation. Recently, though, researchers have proposed broader conceptualizations of gossip that highlight its potential to strengthen social bonds and promote cooperative behaviours [38,[61][62][63]]. In a similar vein, and despite research traditionally focusing on the negative emotional and social consequences of ostracism [64], experiments show that opportunities to choose some partners and exclude others can effectively promote cooperation [17,23].
Although confrontation, gossip and ostracism can be used to achieve similar goals, each of these tactics has unique benefits and costs and may be additionally tailored to serve unique functions, which we articulate below.
royalsocietypublishing.org/journal/rstb Phil. Trans. R. Soc. B 376: 20200289
--- The unique benefits and costs of direct confrontation Among the repertoire of available responses to offences, direct confrontation seems better tailored to recalibrate offenders' current and future behaviour, in ways that benefit the punisher [32,33]. This is because physically or verbally confronting offenders is the most immediate and effective way to stop ongoing transgressions, and it allows the punished individuals to draw explicit links between instances of inappropriate behaviour and the elicited punishment. This is not the case when offences are met with gossip or ostracism, because these tactics do not convey information directly to the punished individuals about what they did wrong. By contrast, verbal confrontation not only imposes costs on the offenders, but can communicate valuable information to them [65,66]. For example, verbally confronting offenders can indicate which behaviours are perceived as offensive, how victimized parties are affected and how the offenders should change their behaviour to signal that they care about the punishers [67-69]. Finally, compared with other intervention tactics, confrontation might be better suited to achieve retributive goals. Retribution involves a desire to balance or repay harm in a way that is proportionate to offence severity [72]. Arguably, when using direct confrontation against offenders, punishers have more control over the immediate outcomes of their behaviour, and they can adjust their responses more easily to fit the severity of the offence [73].
By contrast, the outcomes of gossiping about an offender are often delayed and more uncertain, and the spread of information shared via gossip may be harder to control. In sum, direct punishment via physical and verbal confrontation seems particularly well tailored to achieve recalibration and retribution goals, as compared with indirect reputation-based tactics to intervene against offences. Notably, although directly confronting offenders can benefit punishers both in the short term, by putting an end to ongoing transgressions, and in the long term, by recalibrating offenders' future behaviour to fit the punishers' interests, it comes with substantial costs. Direct confrontation requires time, effort and energy; it bears the risk of counter-punishment and feuds [75,76], and it can result in incurring reputational costs. --- The unique benefits and costs of gossip and ostracism Compared with direct confrontation, reputation-based tactics of gossip and ostracism seem well suited to minimize the risk of retaliation when intervening against offences, by obscuring the punishers' identity. However, as mentioned earlier, gossip and ostracism seem less effective at changing offenders' behaviour, compared with direct confrontation. If gossip and ostracism are not primarily aimed at recalibration, what can they accomplish? First, gossip plays a key role in facilitating cheater detection and partner choice. Indeed, people share and use reputational information to selectively cooperate with partners who have positive reputations [77][78][79][80][81] and avoid partners who have negative reputations [17,81,82]. Further, although gossip itself may be less effective at changing targets' behaviours, the mere threat of being gossiped about by others motivates people to strive to build and maintain good reputations [16,[18][19][20]]. Second, gossip is ideal for communicating about norms of acceptable behaviour.
It allows people to probe and safely test the limits of conventions, norms and prescriptions, and facilitates the formation of strategic coalitions around moral values that fit personal interests and/or group interests [85]. Compared with other intervention tactics, gossip thus appears better suited to achieve general deterrence goals, and it can be used as a sanction against norm violations. General deterrence involves a desire to prevent future offences, not only from the same perpetrator, but also from third parties [72,86]. Gossip can help achieve general deterrence by allowing people to coordinate with third parties and recruit punishment from them [82,87], lowering the otherwise high costs of uncoordinated direct punishment. Finally, gossip may represent one way for individuals to take revenge by imposing strong symbolic costs on an offender, while reinstating their own image in the eyes of their community [54,74]. Compared with gossip, ostracism can be more costly, especially if used against valuable relationship partners, because it can result in missed interaction opportunities and severed social ties [51,68]. Nevertheless, when used against partners with a lower relationship value, ostracism can achieve multiple goals. Given that there are cognitive and time constraints on the number and closeness of one's social relationships [91,92], the avoidance of offenders allows people to direct attention to more valuable social relationships. Indeed, reviews of the ethnographic record suggest that ostracism or avoidance is a common tactic to deal with norm violations [40,41,43], which limits the risk of conflict escalation. Finally, ostracism may be the most cost-effective way to incapacitate repeat offenders.
--- Information-processing mechanisms underlying distinct punishment tactics
Considering the differential costs and benefits associated with direct confrontation, gossip and ostracism allows us to develop hypotheses about the putative information-processing mechanisms underlying direct punishment and indirect reputation-based tactics. Upon experiencing a norm violation, individuals first need to make decisions about whether to intervene or not and assess which tactics are available to them. If they decide to intervene, one possibility is that they then use whichever tactics they have available in an unconditional manner. An alternative possibility is that, upon deciding to intervene, people condition their choice of specific punishment tactics on various situational factors. If so, what are the decision rules that they use to determine how to intervene against offences, when both direct and indirect intervention tactics are available? Following previous research on punishment in daily life [51], we propose that individuals should consider and integrate two types of information when deciding on which intervention tactic to use: information about the benefits of recalibrating offenders' behaviour, and information about the costs of being targeted by retaliation. We expect that, when the benefits of changing offenders' behaviour are high, people will upregulate their use of direct punishment tactics; by contrast, when the costs of potential retaliation are high, people will upregulate their use of indirect, reputation-based tactics.
--- Factors that shift recalibration benefits
We first consider several factors that can shift the benefits of punishment in terms of changing offenders' behaviour. One key factor that determines the benefits of intervening against an offence is the extent to which it has been personally harmful. All else being equal, individuals have more to gain from deterring current and future offences that are harmful to themselves.
Indeed, multiple vignette studies have experimentally manipulated the self-relevance of offences and found that people respond differently to violations victimizing themselves compared with those victimizing third parties. Specifically, offences that are personally harmful are met with more direct, confrontational punishment, whereas offences that victimize third parties are preferentially met with indirect, less costly punishment tactics. Experience sampling studies on punishment in daily life settings [51,97] have found similar patterns, suggesting that self-relevant offences evoke stronger desires to punish offenders, with whichever means possible, whereas other-relevant offences are preferentially met with indirect punishment, via gossip or ostracism. Importantly, most research to date has compared how people punish offences that harm themselves with how they punish offences that harm strangers. However, in real-world ecologies, people interact and experience offences within diverse social relationships with kin, friends, allies, ingroup members and outgroup members. Considering the relationship context in which offences take place is key for improving the ecological validity of research on punishment [51,52,98], and specifically for drawing accurate conclusions about the prevalence and use of punishment in the field. Indeed, recent work has taken promising steps in this direction, showing that people condition their punishment tactics on their relatedness and emotional closeness to victims. Consistent with the idea that people have more to gain from deterring offences that are harmful to interdependent others, offences against close relationship partners evoke similar responses to self-relevant offences, eliciting more costly confrontation than offences against strangers [93,99].
Moreover, offences harming kin are met with harsher punishment than offences harming friends, pointing to the role of relatedness and special obligations towards family in determining punishment [93,100,101]. In a similar vein, people might condition their punishment tactics on their relationship with offenders, especially when violations are self-relevant.⁵ Specifically, individuals may prefer directly confronting offenders whom they value highly rather than gossiping about them or ostracizing them [51]. This prediction rests on several considerations. First, there is more to gain from adjusting the behaviour of highly valued partners with whom one shares future interdependence. By contrast, if there is no expectation of future interactions with offenders, little can be gained by investing time and effort to recalibrate their behaviour. Additionally, there is less uncertainty about how close others will respond to punishment. Finally, gossiping about valued partners can backfire if they find out, while ostracizing them can damage otherwise important social ties [58,68]. Importantly, additional prescriptions to intervene against offences perpetrated by one's family or allies may apply in societies with strong kinship ties and norms of corporate responsibility [103].
--- Factors that shift retaliation costs
When making decisions about how to intervene against offences, people should not only consider the potential benefits in terms of recalibration, but also weigh the costs of receiving retaliation from offenders and their allies. Arguably, such costs of intervention differ depending on the severity of offences, with more severe offences being associated with a higher risk of retaliation. This is because offenders who have engaged in more morally wrong or harmful offences may be perceived as more willing and able to retaliate if punished.
Nevertheless, previous work has found that costly punishment increases with the severity of offences, with people imposing harsher punishment against transgressions that are perceived as more severe [3,72], or transgressions that deviate more from group norms of cooperation [6]. However, studies documenting how people use a broader array of intervention tactics in the field reveal that severe offences are more often punished indirectly via gossip, ostracism or withdrawal of help [51,104] (royalsocietypublishing.org/journal/rstb Phil. Trans. R. Soc. B 376: 20200289). Another, more direct cue for assessing the risk of retaliation is the victim's relative power compared with that of the offender. Power can take many forms, including one's privileged access to resources and the provision of benefits and costs, one's asymmetric control over one's own and others' outcomes, one's influence derived from prestige, and one's formidability based on one's strength or other physical attributes [105-108]. Individuals who experience high power relative to offenders-whatever the basis of this power-may be more willing to engage in direct, confrontational punishment [109-112], because they can afford the risk of retaliation. By contrast, individuals who find themselves in an unfavourable power position relative to the offenders are expected to be more cautious against potential retaliation. Consistent with these ideas, people who feel less powerful are more likely to respond to norm violations by gossiping or avoiding the offenders, rather than by directly confronting them [51]. Gossiping about transgressors also allows individuals who are less powerful to recruit punishment from third parties [40,82,87], potentially reducing individual costs of punishment and the risk of retaliation from powerful others.
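As an illustrative summary, the conditional choice of tactics described in the last two subsections can be sketched as a toy decision function. This is purely a sketch under our own assumptions: the 0-1 scales, the threshold value and the function name are hypothetical, not quantities estimated in the studies reviewed here.

```python
def choose_tactic(recalibration_benefit, retaliation_cost,
                  intervene_threshold=0.2):
    """Toy model of conditional punishment-tactic choice.

    Both inputs are on an arbitrary 0-1 scale; the threshold is a
    hypothetical free parameter, not an empirical estimate.
    """
    # If the stakes are low on both dimensions, do not intervene at all.
    if max(recalibration_benefit, retaliation_cost) < intervene_threshold:
        return "no intervention"
    # High recalibration benefits favour direct confrontation;
    # high retaliation costs favour indirect, reputation-based tactics.
    if recalibration_benefit >= retaliation_cost:
        return "direct confrontation"
    return "gossip or ostracism"
```

In this sketch, an offence against a valued, interdependent partner would raise recalibration_benefit, while a more powerful offender would raise retaliation_cost, reproducing the qualitative predictions above.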
In addition to the factors discussed earlier, the presence of bystanders may also influence the costs of third-party intervention against offences, and thus the likelihood of intervention. For example, contrary to the notion of diffusion of responsibility, a quasi-experiment conducted on a train suggests that the silence norm is more likely to be enforced when there are more passengers in a train car [49]. This could be because punishers expect others to take their side if the situation escalates, such that there are lower retaliation costs particularly when there are more bystanders present. Such a situation represents a volunteer's dilemma in which a single individual can maintain the public good of silence in the train by punishing the norm violator. Future research needs to consider how the presence of others influences not only the probability that someone intervenes, but also the use of specific types of intervention tactics. It is plausible that in situations that resemble the volunteer's dilemma, individuals observing a norm violation first use gossip to coordinate and then rely on only one person to directly confront an offender.
--- Emotions as proximate motivators of punishment
So far, we have focused on the cognitive processes-whether conscious or unconscious-underlying decisions about how to punish offences. Importantly, though, punishment is often motivated by negative emotions, including anger, disgust and contempt [97,114-116]. Recent work has emphasized that different emotions may serve unique social functions [68,96,117], with anger and disgust motivating distinct responses to offences. While anger is associated with approach-oriented, aggressive behaviours [69,118,119], disgust has been seen as motivating social avoidance [96,117,120] and efforts to signal condemnation to third parties [94,121].
Consistent with these ideas, multiple vignette studies have shown that anger in response to offences is specifically associated with inclinations to punish offenders directly, via physical and verbal confrontation [93,94,122]. By contrast, moral disgust in response to the same offences is associated with inclinations to punish offenders indirectly, via gossip and ostracism. These findings are corroborated by studies on punishment in daily life settings, showing that anger predicts both direct and indirect punishment responses, whereas disgust is specifically associated with gossip and ostracism [51]. One potential explanation for why disgust motivates gossip against offenders is that sharing information about norm violations can effectively recruit subsequent ostracism from the receivers against the targets of gossip [82,87].
--- Addressing current debates and carving future directions
In the preceding sections, we have drawn distinctions between multiple direct and indirect tactics to intervene against offences-physical and verbal confrontation, gossip and ostracism. In what follows, we describe how these distinctions can help address ongoing debates regarding the prevalence and functions of punishment, as well as the reputational consequences of third-party intervention against offences. First, various empirical studies have cast doubt on the generalizability and ecological validity of laboratory findings regarding the prevalence and use of punishment. Experimental research has shown that punishment may only promote cooperation under certain favourable conditions, such as when its cost-to-fine ratio is low [123] or when retaliation is not possible [75,76]. Further, reviews of the ethnographic record [41,43], as well as recent survey studies [51,52], suggest that punishment-especially as commonly operationalized in laboratory experiments-is rarely observed in the field.
Delineating between direct punishment and indirect reputation-based tactics can facilitate comparisons between the laboratory and the field and ensure that experimental findings can be generalized to equivalent real-world situations. Conversely, pinpointing which real-world intervention tactics are of empirical interest can inform decisions about how to operationalize punishment in the laboratory. Second, delineating between direct and indirect tactics can contribute to our understanding of when and how punishment serves cooperative versus competitive goals. Confrontational punishment is largely evoked by offences that harm oneself or close others [51,93,94,97]; it involves aggressive inclinations, such as anger and revenge motives [30,31,116,124], and it can lead to feuds [75,76]. Thus, while confrontational tactics may be favoured in the context of status competition, using them among peers is often discouraged and, in some cases, even proscribed to ensure harmony within communities [125]. In real-world settings, individuals often seem to prefer indirect, reputation-based tactics to deal with free-riding and other norm violations [50,51]. Gossip, in particular, may be preferred over confrontational punishment because it allows individuals to first communicate about norms of acceptable behaviour, and then coordinate their behaviour with others, thus lowering the costs of intervention and the risk of conflicts. Third, distinguishing between direct and indirect tactics can help address debates about the reputational consequences of third-party intervention against offences. That is because the reputational consequences of intervention seem to vary depending on the tactics that are used to impose costs on offenders. Experimental studies show that direct punishment, especially when imposed by third-party observers, effectively signals trustworthiness [126,127].
However, when other means of intervention are available, direct punishment loses some of its reliability as a signal of cooperativeness [128-130]. Further, other findings cast doubt on the idea that second- and third-party punishment signals trustworthiness and show that generous but not punitive individuals tend to be trusted more [131]. Especially when intervention takes the form of physical or verbal confrontation, it may even backfire, because confrontational individuals appear aggressive and are seen as motivated by selfish concerns [121,132]. More work is needed to understand how observers perceive and judge intervention via reputation-based tactics of gossip and ostracism.⁶ This is an especially interesting avenue for future work because different societies may deem different ways of intervening against offences as more or less appropriate. A recent cross-cultural study of meta-norms has provided some initial evidence that the appropriateness of confrontation, gossip and ostracism differs across societies [125]. Undoubtedly, understanding the ecological and cultural origins of variation in such meta-norms is a fascinating puzzle to be addressed by future research on cooperation. Before closing, we turn to three additional recommendations for future research based on the work that we have reviewed here. First, our analysis suggests that the same intervention tactic can be used to achieve multiple purposes. To illustrate, gossip can be used for general deterrence and can also be used to facilitate partner choice. Similarly, ostracism can represent an effort to recalibrate someone's behaviour, but it is often used merely to navigate away from offenders and toward more valuable relationship partners. Future research on gossip and ostracism would benefit from studying when and how these reputation-based tactics are used for partner recalibration versus partner choice.
Second, as noted earlier, third-party intervention in natural settings occurs within a rich relational context where the offender, the victim and the third-party observer have varying degrees of interdependence with each other. Different structures of interdependence may affect when and how third parties choose to intervene against offences [98,108,133]. For example, people may be more prone to intervene against offences that harm someone with whom they are mutually dependent [98], whereas they may be less prone to intervene against offences that harm someone they have conflicting interests with. Considering interdependence relations and the properties of the networks that people are embedded in is key to understanding third-party intervention. Likewise, our understanding of when people intervene against offences in ecologically valid situations can be improved via the use of a variety of field methods. We believe that by revisiting the rich ethnographic record and by using novel experience sampling techniques, future work can document a variety of intervention tactics in real-world settings and provide valuable insights into the factors that shape the use of confrontation, gossip and ostracism across social and cultural contexts.
Data accessibility. This article has no additional data.
Authors' contributions. C.M. and J.W. wrote and revised the manuscript.
Competing interests. We declare we have no competing interests.
Funding. C.M. acknowledges IAST funding from the French National Research Agency under grant no. ANR-17-EURE-0010. J.W. acknowledges funding from the National Natural Science Foundation of China.
Acknowledgements. We thank Jorge Peña for valuable comments on an earlier version of this manuscript.
--- Endnotes
1 When describing the functions and mechanisms underlying direct and indirect tactics, we treat punishment as having potential long-term benefits for punishers. Whether and how punishment that involves fitness costs can evolve is debated.
Addressing this debate is beyond the scope of the current review and we refer interested readers elsewhere [27-29].
2 Because we focus on peer-imposed punishment, we do not consider in detail other direct punishment tactics, such as fine imposition and imprisonment. Although these punishments clearly impose direct costs on offenders, in terms of reducing their wealth or compromising their freedom, they are typically decided upon and implemented by formal authorities and their representatives.
3 It is worth noting that in this study the rate of direct punishment is much lower in situations that more closely resemble experimental tasks typically used to study punishment in the laboratory.
4 Relatedly, other work suggests that direct punishment is more effective at promoting cooperation when combined with communication [70,71].
5 The relationship value of offenders might affect third-party punishment differently from what we suggest here.
6 On the one hand, gossip and ostracism can be used for prosocial purposes and may therefore have positive reputational consequences. On the other hand, gossip and ostracism can have deleterious effects on their targets and may therefore be negatively perceived by observers.
Introduction
The mere notion of a universal computing mechanism raises philosophical inquiries about the ultimate feasibility of building machines of human-level intelligence [4,11,22,23]. One of the fathers of computability theory and the first to formalize the idea of universal computation [29], Turing began to ponder, soon after his seminal paper, what it means for a machine to be intelligent. His efforts culminated in the now famous Turing test [30] and the rise of the field of artificial intelligence. Concerns about the ethics and morality of computing machinery followed not long after, although initially limited to the realm of science fiction. Acclaimed writer Isaac Asimov famously proposed his Three Laws of Robotics around the same period [2], in an effort to encode norms into artificial intelligence in such a way as to prevent the rise of malicious or adversarial machines; even then, the generally black-box nature of how the norms were encoded into the machine's brain was taken to imply that it could generate unpredictable behaviours. Following an initial period of optimism about the future of artificial intelligence, when leading scientists including Simon and Minsky [25,18] predicted that Artificial General Intelligence would be possible within the timespan of a generation, the field was struck by a wave of pessimism which lasted for decades and was later known as the AI winter [8]. During that period, ethical concerns about AI subsided inside the CS community, becoming more restricted to the worlds of science fiction writers, philosophers and social scientists. However, impressive machine learning results since the early 2010s are possibly turning this picture upside down faster than the computing community and the general public can cope with such changes, as pointed out by groups of experts from several leading AI countries [10,19,27].
In the timespan of half a decade, the world has seen machine learning applications progressively spread their roots into most aspects of our daily life, with smartphone intelligent personal assistants [26], targeted advertising in social networks [28], face recognition software [1] and self-driving cars [14]. This growing phenomenon raises concerns about the possibility of securing our freedom and our privacy in the face of such an interconnected and intelligent ecosystem [19], as well as about the extent to which we can actually trust the many algorithms in command of our daily relationship with technology not to manipulate us into making targeted decisions. Another pressing concern is the future of automation: will intelligent machines replace humans in the same way that automated machines took the jobs of craft workers following the industrial revolution? Moshe Vardi suggests the troubling observation that while automation is certainly eliminating traditional jobs, there is no evidence that emerging technologies create enough new jobs to compensate for those losses [31]. Famous technology entrepreneur Elon Musk has defended the notion of universal basic income as a possible solution for the difficulty in distributing the wealth produced by intelligent machines, a point also raised by other influential businessmen; Musk has further claimed that AI poses an "existential threat to humanity" [12,33]. However, calls for regulating AI are often motivated by confusion between the implications of AI science and the hypotheses raised in science fiction, as explained, for instance, in [12]. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems has identified four general principles that should "eventually serve to underpin and scaffold future norms and standards within a new framework of ethical governance": (1) human benefit, (2) responsibility, (3) transparency and (4) education and awareness [7].
As daily life faces increasing entanglement with information technology, it is up to AI researchers to provide safety guarantees to an increasingly anxious public. This paper contributes to both quantifying and understanding to what extent AI research has responded to such ethical concerns over the last decades. In particular, we are interested in how the voicing of ethical concerns by the AI community has evolved over time, and how well this process reflects the evolving demands of our society. The remainder of the paper is structured as follows. Next, we briefly introduce the main ethical concerns that have resulted from recent debates on ethics in AI. The topics raised in these works serve as the basis for our analyses. We then describe our methodological research steps and analyse the results. Finally, we conclude and suggest further research directions.
--- Background and Related Work
There are a number of ethical concerns and resulting challenges of immediate relevance faced by AI researchers [23]. For instance, face recognition software has been on the rise in recent years, and is nowadays used for everything from organizing your digital photobook [15] to predicting criminal suspects [16]. The ethical validity of these technologies was called into question by the recent discovery of the embarrassing phenomenon of machine bias: the process by which personal preconceptions of AI engineers can leak into projects in which they are involved. This delicate situation is perhaps best illustrated by instances of algorithmic racial bias such as Google Photos classifying dark-skinned people as gorillas [13] or intelligent programs suggested to be negatively biased against black prisoners [1]. Google's successful DeepMind team [24] has shown that machine learning based systems can achieve superhuman performance in the challenging domain of game playing, in which algorithms were trained by 'supervised learning from human experts' and 'reinforcement learning from self-play'.
Rossi has pointed out that humans and machines will have to reach common agreement on collective decisions, either by consensus or by negotiated compromises, when acting in a common environment [21]. Researchers have also revived debates concerning the controversial field of physiognomy, with many people asking whether artificial intelligence even should try to classify people's sexual orientation according to their facial features [32]. Among the many challenges identified in [27], better transparency, interpretability and explainability of AI technologies [4,9] would lead to improved acceptance of AI technologies in society. In addition, in order to increase public confidence in AI, algorithms and systems must be made accountable; AI professionals are already seen to a certain extent as responsible for their actions.¹ As the prominence of artificial intelligence and particularly machine learning systems in our society rapidly increases, a large number of ethical concerns become pressing. Addressing these issues is a problem in itself, as public awareness about the nature and operation of machine learning systems seems to be fairly limited. When asked about the topic, as few as 9% of the participants declared having heard of the topic "machine learning" and only 3% said they knew a great deal or a fair amount about the field. By contrast, 76% had heard of computers that can recognize speech and answer questions, and 89% had heard of at least one of the eight examples of machine learning used in the survey [27]. This possibly suggests that people are generally familiar with the applications of machine learning while ignoring the fundamental principles behind them.
--- Ethical Concerns Impacting Artificial Intelligence and Machine Learning
One of the oldest and most prominent concerns impacting automation is the replacement of the human workforce by intelligent systems.
This is a delicate topic, with people tending to disagree about where to draw the line concerning the adoption of robots in the workplace. On the one hand, people are content with robots replacing human workers in positions which could be considered harmful or dangerous, but at the same time the use of robots in personal or caring roles is viewed disfavorably due to the fear of losing human-to-human contact [6]. In a study conducted by the Royal Society, public opinions about automation by machine learning systems were also mixed [27]. On the positive side, people think that machine learning systems could be more objective than human users, helping to avoid cases of human error which arise when decision-makers are tired or emotionally vulnerable. They also believe that machine learning systems could be more accurate than human professionals, for example in conducting medical diagnoses. The prospect of automation bringing efficiency to the public sector is viewed favorably, as well as its potential to catalyze economic growth and tackle large scale societal challenges such as climate change. Nevertheless, people fear that machine learning can lead to physical harm to human beings, for example in accidents involving autonomous vehicles. The replacement of humans by machines in the workplace inspires fear of unemployment as well as of over-reliance on machines to make diagnoses. The issue of human replacement was raised spontaneously and frequently over the course of the study, suggesting that it is a sensitive matter for the public. The employment of ML in the automation of key services raises concerns about the effects of depersonalization and consumer misdirection. People feel that, lacking qualities such as human empathy and personal engagement, ML systems could have a depersonalizing effect on the delivery of key services.
There is the fear that ML-powered targeted ads could mislabel or inadvertently stereotype consumers, and that the prominence of ML on the Internet could create an algorithmic bubble which would filter out challenging opinions, experiences or interactions [27]. Privacy is a sensitive and controversial topic, with people's levels of concern about data privacy generally varying according to the circumstances [27]. The issue is further complicated by the potential of ML to uncover sensitive relationships from limited data, as suggested by a PNAS study showing that a list of attributes including sexual orientation, ethnicity, religion, political views, intelligence and gender can be inferred from publicly accessible digital records such as Facebook likes [17]. The take-home lesson is that even if sensitive attributes are explicitly removed from the training data, the remaining attributes can still link to them. A recent concern is that of machine bias, which has received increasingly more attention as trained statistical models rapidly become the default in various applications. A number of studies have suggested that ML can fall victim to the same prejudices, stereotypes or biases possessed by their creators/programmers, with implications for racism and sexism in our society [13,1]. Intelligent systems which become negatively biased against minorities because of ill-designed training sets are bad enough, but we should also consider that even when machine learning uncovers a valid association, its use in recommendation systems may be controversial. In the age of autonomous vehicles, one of the most pressing concerns becomes that of accountability. If a self-driving car is involved in an accident, who should bear the blame? In a more general sense, who should be accountable when machine learning systems go wrong?
Many AI models effectively become black boxes upon training, and their methods and functioning become difficult to interpret: because the underlying algorithms of ML systems learn from training data, simply knowing the underlying program is different from knowing which features it will weight most heavily. It is somewhat accepted that ML systems should be judged by their accuracy, and that ML systems which are more accurate than their human counterparts should be considered for replacement. But it could also be argued that if the decisions and predictions at hand have a significant impact, then understanding how they were computed is possibly more important than higher levels of accuracy.
--- Methodology
To achieve a measure of how much ethics in AI is discussed, we carried out extensive analyses of the mainstream AI venues. In our experiments, we searched for ethics-related terms in the titles of papers in flagship AI, machine learning and robotics conferences and journals. The terms we searched for were based on the issues exposed and identified in [3,5,27], and also on the topics called for discussion in the First AAAI/ACM Conference on AI, Ethics, and Society. The ethics keywords used were the following: Accountability, Accountable, Employment, Ethic, Ethical, Ethics, Fool, Fooled, Fooling, Humane, Humanity, Law, Machine bias, Moral, Morality, Privacy, Racism, Racist, Responsibility, Rights, Secure, Security, Sentience, Sentient, Society, Sustainability, Unemployment and Workforce. The initial list was larger; however, during a first analysis of the data we found that some of the candidate keywords matched too many articles in which they were used in ways unrelated to ethics in AI research.
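A minimal sketch of the title-matching step described here (our own reconstruction, not the authors' code; the matching rules, such as case-insensitive whole-word matching, and the reduced keyword subset are assumptions):

```python
import re

# Illustrative subset of the ethics keyword list above; matching is
# assumed to be case-insensitive and on whole words/phrases only.
ETHICS_KEYWORDS = ["accountability", "ethics", "machine bias",
                   "moral", "privacy", "racism"]

def matches_keywords(title, keywords=ETHICS_KEYWORDS):
    """Return the keywords that occur as whole words/phrases in a title."""
    return [kw for kw in keywords
            if re.search(r"\b" + re.escape(kw) + r"\b", title, re.IGNORECASE)]

titles = [
    "Machine Bias in Recidivism Prediction",
    "Deep Learning for Image Segmentation",
    "Privacy-Preserving Federated Learning",
]
hits = {t: matches_keywords(t) for t in titles}
```

Under this scheme, papers whose titles match a keyword would still pass through the manual context filter described next.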
Examples include the keywords control and controllable in robotics venues, whose use is generally tied to the context of control systems, and the keyword social, which mostly appeared as part of "social networks"; such keywords were therefore excluded from the analyses. After identifying these keywords, we filtered the results further by manually removing papers with keyword matches whose context was not ethics-related. If we want to assess the level of attention or relevance given to ethical issues by the AI research community, it is necessary to have some form of baseline. With this in mind, we proposed two additional keyword sets, one encompassing classical AI terms such as reasoning, planning and learning, and the other covering trending topics such as convolutional neural networks, deep learning and SLAM. By comparing the evolution of the frequencies with which keywords from these three categories match paper titles, one can gain insight into what the AI and robotics research communities have prioritized over time. The classical and trending keyword sets were compiled from the areas covered in the most cited book on AI, by Russell and Norvig [22], and by curating the terms that appeared most frequently in paper titles over time in the venues. The keywords chosen for the classical category were: Cognition, Cognitive, Constraint satisfaction, Game theoretic, Game theory, Heuristic search, Knowledge representation, Learning, Logic, Logical, Multiagent, Natural language, Optimization, Perception, Planning, Problem solving, Reasoning, Robot, Robotics, Robots, Scheduling, Uncertainty and Vision.
The curated trending keywords were: Autonomous, Boltzmann machine, Convolutional networks, Deep learning, Deep networks, Long short term memory, Machine learning, Mapping, Navigation, Neural, Neural network, Reinforcement learning, Representation learning, Robotics, Self driving, Self-driving, Sensing, Slam, Supervised/Unsupervised learning and Unmanned. Since abstracts in text form were available for a smaller number of papers, as a way of validating that our results would remain true had the corpus analysis been made wholly on abstracts, we measured, over the papers with textual abstracts available, the conditional probability that a word appears in a title given that it appears in the abstract. This was done after filtering stopwords, separately for the keywords that are not ethics-related and for those that are; we call these probabilities P_K and P_E respectively. We observed that P_E was larger than P_K, with P_E = 11.53% and P_K = 8.71%. Put this way, if we count occurrences only in titles, we can expect to under-sample ethics keywords less than we under-sample the rest of the keywords; thus, if we identify a gap where ethics keywords appear less often in titles, this gap would only be intensified if we expanded the analysis to abstracts. A simple way to see this is that, given the measured numbers of occurrences of ethics-related keywords #E_m and non-ethics-related keywords #K_m, we can expect their true values #E_t and #K_t to satisfy #K_m ≈ P_K · #K_t and #E_m ≈ P_E · #E_t. Thus, if P_E > P_K, one can expect that #K_m/#E_m < #K_t/#E_t; that is, the proportion of non-ethics-related keywords would only increase if all abstracts were considered and the probabilities stayed the same. --- Experimental Analyses and Results The following statistics were computed on a dataset of 110,108 papers, encompassing 59,352 conference and 50,756 journal entries.
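Before turning to the results, the title-versus-abstract sampling argument above can be made concrete with a small numeric sketch. P_E and P_K are the values reported above; the "true" occurrence counts are hypothetical, chosen only for illustration:

```python
# Sketch of the under-sampling argument for title-only counting.
# P_E and P_K are the conditional probabilities reported in the text;
# the "true" occurrence counts below are hypothetical.
P_E = 0.1153  # P(keyword in title | keyword in abstract), ethics keywords
P_K = 0.0871  # the same probability for the non-ethics keywords

E_true, K_true = 100, 10_000  # hypothetical true abstract-level counts
E_meas = P_E * E_true         # expected matches when scanning titles only
K_meas = P_K * K_true

# Since P_E > P_K, ethics keywords are under-sampled less, so the
# measured non-ethics/ethics ratio understates the true ratio: any gap
# found in titles would only widen if abstracts were included.
assert K_meas / E_meas < K_true / E_true
```

Under these assumed counts, the measured ratio comes out around 76 to 1, against a true ratio of 100 to 1, illustrating why a gap observed in titles is a conservative estimate.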
The experiments and results summarized here are stratified into three groups: the AI group contains papers from the main artificial intelligence and machine learning conferences such as AAAI, IJCAI, ICML and NIPS, and also from both the Artificial Intelligence Journal and the Journal of Artificial Intelligence Research. From the statistics for each keyword we also compute the total number of matches, which is averaged over all samples. For example, the y-axis of Figure 1 corresponds to the average number of keyword matches throughout all publications of the same venue per five-year interval. Figure 1 shows the evolution of keyword frequencies for some of the leading AI and robotics conferences. While the trend for AAAI and IJCAI suggests a growing interest in ethics-related themes on the part of the AI community, the data for NIPS, ICML, ICRA and IROS are not conclusive. The scale of keyword frequencies, ranging up to only 0.012, further suggests that ethical concerns receive little attention in these venues. Computing journals seem to devote more attention to these issues, with up to 0.08 of paper titles matching ethics-related keywords, as Figure 2 shows. When ethics-related keyword frequencies are compared with those of classical or trending AI terms, we get a possibly troubling picture. The supremacy of consecrated computing topics in these venues is to be expected, but Figure 3 shows the extent to which popular technologies such as deep learning, Boltzmann machines, convolutional networks and self-driving cars overshadow the ethical concerns expressed in paper titles of the top AI conferences. The peak in the trending curve in the late 1980s is explained by the neural network developments of that time, and one can see that the same terms have been on the rise again since the early 2010s, although unfortunately this is not accompanied by a substantial increase in ethical concerns.
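The per-interval frequency statistic described above can be sketched in a few lines. The paper records and the reduced single-word keyword set below are hypothetical, for illustration only; the actual study also used multi-word keywords such as "machine bias", which a simple word-set intersection would not catch:

```python
from collections import defaultdict

# Hypothetical, reduced keyword set (single-word matching only).
ETHICS_KEYWORDS = {"ethics", "ethical", "privacy", "accountability"}

def matches(title, keywords):
    """True if any keyword appears as a word of the title."""
    return bool(set(title.lower().split()) & keywords)

def keyword_frequency(papers, keywords, bin_years=5):
    """papers: iterable of (year, title) pairs.
    Returns {interval_start_year: fraction of titles matching}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for year, title in papers:
        interval = year - year % bin_years  # e.g. 2016 -> 2015
        totals[interval] += 1
        hits[interval] += matches(title, keywords)
    return {i: hits[i] / totals[i] for i in sorted(totals)}

# Hypothetical paper records:
papers = [
    (2016, "Privacy preserving deep learning"),
    (2017, "Convolutional networks for vision"),
    (2018, "Ethical considerations in reinforcement learning"),
    (2019, "A heuristic search planner"),
]
print(keyword_frequency(papers, ETHICS_KEYWORDS))  # {2015: 0.5}
```

Running the same function with the classical and trending keyword sets over the same paper list yields the comparable per-interval curves plotted in the figures.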
The data for robotics conferences shown in Figure 4 suggest an even larger gap between ethics-related topics and trending technologies. For AAAI and NIPS we were able to collect statistics about keyword frequencies in paper abstracts as well as their titles. Figure 5 compares the evolution in the frequency of ethics-related keyword matches for both conferences, once again suggesting that perhaps too little attention is devoted to these topics by two of the leading AI venues. Incorporating abstracts into our corpora yields almost no noticeable difference in match frequencies, with AAAI and NIPS frequencies peaking close to 0.01 and 0.004 respectively towards the end of the current decade. Tables 2 and 3 illustrate a more complete picture of the data collected and analyzed in this paper. Notice that some years in these tables have been removed due to the absence of keyword matches or papers in those years. --- Conclusions In this paper, we carried out an investigation of the long-term prominence of ethics-related research in flagship AI venues. In order to do so, we performed corpora analyses on a large number of top artificial intelligence, machine learning, and robotics conferences and journals. The focus on ethical consequences and implications of AI has been on the field's research agenda since its dawn. However, specific interest in ethics-related research topics has not been consistent over the decades. Our experiments identified relatively little attention by the AI community to the ethical consequences of AI over the decades, as shown by our data analyses. One could argue that there have been seminars and smaller workshops on particular topics associated with ethics in AI and related areas, which would contradict the low percentage and absolute numbers of ethics-related research papers in AI venues. However, our results show that over the last decades ethical issues have not been present in the main tracks of the flagship AI venues.
Although workshops and smaller events may raise awareness among researchers and professionals, given the relevance and prominence AI technology has achieved in society, one can argue that ethics-related research should perhaps have dedicated tracks alongside the technical content of the leading AI, machine learning and robotics venues. Even though the prospects of achieving artificial general intelligence and the singularity still seem far on the horizon, the ever-expanding influence of intelligent systems in our society strongly suggests that ethics should be very much a present-day concern for AI research, perhaps more so today than at any other point in the history of the field. In addition, the development of AI systems and tools raises several issues related to fairness, accountability [23] and justice [20]. As clearly identified by the experts in the Royal Society report [27], public concern about the transparency, accountability and consequences of AI in general, and machine learning in particular, requires that both current and future researchers take into account the ethical consequences of their research. In this context, our work has contributed not only to identifying the many faces of ethics in AI research over the years, but has also shown that current flagship AI venues and researchers still dedicate a limited amount of their research focus to ethics in AI, machine learning and robotics. The identification of relevant research topics, or of the relative lack of attention to them, opens several opportunities and challenges for the AI community, which will contribute to the development of accountable, sustainable and ethical systems and technologies with a positive impact on human life and society. The societal demand for transparency and interpretability of AI systems also requires increasing awareness from the research community.
We believe this research contributes toward these aims by providing experimental evidence of the historical evolution of ethics in AI research.
Recent developments in AI, machine learning and robotics have raised concerns about the ethical consequences of both academic and industrial AI research. Leading academics, businessmen and politicians have voiced an increasing number of questions about the consequences of AI not only for individuals, but also for the future of work and employment, its social consequences and the sustainability of the planet. In this work, we analyse the use and occurrence of ethics-related research in leading AI, machine learning and robotics venues. In order to do so we perform long-term, historical corpus-based analyses on a large number of flagship conferences and journals. Our experiments identify the prominence of ethics-related terms in published papers and present several statistics on related topics. Finally, this research provides quantitative evidence on the pressing ethical concerns of the AI community.
Introduction Social pedagogy has a long history in mainland Europe, but is less well known in the UK. In the last few years, various social services in the UK have piloted using social pedagogy as a way of improving practice, particularly in residential childcare. Cameron explained that the growth of social pedagogy in relation to residential childcare relates to a policy focus on improving quality of life matched with a relative lack of well-grounded theoretical approaches and qualifications for this work in the UK. Residential care or group care for children looked after by the state is most often used for older children and young people in the UK where kinship care or foster care is either unsuitable or unavailable; there has also been a decline in the use of residential care in the UK since the 1970s due to a preference for family-based care and negative perceptions about the quality of residential care. More recently, in Scotland, social pedagogy training has been piloted with people with learning disabilities, based on the experience within Camphill Communities and supported by the Scottish Government strategy on improving quality of life for people with learning disabilities. In reviewing ten of these pilots in residential childcare, Cameron found the results were generally positive, although not always conclusive, and often the implementation of social pedagogy was challenging. We present a critical review of existing evaluations of social pedagogy in the UK, supplemented by insights from our experience as independent evaluators of a social pedagogy pilot, outlining the challenges and opportunities of evaluating social pedagogy. We argue that the main challenges relate to: defining social pedagogy; measuring the baseline prior to implementing social pedagogy; understanding individual and organisational change; measuring outcomes; and applying an appropriate approach for the evaluation.
We conclude with recommendations for those intending to evaluate social pedagogy, and similar initiatives, in the future. Our critical review of existing evaluations is based on published reports and Cameron's recent review of the findings of social pedagogy evaluations, including unpublished literature. It is informed by a range of other literature on social pedagogy and evaluation research methods. We supplement this review with insights from our own experience as independent evaluators of a social pedagogy pilot for people with learning disabilities based at Camphill Communities in Scotland (the Camphill movement originated in the 1940s in Scotland; it involves intentional communities where people live and work together, based on principles of mutuality, respect and learning). The full methodology and findings from our evaluation are available in the evaluation report. Our evaluation was based on an action research approach, which we discuss in the section on different approaches to evaluation below, and involved mixed methods, including observation, qualitative focus groups and interviews, surveys, analysis of outcome data collected by staff, organisational documents and staff members' reflective diaries. --- The challenges of evaluating social pedagogy --- Defining social pedagogy One of the key challenges for evaluating social pedagogy is in defining what 'it' is. Hämäläinen explained that social pedagogy is an ambiguous concept, difficult to define, with varying theoretical conceptions and country-specific ways in which it is applied. He stated: 'There is no unanimity on the nature of social pedagogy, no universal definition, no common theory and no uniform establishment for practice procedures.' Social pedagogy is often defined as 'education in its broadest sense' and, in relation to children, as having a concern for their 'upbringing'. People who practise social pedagogy take an educational approach.
Hämäläinen explained that social pedagogy operates within, and on the borders between, social care and civic education. Smith and Whyte suggest that there is no single 'method' of social pedagogy, but at the core of social pedagogical practice is concern for the individual in their societal context and an emphasis on the transformative potential of education. The value base underpinning social pedagogy shares much in common with strengths-based approaches and social work. Empowerment of individuals is a central theme, with the aim of nurturing individual and collective potential. Essentially, social pedagogy is holistic and humanistic, promoting social functioning and participation in society. Several commentators have explained that social pedagogy cannot be defined simply by what people 'do'. Petrie stated that 'social pedagogy is based on values: it is an ethical practice, not a technique', and Cameron raised concerns that social pedagogy was sometimes seen as a range of 'techniques' in the UK, rather than being treated as a profession. Eichsteller and Holthoff explain that the German concept of 'Haltung' is important for understanding social pedagogy; they translate this as 'ethos, mindset, or attitude'. They argue that social pedagogy is about 'being', that it is 'a skin rather than a jacket', that it is an art form rather than a skill, that it is 'not so much about what is done, but more about how something is done', and that it 'expresses an emotional connectedness to other people and profound respect for their human dignity'. Smith and Whyte describe this mindset as involving an ethical stance in relation to the 'other'. Although this provides some sense of the general meaning of social pedagogy, ambiguity regarding its exact definition makes it challenging to operationalise within an evaluation or to know when it is being practised appropriately.
If social pedagogy is understood as a 'mindset' or taking on certain values, then an evaluation could focus on ascertaining the values and mindset of practitioners before and after training, to determine whether they have taken on an appropriate mindset -specifically in relation to their ethical stance and respect for human dignity -and gauge the extent of change. People may change their values at a slower pace than they might change their practices, so these would need to be evaluated over a relatively long period of time . By way of comparison, Smith and Spitzmueller applied ethnography to examine the applied practices of milieu therapy, demonstrating that the ethos or 'mindset' of milieu therapy informed the way staff engaged with service users as well as allowing them to 'see' certain everyday practices as inherently therapeutic. Although researchers and theorists often define social pedagogy in terms of 'mindset', some also refer to particular concepts that are used in practice, such as the common third and the 3Ps . The 'common third' refers to those activities that are jointly shared between staff and service users, where the roles of expert and learner are reversed or both parties are learners, thereby involving greater equality in the interactions . The 'three Ps' refers to the private, personal and professional selves of the social pedagogue . This concept may help pedagogues to consider what aspects of themselves to share with others. The professional self involves theories and professional practices regarding others' behaviours that are routinely used in practice, and the personal self involves someone's personality and creative skills, which may also be brought into practice, whereas the private realm relates to those aspects that are only shared with close friends and family and are not appropriate or helpful to bring into practice . 
Social pedagogues are expected to use their 'head, heart and hands', meaning the 'intellectual, emotional and practical' aspects of their selves, so they draw on theoretical understandings of behaviour in their work, while also relating authentically to people, and being involved in practical arts, sporting, cultural or other activities. Practitioners are also expected to have a 'lifeworld orientation', such that they work to understand people's experiences within their own world view, and their engagement takes place within the 'lifespace'. Furthermore, those who deliver social pedagogy training may include specific concepts and practices in their training, including the notions of 'learning zones', the 'diamond model' and non-violent communication. The learning zone model suggests there are three zones: the comfort zone, the panic zone and the learning zone. The diamond model is based on the assumption that everyone has value and the potential to 'shine'. The model includes different dimensions of social pedagogical practice, specifically the emphasis on promoting well-being and happiness, holistic learning, positive experiences, the importance of positive relationships and the goal of empowerment. Non-violent communication is based on the work of Rosenberg, which involves empathetic and non-judgemental ways of relating to others, and recognising and clearly communicating needs and feelings. This suggests that social pedagogy is not only a mindset but also involves drawing on particular concepts in practice. Indeed, Hämäläinen suggests that social pedagogy in the UK has been particularly understood as relating to communication skills in educational settings. As such, evaluators should measure the extent to which practitioners apply these theories and concepts within practice, while being aware of Smith's comments that people's dispositions are of greater importance than their use of certain 'techniques'.
When social pedagogy is introduced into a new setting, the service will already have a particular way of working that includes certain values and practices, albeit these may be implicit rather than explicit, and they are likely to vary across individuals and across different settings within an organisation. These settings may include a strengths-based perspective and reflective practice, both of which are considered important aspects of social pedagogy, without necessarily being based on social pedagogy. Indeed, strengths-based approaches and reflective practice are encouraged within social work and social care. So if social pedagogy is introduced to a setting where strengths-based perspectives and reflection are not happening, this could be seen as simply a case of poor practice, an instance where training on, and the implementation of, these practices would make an improvement, regardless of whether they were 'social pedagogy' or not. This begs the question of whether social work in the UK is very different from social pedagogy. Perhaps 'good' social work might look a lot like social pedagogy? We could imagine social workers who appreciated the inherent good of every person, who engaged in a 'personal' manner, who applied theory in their practice, who used good communication skills, and who were reflective. Such people might be very effective at their jobs, and have many similarities in their practice compared with social pedagogues, while still clearly being social workers rather than social pedagogues. It seems to us that the best way of conceiving of social pedagogy is as 'praxis': the bringing together of theory and practice. Perhaps there is a third aspect: the values. Social pedagogy might best be defined as holding the relevant values, using the relevant practices, and having an approach underpinned by the relevant theories.
Several evaluations of social pedagogy, including ours, found that staff reported gaining a language for describing practices they already used, and learning theory to underpin those practices, without the practices themselves changing. If evaluators are more specific in defining these different dimensions of social pedagogy (the values, theory and practices), this may help to get a better sense of the extent to which the training has changed these aspects, and how far from or close to a social pedagogical 'ideal' the practitioners were at the start and end of the pilot. --- Measuring the baseline regarding practice In order to identify any changes that have occurred during a pilot, it is important to establish a baseline prior to any changes being implemented. This would also allow evaluators to get a sense of how compatible the organisation is with social pedagogy and how far along it is with its practice. This is essential for measuring the change that occurs during the initiative, and also provides helpful information about the context, which may be particularly useful when comparing social pedagogy initiatives that have taken place in different contexts. In this regard, a social pedagogy trainer told us that certain contexts may provide more 'fertile ground' for social pedagogy to 'grow'. If the context is too dissimilar from social pedagogy, then initiatives may fail, not because the practice in itself is ineffective, but because the people involved were unable to overcome the barriers to change, such as people being unwilling or unable to engage with social pedagogy, or organisational cultures that made it difficult to innovate or that otherwise conflicted with social pedagogy. This issue is not necessarily specific to social pedagogy, but may come about wherever people are encouraged to change their ways of working.
Previous evaluations have found that certain aspects of the context may make it more difficult for practitioners to implement and practise social pedagogy. These include negative attitudes towards social pedagogy; requiring social pedagogues to practise in line with existing policies and procedures; organisational problems, such as a lack of residents or funding; and risk averse policies and procedures . Conversely, researchers have also found that practitioners and managers may report that social pedagogy initiatives have had little effect because they claimed to already be practising in ways that resembled social pedagogy . By having a good understanding of the baseline, evaluators will be better able to measure the change that has occurred, and this should provide a better understanding regarding any changes or lack of change. Moreover, it may help when comparing across different contexts and evaluations, so that researchers and practitioners can get an idea of where and when social pedagogy initiatives might be most effective. --- Individual practice vs. organisational culture There have been different approaches to social pedagogy initiatives in the UK. One approach is to deliver training in social pedagogy to a number of practitioners within an organisation, who are then tasked with spreading this practice more widely , whereas another approach is to bring in social pedagogues from mainland Europe with the intention of encouraging their colleagues to take on a social pedagogical approach . Whatever the approach, the intention was to create change at the level of the organisation. So should the evaluation focus on individual or organisational change? Social pedagogy is often described as being about 'being' , which implies that people are the focus, therefore it would seem appropriate to focus on the way that individuals change their values, knowledge and practice. 
However, staff members do change, so evaluators should also establish the extent to which the 'organisation' is in line with social pedagogy and how this changes over time. Indeed, Eichsteller and Holthoff argue that the successful implementation of social pedagogy requires change across the entire organisation, which they suggest can take years. Examining organisational change could be done through interviews with senior management and the analysis of organisational documents, which Cameron, Petrie and Wigfall suggest are important for social pedagogy, identifying the extent to which the organisation appears to encourage or require values, knowledge or practice relating to social pedagogy. In the pilots, individuals were not only expected to practise in ways consistent with social pedagogy, but were also expected to influence practice more widely in their organisation, a much more challenging goal. Such change is most likely to succeed if key individuals are committed to the change, provide leadership, and guide staff through the process. Where social pedagogues were placed in residential units and expected to create change, several found it very difficult to encourage their colleagues to work in social pedagogical ways, and some gave up and left the organisation. In our evaluation, because we took an action research approach, it was important to support the learning experiences of the staff and the implementation of their organisational changes, mostly through engaging in dialogue with them, helping them to reflect on their situation, presenting them with the data we had collected and providing an 'outsider perspective' on the situation. The selective targeting of practitioners for training raises questions about the effectiveness of such training when rolled out to the wider workforce. For instance, practitioners told evaluators Vrouwenfelder, Milligan and Merrell that staff were 'cherry picked' to attend the training.
As evaluators often focused on the effects of the training, this raises questions about the potential effects when the training is rolled out to other staff. If other staff are less open to social pedagogy, they may be less likely to take it on or apply it in their practice, which means it may be less effective. Relatedly, participation in the training may in itself affect the behaviour of staff (so-called 'Hawthorne effects'), which means the apparent benefits may not hold over time or when the training is rolled out more generally. In saying this, however, we heard from practitioners in the pilot we evaluated that even those who were relatively resistant to the training at first became convinced over time. --- Measuring outcomes for service users Several of the evaluations of social pedagogy so far have focused on the process of implementing social pedagogy and/or the effects of social pedagogy training on practitioners. However, if the use of social pedagogy is to be worthwhile, then it should be demonstrated that it has a positive impact for service users. Therefore, evaluators should seek to measure relevant aspects of service users' quality of life before, during and after the pilot period to see what impact social pedagogy has made in terms of benefits for service users. In doing so, the measures ought to be both valid and manageable. In terms of the validity of the measures, they should capture aspects of people's lives that are relevant for them and make sense in the context of the service. These outcomes could be positive, in the sense that the service intends to increase them, such as self-esteem, or negative, in the sense that the service intends to reduce their occurrence, such as violent behaviour. For instance, Berridge et al.
measured positive outcomes such as school attendance, educational progress, effort and attainment, engagement in constructive activities, quality of contact with family, and well-being, and negative outcomes such as 'behavioural problems', aggression, violence, involvement in crime, 'risky behaviour', going missing, drug/alcohol abuse, self-harm and school exclusion. Such a range of measures is useful for establishing the impact that social pedagogy might have. However, are all of these reasonable outcomes for social pedagogy? For instance, should social pedagogy be expected to improve quality of contact with families or reduce drug misuse? If so, how? Moreover, certain measures (e.g., quality of family contact, school exclusion and criminal justice responses) are related to systems beyond the service user or the service, and therefore may have little or no relation to changes in staff practices. As argued by Preskill and Torres, the problem is not simply that organisations do not have enough data (often they have access to large amounts of data) but rather that the nature or availability of the data does not match the needs of the organisation in terms of evaluating their work. Particularly in these cases, evaluators should be clear about the potential connections between social pedagogy and the expected outcomes. In our evaluation, staff were expected to use the 'Outcomes that Matter' system to measure outcomes with the people they supported. Although several staff members did try to use the system in the early stages of the pilot, they found that completing the forms was very time consuming, and they also told us that, in their view, the measure was not very suitable for the people they were supporting. In this regard, Cook and Miller suggest that measuring personal outcomes should involve measuring those things most important to the people involved, and Miller highlights that outcome systems should not be too demanding on staff.
The staff developed their own outcomes measurement tool which they perceived as better suited to the nature of the service, and which mostly involved qualitative comments on the social pedagogical approaches used and outcomes . As highlighted by Miller , 'personal outcomes' can be very specific to individuals, whereas the organisation or service will have its own outcomes that are seen as relevant across all service users. Using a system that places emphasis on outcomes that are particular to the individual service users makes it more difficult to establish the outcomes across groups of service users or at the level of the service. In particular, it does not make it easy to establish the baseline of needs or to measure change across time and it is difficult to turn the qualitative data into statistical measures that would give a sense of the 'amount' of change achieved or the number of service users who had benefited. So while the tool seemed to be an improvement, it was not well suited for making comparisons between the effects of using social pedagogy compared with the existing practice or other practices. Furthermore, because the tool specifically focuses on the use of social pedagogy, this means it would not make sense to use it in cases where social pedagogy is not being used, and indeed it excludes approaches that are not considered part of social pedagogy. In this regard it is worth noting that measuring quality of life outcomes for people with learning disabilities is complex . Defining 'success' may be particularly challenging when supporting people with long-term severe learning disabilities. For future evaluations of social pedagogy as used to support people with learning disabilities, it would be worth exploring the use of systematic measures that are designed and validated for use with this population, such as the Caregiver's Concerns-Quality of Life Scale . 
Service users' views on social pedagogy are obviously an important source of information regarding the effectiveness of this approach. Asking young people to express their views on a service is a sensitive and skilled thing for researchers to do. In our evaluation, this was even more challenging, as many of the service users had severe learning disabilities and often were unable to speak. As highlighted by Gilbert, it is important to treat this challenge as relating to an inadequacy of the research methods commonly used, rather than treating this as a deficit among the service users. He suggests that this work involves time and experience, as well as potentially using creative methods to elicit the views of service users with learning disabilities. In our case, one of our team used an arts-based activity - collage - to engage with the service users and get a sense of their views. In terms of other evaluations, Berridge et al. found it difficult to identify the impact of the pilot based on service users' feedback. Overall, then, service users' views make up a potentially important part of the picture, but they require skill to elicit and may not be definitive. --- Different approaches to evaluation When evaluating social initiatives, evaluators generally want to establish the impact that the initiative has had, particularly in terms of benefits for service users. To do this, evaluators need to gain an understanding of the counterfactual - that is, what would have happened if not for the initiative? There are different ways evaluators can do this, and depending on their general approach to evaluation, they may put more or less emphasis on this in comparison to other aims of the evaluation. For example, different evaluators may place greater or less emphasis on experimental, action research and/or logic modelling approaches.
For experimental approaches, evaluators emphasise the importance of controlling the application of the independent variable; that is, the initiative under scrutiny, in this case social pedagogy. Some researchers advocate randomly allocating service users to the initiative, so that any difference in outcomes could be attributed to the initiative. Setting up and running a randomised control trial is challenging and resource intensive, has practical challenges, and may also be critiqued on theoretical grounds. Some attempts at randomised control trials in children's services have struggled to recruit sufficient participants to draw strong conclusions about the effectiveness of the intervention, and studies on social pedagogy so far have tended to have relatively low sample sizes. Randomised control trials also require clear, quantifiable measures of outcomes and a reliable baseline, the challenges of which were discussed above. Moreover, they require keeping the experimental conditions separate from the control conditions. As highlighted by Smith and Skinner, knowledge of social pedagogy may spread to comparison conditions, even if not intended, and conversely the experimental conditions may be affected when social pedagogues are required to operate in ways consistent with the organisational context, even when these go against social pedagogical practices. Because the conditions necessary for a randomised control trial are so difficult to establish, evaluators who are interested in using a broadly experimental approach are more likely to use a quasi-experimental approach - that is, one where the conditions are not randomly allocated, but nevertheless there are different conditions that may be compared. For instance, Berridge et al.
grouped the houses within a service into four conditions, three of which involved social pedagogy to some extent, although introduced in slightly different ways, and the fourth of which did not involve social pedagogy and functioned as a control group. This approach allowed the evaluators to compare social pedagogy against a control to see what difference it made, and had the added benefit of allowing the different methods of implementing social pedagogy to be compared. However, the number of residents in each condition was relatively small, meaning that even with an overall sample size of 114 residents, it would be difficult for the evaluators to draw strong conclusions about the impact of the pilot. Indeed, the findings were not statistically significant, meaning that the outcomes from the sites using social pedagogy were no better than those of the group that was not using social pedagogy. However, the evaluators highlighted a range of issues in terms of implementing social pedagogy and suggested that more time was needed, which could have allowed a comparison of outcomes from before implementation with outcomes after successful implementation. This would provide stronger conclusions regarding the impact of social pedagogy as such; otherwise it is only a comparison of existing practice with an incomplete or failed attempt to implement social pedagogy. Action research provides an alternative approach to experimental designs. When taking this approach, evaluators try to find out how an initiative is developing, gather data on processes, and discuss this information with the relevant stakeholders. We used this approach in our evaluation, as did Smith and Skinner.
In particular, we met with the staff at key points over the course of the pilot, including: 1) at the start of the evaluation to discuss the purpose of the evaluation and how they would like it to proceed; 2) early in the pilot period to discuss the results of the baseline report; 3) once the social pedagogy training was completed to discuss their action plans for implementation; 4) midway through the pilot to discuss an interim report on their progress; and 5) at the end of the pilot to discuss a draft final report. These 'feedback loops' were intended to be of use to both us and the staff. For us, they ensured we had a good understanding of the services and helped to check the accuracy of our findings and conclusions. For the staff, they provided an independent perspective on their progress and helped pick up on issues that might affect their implementation of social pedagogy. In their 'pure forms', the experimental and action research approaches are mutually exclusive. This is because providing feedback in the way that an action research approach involves might affect the conditions in an experimental design. The key exception to this is where the evaluators discover that the initiative is not being delivered as intended, and the feedback would help to get the practitioners back on track. However, because the ideal conditions for an experimental study are unlikely to be met anyway, using an action research approach may help to address some of the limitations of an experimental design, particularly through helping to address implementation issues. Logic modelling is an approach evaluators can use to help specify the mechanisms of the initiative and their links to potential outcomes. For instance, Smith and Skinner applied a logic model approach in their evaluation, outlining the key aspects of the initiative, including: inputs, activities, outputs, outcomes and impact.
Specifying the different aspects of the initiative in this way may help to gain a better understanding of how the social pedagogy initiative is actually operating. Logic modelling is compatible with both experimental and action research approaches. It may benefit evaluations based on an experimental approach by bringing greater attention to the mechanisms for change, rather than having too much focus on the outcomes, and for action research approaches it may be useful for helping staff to reflect on their activities and how these might link to potential outcomes. --- Discussion and recommendations We have identified some key challenges for evaluating social pedagogy, notably defining terms, measuring the baseline, clarifying the unit of analysis, measuring outcomes, and deciding on an appropriate evaluation approach. We recommend that evaluators approach social pedagogy as praxis, considering three dimensions: values, knowledge and practices. Carpenter has outlined some of the ways in which social work education may be measured using surveys, vignettes and observations, and these approaches would be relevant for evaluating social pedagogy. Practitioners could be asked to respond to vignettes - as Cameron did - which can be rated by trained practitioners to establish how practitioners might respond in a given situation. Researchers, trainers and social pedagogues could work collaboratively to measure this throughout pilot initiatives to identify change over time. This could help establish how receptive an organisation may be to social pedagogy. This should be done before the initiative has begun, and everyone involved should have a realistic sense of what change ought to be achieved over the period of the evaluation. This should be combined with a structured way of analysing organisational documents to identify change at both the individual and organisational levels.
Evaluators should gather measures of service user outcomes using valid tools that are relevant for the service context. In this regard, evaluators and staff may have different views on the uses of these tools, and it may help to have discussions around their different merits - particularly the difference between personal outcomes and outcomes at the level of the service. It would help to look at validated tools and find a way to record outcomes that allows comparison, provides meaningful information to service users and practitioners, and is well integrated with practices. Evaluators ought to be clear about the evaluation approach they are using when evaluating social pedagogy. They can place greater emphasis on the comparison aspect of the research, particularly where trying to have control conditions and make strong claims about causality, or greater emphasis on the action research aspects, particularly in terms of learning and feedback loops. Ideally, researchers can achieve both of these things, making use of comparison conditions as well as sharing ongoing information with staff and gathering their views. Due to the methodological challenges discussed above, the ideal conditions for experimental research are likely to be absent, which means it may always be difficult to draw strong conclusions about the effectiveness of social pedagogy. Using logic modelling - which involves mapping out all aspects of the intervention - may help evaluators and practitioners to get a better sense of which activities led to certain outcomes. Perhaps it would be best for social pedagogy evaluations not to claim to be evaluations of 'social pedagogy' as such. Claiming this risks the conclusion that social pedagogy does or does not work. Rather, they are evaluations of these practices, based on these principles, by these people, with these people, in these contexts. This may be frustrating for some, who might wish to have a clear steer in terms of whether social pedagogy is the way to go.
However, it might also encourage people to pay greater attention to the specific practices under discussion, with consideration of the people and contexts involved, as well as the scope for change over time. This may be more accurate given that, as argued by Cameron, social pedagogy should be understood as a profession rather than a set of techniques, and pilot initiatives tend to involve developing practices in the direction of social pedagogy, rather than the wholesale replacement of current practice with social pedagogy as such. Moreover, further research should be undertaken to clarify the exact nature and definition of social pedagogy, particularly in relation to the notion of 'mindset'. There are a number of issues that we have not discussed, particularly regarding the context of introducing social pedagogy. For instance, in some evaluations the context of austerity, job uncertainties, funding cuts or other organisational changes affected the service and pilot, distorting results. Some evaluators also highlighted tensions between social pedagogues and other staff, particularly around professional approaches and rates of pay. Sometimes there were cultural or language issues with introducing social pedagogues from different countries, and social pedagogues were often younger than other staff and had less experience of working in residential child care. Any evaluation of social pedagogy needs to pay attention to the role that factors such as these will play regarding the effectiveness of any initiatives. Finally, it may be provocative to say it, but perhaps the best way to evaluate social pedagogy is for evaluators to become like social pedagogues. For example, the evaluation is a 'common third': evaluators and staff learning about social pedagogy and its effects together.
Evaluators could use their 'head, heart and hands' to draw on and analyse information regarding the effects of social pedagogy, connect with staff and service users on an emotional level to understand what their world involves, and take part in practical activities with staff and service users to engage in their 'life world' and learn about their practice and experiences. Creating greater congruence between the evaluation methods and the principles of social pedagogy may be the best way to ensure greater understanding between evaluators and practitioners and ultimately to produce fair evidence of its relevance and effectiveness.
In recent years, various social services in the UK have piloted using social pedagogy -a broadly education-based approach to bringing about social change originating in mainland Europe -as a way of improving practice, particularly in residential childcare. Pilot evaluations of initiatives to introduce social pedagogy to children's services have produced generally positive results, although the evidence remains modest and the studies are affected by a range of methodological limitations. In this article, we critically review existing evaluations, supplemented by insights from our experience as independent evaluators for a social pedagogy pilot for services supporting people with learning disabilities, to present an account of the challenges and opportunities of evaluating social pedagogy in the UK. We argue that some of the main challenges relate to: defining social pedagogy; measuring the baseline prior to implementing social pedagogy training; understanding individual and organisational change; measuring outcomes; and applying an appropriate approach for the evaluation. We conclude with recommendations for those intending to evaluate social pedagogy, and similar initiatives, in the future.
--- INTRODUCTION Higher levels of social support have been associated with better health [1][2][3][4]. Prospective longitudinal studies have demonstrated that both structural aspects and functional aspects of social support are associated with better self-reported health [5,6], and lower levels of depression [7], coronary heart disease [8], and mortality [9][10][11]. Thus, there is clear evidence supporting an overall positive influence of social support on health. There have, however, been relatively few studies that have examined the association of social support with mental and physical health by considering the adult life course. To date, most studies have relied on the analysis of two measurement points and have not taken into account possible changes in the association between social support and health with age. The size of social networks and the amount of received social support tend to vary over the life course [12], although it remains unclear whether the association between social support and health strengthens, remains stable, or decreases through the life course [13]. This issue is of importance in the context of aging, as changes in social networks are common in later life. There is, for example, some evidence to suggest that growth of the social network is associated with better self-reported health in older adults [14], but few systematic investigations on this issue are currently available. It is also possible that the association between social support and health is bidirectional; that is, just as social support may affect health, health may have an impact on access to social support or may affect the possibilities to benefit from it. Bidirectional associations could lead to a vicious cycle where poor health contributes to the loss of social support over time.
Lastly, few studies have examined whether structural and functional aspects of social support are associated with health equally strongly and whether structural support shapes the effect of functional support. A better understanding of directionality and of the major aspects of social support in terms of health would contribute to the design of health-promoting interventions. In the current study we examined associations between structural and functional aspects of social support and future mental and physical health from an adult life course perspective using data from the Whitehall II Study [15,16]. To examine possible bidirectional effects we determined whether self-rated mental and physical health was associated with future structural and functional aspects of social support. As previous studies have shown that social support is at least partially socioeconomically patterned [9], we controlled for the effect of socioeconomic status. --- METHODS --- Study sample Participants were from the ongoing Whitehall II Study [15,16], which originally included 10,308 London-based civil servants from 20 civil service departments who were 35-55 years of age at study baseline. Data from baseline and seven follow-up phases were used in the current study. All participants who provided data at the baseline and at the first follow-up phase 2, and at any subsequent follow-up phases 3 to 8, were included. From phases 3, 4, 5, 6, 7 and 8, data were available for 6783, 6094, 5614, 5359, 5330, and 5353 participants, respectively. Ethical approval for the Whitehall II study was obtained from the University College London Medical School Committee on the ethics of human research. Informed consent was obtained from the study participants. --- Measures --- Structural measures of social support Self-reported social network, which was available from phases 2, 5 and 7, and marital status were used as structural measures of social support.
Social network score was obtained from questions 1) on the monthly frequency of contacts with relatives, friends, and colleagues and the frequency of participation in social or religious activities and 2) on the total number of relatives or friends seen once a month or more. The scaled responses were then summed together. Marital status was dichotomized as 1=married/cohabiting; 0=never married, separated, divorced, or widowed. --- Functional measures of social support The following three functional measures of social support were assessed at phases 2, 5 and 7 using the Close Persons Questionnaire [17]: confiding support, practical support, and negative aspects of close relationships. Confiding support measures included wanting to confide, confiding, sharing interests, boosting self-esteem, and reciprocity. Practical support included measures of practical help received, whereas negative aspects of close relationships measured adverse exchanges and conflicts in relationships. Items were rated on a 4-point Likert scale, with higher scores indicating greater negative or positive support. --- Self-reported mental and physical health Mental health and physical health were assessed using the self-administered Short Form Medical Outcomes Survey questionnaires [18,19]. Two subscales, mental health and physical functioning, which represent the main SF-36 subscales for mental and physical health, were used [18]. The mental health subscale includes 5 items assessing aspects of mental well-being, and the physical health subscale includes 10 items assessing the ability to carry out daily activities. A cubic transformation was used to transform the negatively skewed mental and physical health scales, and both scales were then transformed into t-scores (higher scores indicated better health). --- Covariates Age, ethnicity, and socioeconomic status measured as employment grade were reported at the study baseline and were used as covariates in all analyses.
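As a rough sketch of how such scale scores might be computed, assuming a simple additive scheme for the network items (the exact item coding is not reported here) and the cubic-then-t-score transformation described for the SF-36 subscales:

```python
import numpy as np

def social_network_score(contact_freqs, n_seen_monthly):
    """Illustrative scoring only: sum the scaled monthly contact frequencies
    (relatives, friends, colleagues, social/religious activities) with the
    scaled count of relatives/friends seen at least monthly. The actual
    scaling used in the study is an assumption here."""
    return sum(contact_freqs) + n_seen_monthly

def to_t_scores(scale):
    """Cube a negatively skewed scale to reduce skew, then standardise to
    t-scores (mean 50, SD 10); higher scores indicate better health."""
    x = np.asarray(scale, dtype=float) ** 3
    z = (x - x.mean()) / x.std()
    return 50 + 10 * z
```

By construction, the transformed scores have mean 50 and standard deviation 10 regardless of the original scale's range, which makes the mental and physical health subscales directly comparable.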
Employment grade was based on the participants' self-reported civil service grade title [16,20], which was then grouped into three grade categories using the civil service employment grade classification. Employment grade has been shown to be a broad marker of socioeconomic status, as it has been associated with salary, educational level, and the level of responsibility at work [16,21]. --- Statistical analysis All analyses were conducted separately for men and women. The association of social support with mental health and physical health trajectories was examined using longitudinal multilevel regression analyses with random intercepts [22,23]. Repeated measurements were arranged into a multilevel format in which measurements were nested within participants, i.e., the same participants contributed more than one observation in the dataset. Previous studies using the Whitehall II study data have shown that both mental and physical health have a non-linear relationship with age [24,25]. Thus, a restricted cubic spline function with five knots was used to model the relationship of mental and physical health with age. To examine whether the association between social support and health strengthened or weakened with age, an interaction term between social support and the first spline age variable was introduced. Three separate analyses were used to examine the association between measures of social support with mental and physical health. First, measures of social support from phase 2 were used to predict mental and physical health in phases 3 to 8. Second, to further examine the longitudinal associations, social support as a time-dependent exposure from phases 2, 5, and 7 was used to predict mental and physical health in phases 3, 6, and 8. This enabled the examination of variation with time in the association of social support with health.
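A restricted cubic spline basis of the kind used here to model age can be constructed by hand. The sketch below follows Harrell's common parameterisation (with five knots it yields four basis columns: age itself plus three nonlinear terms that are linear beyond the boundary knots); the knot locations in the example are our own illustration, not those used in the study.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterisation, without
    the optional rescaling). With k knots this yields k-1 columns: x itself
    plus k-2 truncated-cubic terms constrained to be linear beyond the
    boundary knots."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)

    def p3(u):  # truncated cubic: max(u, 0) ** 3
        return np.clip(u, 0, None) ** 3

    cols = [x]
    denom = k[-1] - k[-2]
    for j in range(len(k) - 2):
        cols.append(p3(x - k[j])
                    - p3(x - k[-2]) * (k[-1] - k[j]) / denom
                    + p3(x - k[-1]) * (k[-2] - k[j]) / denom)
    return np.column_stack(cols)
```

The first column is the linear age term; an "interaction with the first spline age variable", as described above, then corresponds to multiplying the support measure by that column.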
Last, to examine potential bidirectional effects, mental and physical health as time-dependent exposures from phases 4 and 6 were used to predict social support at phases 5 and 7. We repeated the longitudinal regression analyses between measures of social support and mental and physical health using within-individual analysis to minimize potential confounding arising from unmeasured stable confounders. In addition to the covariates, i.e., age, ethnicity, and employment grade, all analyses were adjusted for the effect of measurement period, i.e., study phase. All statistical analyses were performed using Stata 13.1. --- RESULTS Descriptive statistics of the study sample are shown in Table 1. When compared to the original sample, participants included in the study sample were more likely to be white, women, slightly younger and from a higher employment position. In addition, individuals who responded at phase 5 had a higher social network score, were more likely to be married or cohabiting, and had higher levels of emotional support and practical support than those who dropped out from the study. Except for emotional support, similar patterns were also observed between those who responded at phase 7 versus those who dropped out from the study. Associations between measures of social support at phase 2 and mental and physical health trajectories are shown in Table 2. A higher social network score, being married or cohabiting, and higher levels of emotional support were associated with better mental health in both sexes. Higher levels of negative aspects in close relationships were in turn associated with poorer mental health. In addition, higher levels of practical support were associated with better mental health in men.
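The within-individual analysis mentioned above can be sketched as a fixed-effects (demeaning) transformation: subtracting each participant's own mean from their repeated measures so that estimates rely only on changes within a person. This is our own minimal illustration of the idea, not the authors' Stata code.

```python
import numpy as np

def within_transform(values, ids):
    """Subtract each participant's own mean from their repeated
    measurements. Regressing demeaned health on demeaned support then
    removes confounding by stable, unmeasured person-level
    characteristics, since those are constant within a person."""
    values = np.asarray(values, dtype=float)
    ids = np.asarray(ids)
    out = np.empty_like(values)
    for person in np.unique(ids):
        mask = ids == person
        out[mask] = values[mask] - values[mask].mean()
    return out
```

An association that survives this transformation reflects within-person change over time; one that disappears (as several did here) mostly reflects stable differences between individuals.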
Higher levels of negative aspects in close relationships were associated with poorer physical health in both sexes, whereas higher levels of emotional support were associated with better physical health among men, and higher levels of practical support were associated with poorer physical health among women. To examine whether the association between social support and health strengthens, remains stable, or decreases through the life course, we ran interaction analyses (for results see Supplemental Tables 1 and 2). In men, an interaction between age and practical support suggested that the association between practical support and mental health weakened with age, whereas the association between practical support and physical health strengthened with age. In women, the association between emotional support and mental health was found to weaken with age. No other interaction effects between social support and mental and physical health were found. Prospective longitudinal associations between repeated measures of social support and mental and physical health are shown in Table 3. A higher social network score, higher levels of emotional and practical support, and lower levels of negative aspects in close relationships predicted better mental health in both sexes. Being married or cohabiting predicted better mental health only among men. Lower practical support and lower levels of negative aspects in close relationships were associated with better physical health in both sexes. However, only the associations of emotional support, practical support, and negative aspects in close relationships with mental health in men and the association between practical support and physical health remained statistically significant in within-individual analyses. Interactions between age and social support predicting mental and physical health are shown in Supplemental Tables 3 and 4.
In women, the significant interaction between age and marital status predicting mental health indicated that the association between marital status and mental health weakened over the adult life course. There was also some indication that mental health in married women began to decline after the age of 60 years, while no such effect was found for single women. Interactions were also observed in both sexes in the association between negative aspects of close relationships and physical health; these associations strengthened over the adult life course. Interactions are illustrated in Figure 1. Interaction analyses between structural and functional support in predicting mental and physical health are shown in Supplemental Tables 5-8. Out of the 24 interaction analyses, there was some evidence that a large social network buffered the effects of emotional support and negative aspects of close relationships on mental health for men. No other consistent interaction effects were found. Results of the possible bidirectional effects, i.e., associations of mental and physical health with future structural and functional aspects of social support, are shown in Table 4. Better mental health predicted a higher social network score, higher levels of emotional and practical support, and lower levels of negative aspects in close relationships in both sexes. Better physical health, in turn, predicted lower practical support and lower levels of negative aspects in social relationships in both sexes. In addition, better physical health predicted higher emotional support in men. However, only the associations between marital status and physical health remained significant in the within-individual analysis. --- DISCUSSION The current study results demonstrate that there is a bidirectional association between social support and health, and that the strength of this association can vary over the adult life course.
These findings, which are based on a British occupational cohort, highlight the importance of examining the role of social support over the adult life course. The current results are in line with previous studies showing that social support is important for mental and physical health. The role of social support in mental health has been highlighted [26], and whereas only functional support was associated with physical health, both aspects of social support clearly contributed to mental health over the adult life course. Whereas positive effects of marriage have been demonstrated in numerous studies [27,28], here, being married or cohabiting was associated only with mental, but not with physical, health. Although being married or cohabiting was a somewhat stronger predictor in men than in women, these results were partly explained by the finding that the effect of marriage or cohabiting varied across the life course in women, but not in men. In addition, we found evidence that for men some effects of functional support were buffered by structural support, indicating that structural support could shape the effect of functional support on mental health. Our finding that higher levels of practical support were associated with poorer physical health is likely explained by the fact that people need more practical support when they become sick. Practical support can also have a preventive effect on physical disease, as help with everyday functions may promote healthier lifestyles protective of physical disease. Negative aspects of social relationships were more strongly associated with both physical and mental health than positive social support. These results are in accordance with some previous studies showing that negative, but not positive, aspects of social support are associated with health [29][30][31]. Negative aspects of social relationships could be especially harmful, as they can be a source of stress and thus lead to chronic strain [32].
A novel finding of the current study is that the strength of the association between social support and health can differ across the adult life course. Whereas the association between marital status and mental health in women was found to weaken with age, the role of negative aspects of close relationships in physical health strengthened gradually over the adult life course. The role of social support over the whole life course has also been emphasized previously [13], although most of these studies have focused on specific stages of the life course. For example, higher social support has been shown to predict better mental health in adolescence [33], in middle age [5], and in older adulthood [34]. However, social support might be of more importance among those who are more vulnerable to loss of health, such as older adults. There is also some evidence that low social support predicts faster cognitive decline [35], and as cognitive functioning is strongly linked to the ability to maintain independence, this might explain the amplifying effect of functional support at older ages. We obtained evidence of a bidirectional association between social support and mental health; both functional and structural aspects of social support were associated with future mental health and vice versa. A similar bidirectional association was also observed between measures of social support and physical health, although the observed effect sizes were considerably smaller. It has been previously noted that there is a lack of studies that have examined potential bidirectional effects between social support and health [36]. However, it is likely that the association from health to social support is considerably weaker than the association from social support to health. This is also supported by numerous studies showing the predictive strength of social support on health [4,26].
However, our findings are of importance as they demonstrate that reverse causality, for example, that health may have an impact on the availability of social support, should be taken into account in studies of social support and health. Only the associations between functional support and mental health, practical support and health, and marital status and mental health were not confounded by unmeasured variables. This indicates that the discovered associations mostly reflect differences between individuals. It is possible, for example, that individuals with high social support may have better mental health across measurement times, but the mental health of each person might not change as the person's social support changes from one measurement time to another. A number of mechanisms, not directly measured here, are likely to explain the observed associations [36]. Psychosocial mechanisms such as social comparison are likely to explain why functional aspects of social support lead to better health [37]. Social support has also been shown to buffer against stressful life events [38], indicating that some beneficial effects of social support likely result from better coping with difficult situations. With regard to biological mechanisms, neuroendocrine changes triggered by poor social support have been demonstrated in studies of humans as well as animals [39], and social support has also been associated with changes in the immune system [40]. This study has some notable strengths. Longitudinal data from over 20 years with repeated measures made it possible to examine mental and physical health trajectories. We were able to examine both structural and functional aspects of social support as well as potential bidirectional effects. When interpreting the current findings, some limitations need to be taken into account. All measurements were based on self-reported data, which can create bias due to common method variance [41].
For example, it is possible that individuals with poor mental health assess their level of social support differently from individuals with better mental health. Another potential limitation is that the measures of social support were only available from three phases, which could increase statistical error. As the study participants are mainly London-based white-collar civil servants, the results may not be representative of the general British population. In addition, most women in the Whitehall II Study were from the lower occupational grades, so the results for women might not be generalizable to the general population of working women. Also, in a previous study using Whitehall II data, a higher probability of attrition was associated with poorer mental health and poorer physical health [25]. In our analyses, individuals who continued in the study had more structural and functional support than those who dropped out. This can, if anything, either inflate or attenuate the observed associations. To conclude, this study demonstrates that there is a bidirectional association between social support and mental and physical health, and that the association between social support and health may change over the adult life course. Future studies should address the mechanisms explaining the varying association of social support with health over the life course. --- What is already known on this subject? The positive effects of social support are well known, and higher social support has been associated with better mental and physical health. However, only a few studies have examined the association of social support with health from the adult life course perspective. --- What this study adds? Current results show that there is a bidirectional association between social support and health. In addition, the association between social support and health was shown to vary over the life course.
These findings highlight the importance of social support in public health over the adult life course.
Background: Social support is associated with better health. However, only a limited number of studies have examined the association of social support with health from the adult life course perspective and whether this association is bidirectional. Methods: Participants (n=6797; 30% women; age range from 40 to 77 years) who were followed from 1989 (phase 2) to 2006 (phase 8) were selected from the ongoing Whitehall II Study. Structural and functional social support was measured at follow-up phases 2, 5, and 7. Mental and physical health was measured at five consecutive follow-up phases (3 to 8). Results: Social support predicted better mental health, and certain functional aspects of social support, such as higher practical support and higher levels of negative aspects in social relationships, predicted poorer physical health. The association between negative aspects of close relationships and physical health was found to strengthen over the adult life course. In women, the association between marital status and mental health weakened until the age of approximately 60 years. Better mental and physical health was associated with higher future social support. Conclusions: The strength of the association between social support and health may vary over the adult life course. The association with health seems to be bidirectional.
Background India's maternal mortality ratio (MMR) declined by approximately 59% between 1990 and 2012, from an estimated 437 per 100,000 live births to 178 per 100,000 live births. Despite the sharp pace of decline in the period 2006-2012, the country will fall short of meeting Millennium Development Goal (MDG) 5 on improving maternal health by 2015 [1,2]. While the trend in the under-five mortality rate appears consistent with the MDG 4 target on reducing child mortality, the increase in the share of neonatal deaths among under-five deaths from 41% to 55% between 1990 and 2012 remains a cause of concern [2,3]. Equally important are the wide geographical disparities that persist in India's maternal, newborn and child health (MNCH) indicators. The most recent estimates suggest that the MMR varied from as low as 66 per 100,000 live births in the State of Kerala to 235 in Odisha and 328 in Assam. Further, under-five mortality remains more than 80% higher in rural India than in urban areas [1,4,5]. In response to persistently poor levels of maternal and child health in rural India, the National Rural Health Mission (NRHM) was launched in 2005 as a framework for the provision of accessible, affordable and quality health care in deprived and underserved communities in rural areas [6][7][8]. At the center of the Reproductive and Child Health (RCH) Program, which is run under the umbrella of the NRHM, are the Accredited Social Health Activists (ASHAs), local women trained as health educators and promoters to generate demand for, and facilitate access to, MNCH care in their communities. Other related initiatives promoted under the NRHM include the management of malnutrition through Village Health and Nutrition Days (VHNDs), and the system of free transport of pregnant women to health facilities through the Janani Express Yojana program [9,10]. The recent increase in institutional delivery in India has been largely attributed to the introduction of ASHAs [11,12].
Acknowledging the central role of men in women's reproductive health, the RCH program includes the training of health workers to provide husbands of expectant women with information on MNCH care and family planning [4,6]. --- Male engagement in MNCH: rationale and implications In India and other parts of the developing world, gender-based power inequalities in reproductive health decision-making have been acknowledged as a fundamental constraint on women's access to reproductive health services and, ultimately, a barrier to improved health outcomes [13][14][15]. In these settings, women are largely dependent on their husbands for health-related decisions, making the behavior, knowledge and attitudes of men an integral element of the reproductive health status of the family [15,16]. Men also resort to making decisions about their wives' health care as a consequence of women's structural and cultural dependence on men, rooted in women's limited mobility and limited educational and economic opportunities [17]. According to India's 2005/06 National Family Health Survey, the main reason pregnant women did not make antenatal care (ANC) visits or did not deliver in a health facility was that their husbands did not think it was necessary or did not allow them to do so. Nationally, only 40% of pregnant women attend ANC visits and only 45% deliver under the supervision of skilled health personnel. The report concluded that men's participation in maternal health care should be strengthened, and the information provided to men made more comprehensive [18]. Another study on the same topic argues that the formulation of programmatic and policy interventions related to increased male involvement in women's health is still in its infancy, partly due to mixed findings from existing research [19].
The program of action developed at the 1994 International Conference on Population and Development (ICPD) emphasized the need for equity in gender relations, especially men's shared responsibility and active involvement in promoting reproductive and sexual health [16,20]. Even prior to this development, there was a recognition that men constituted an important, yet untapped, resource in efforts to improve the health of women and children [19], and that, through abuse or neglect, their actions had direct consequences for the health of their wives and children [21,22]. Overall, husbands' social support and perceived social norms were identified as underlying factors associated with delivery care utilization [15,23]. As a result, there was a paradigm shift after the ICPD meeting from "men as clients" to "men as partners." The former concept entails addressing men's reproductive health needs, while the latter emphasizes the central role men play in supporting women's health, and implies recruiting men and raising their awareness about danger signs in labor, transportation plans, and the benefits of family planning for women's health, among other topics [20]. Many studies have shown that well-designed male involvement programs have the potential to generate changes in men's attitudes and behaviors [17,21,22]. However, a number of obstacles to male involvement in maternal and child health have been identified, the most critical of which is the poor knowledge of husbands and other members of the family about the dos and don'ts during pregnancy, childbirth and the postpartum period. Other frequently cited barriers include social stigma, shyness and embarrassment, work obligations, and poor communication between husbands and wives [24,25].
Even when Indian men have positive attitudes towards MNCH care, a family environment characterized by the interference of mothers-in-law [14,15,19] may not favor the provision of care to pregnant women, compromising access to good quality home-based or facility-based care [14,17,19]. Recommended strategies to address these barriers include engaging in communication on appropriate care during pregnancy, childbirth and the postpartum period, as well as targeting and equipping men on appropriate home-based care, preventive care and danger signs through effective counseling [21,22]. The role of community health workers (CHWs) in counseling and providing information on reproductive and child health is well recognized. In the context of rural India, the ASHAs, Anganwadi Workers (AWWs), and Auxiliary Nurse Midwives (ANMs) are women who initiate and maintain a dialogue with mothers and other women in the community, provide health information and facilitate referrals to health facilities [26,27]. Figure 1 presents a brief description of AWWs and ANMs. Studies conducted in Odisha State have revealed that ASHAs and AWWs are the primary providers of health and nutrition information in the community; they have strong credibility with community members, but are poor at interpersonal communication and counseling [28,29]. Being female, their reach to the male members of the community is limited, as evidenced by the evaluation of the ASHA program conducted in eight states of India [30]. The aim of this study is to describe the influence of a male engagement project on the utilization and community-based delivery of MNCH care in a rural district of India. Specifically, the research questions guiding the analysis are: To what extent did male CHWs complement the work of their female counterparts and fill important gaps in community MNCH service delivery? What is the perceived influence in the community of male CHWs' engagement with men on the utilization of MNCH services?
--- Methods The Male Health Activists project: overview The Male Health Activists project, implemented in the district of Keonjhar in the State of Odisha, was designed to address some of the challenges ASHAs face in delivering their services, in particular encouraging men to take a more active role in the health of mothers and children. The project recruited and trained male community health workers, known as Male Health Activists (MHAs), to complement the work of ASHAs and target outreach to men as a way to extend community-based delivery of health services for women, neonates and children. The aim of the project was to improve the coverage of MNCH services delivered by the formal health care system, and to improve home-based management of MNCH and care-seeking for prevention and treatment services. The project's theory of change is depicted in Figure 2. The State of Odisha is among the six states with the highest rates of maternal and child deaths. Approximately 37% of Odisha's population live below the poverty line. Keonjhar district, the location of the intervention, has some of the worst MNCH indicators and ranks 24th out of the 30 districts in the State on the Human Development Index [31]. The district is one of the 250 districts in the country receiving special funds for its underdevelopment status. In terms of health outcomes, 12 out of the 13 sub-district administrative areas (blocks) of Keonjhar are classified by the NRHM as 'high focus' due to poor MNCH performance, with two of these considered by the government the most difficult in terms of accessibility and low service utilization. The MHA project was implemented in a total of 205 villages in six of the district's 13 blocks, representing a total population of about 600,000 [32]. The pilot was launched in February 2011 for a period of approximately two years. A total of 205 MHAs were recruited through a selection process adapted from the NRHM's guidelines for selecting ASHAs.
Selection criteria emphasized characteristics and skills associated with responsibilities related to health promotion and linkages to facility-based care. In consultation with health authorities and community leaders, the project team sought men who were middle-aged, married, had a minimum level of education, and were well-perceived in the community. In this paper, we use the terms MHA and male CHW interchangeably, as we do for ASHA and female CHW. Before beginning work in communities, MHAs were trained using the NRHM's ASHA training modules. They were then paired up with ASHAs to conduct the following activities: • Work with, and support, ASHAs at the village level in the referral of women and children to facility-based care; --- Data source This study used data from the evaluation of the MHA intervention, which relied primarily on endline qualitative investigations. Data collection took place in November and December 2012. Specifically, we used data from in-depth interviews with ASHAs, AWWs and ANMs; with women who had delivered at home, at a community health center or at the district hospital in the few months preceding the date of the interview; and with husbands of such women. A purposeful selection of ASHAs, AWWs and ANMs was undertaken, ensuring inclusion of health workers from all blocks and from a range of villages based on distance to the block headquarters. Women were selected from the delivery records of three community health centers and the District Hospital. Names of respondents were drawn randomly from the Labor and Delivery Register for the period of September to November 2012, after sorting the records by project or non-project villages. Additionally, three women who had recently delivered at home were selected in the catchment areas of these four facilities. The same procedure was used to select the women whose husbands were to be interviewed.
MHAs were also interviewed, but their perspectives were deemed less critical to the achievement of the study's objectives, as they were the target of the intervention and had been recruited, trained and utilized by the project. The interviews with women and men were designed to explore knowledge, attitudes and behaviors related to RMNCH and to understand MHAs' involvement and the support they provided directly to community members considered the targets of these types of interactions. The interviews with ASHAs, AWWs and ANMs sought to understand the role and relationship of MHAs vis-à-vis other health workers and to explore the type and extent of support provided by MHAs. Informed consent was obtained from each respondent after describing the study objectives. Ethical approval was obtained from the ethics committee of the Department of Health and Family Welfare of the Government of Odisha. --- Data collection and analysis All ASHAs, AWWs, ANMs, women and men were interviewed in their respective homes by locally based researchers who used pre-tested semi-structured guides in the local language. Each interview involved two researchers, one conducting the interview and the other in charge of note-taking and audio-recording. Responses were audio-recorded, transcribed verbatim and translated into English. Data coding and analysis were conducted manually. Themes related to the study's objectives were identified from the interview guides. The transcripts were then read several times and tabularized along these themes, and new themes emerging during the analysis were incorporated. The analysts paid attention to variations in reports within and across types of respondents. The data for this paper are based on the following themes: challenges and difficulties faced with access to and provision of MNCH care; opportunities for increased access to, and provision of, MNCH care; perceived roles of MHAs; and positive and negative aspects of MHAs' work.
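The tabularizing step described above, coding excerpts by theme and respondent type so that variation within and across groups is visible, can be sketched in a few lines. This is an illustrative toy only; the excerpts, theme labels and respondent categories below are drawn loosely from this paper's material but are not the study's actual coding frame or data.

```python
from collections import defaultdict

# Each coded excerpt is (respondent type, theme, excerpt text); the
# entries below are illustrative placeholders, not the study's data.
coded_excerpts = [
    ("ASHA", "Perceived roles of MHAs", "He can arrange a vehicle at night."),
    ("Woman", "Perceived roles of MHAs", "ASHA Bapa went running for the soap."),
    ("Man", "Positive aspects of MHAs' work", "He told us to keep money in hand."),
    ("ASHA", "Positive aspects of MHAs' work", "If he is there, I do not feel scared."),
]

# Tabulate theme -> respondent type -> excerpts, mirroring the manual
# tabularization of transcripts along the identified themes.
table = defaultdict(lambda: defaultdict(list))
for respondent, theme, excerpt in coded_excerpts:
    table[theme][respondent].append(excerpt)

for theme, by_respondent in table.items():
    print(theme)
    for respondent, excerpts in by_respondent.items():
        print(f"  {respondent}: {len(excerpts)} excerpt(s)")
```

Laying the coded material out this way makes it straightforward to compare, for a given theme, what ASHAs, women and men said, which is the within- and across-respondent comparison the analysts describe.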
--- Results --- Study participants Table 1 shows the distribution of the ASHAs, AWWs, ANMs, women and men interviewed, by block. All ASHAs interviewed were married, had the required educational qualification of Class 8 or above as stipulated by the NRHM guidelines, were mostly in the 30-45 year age range, and had started working in their current jobs between 2006 and 2008. All of the AWWs and ANMs interviewed were married and had been in their positions for at least a decade. Of the 11 women respondents, five were aged between 20 and 24 years and three were under the age of 20. For nearly half the women, this was their first child. The majority of women had no formal education, as the intervention villages included some of the least developed areas of the district. Almost all women interviewed reported agricultural work or wage labor as their main occupation. On average, the men selected for interviews were older and more educated than the women. They were mostly drivers, agricultural workers, or self-employed, or were employed in factories or mining, a profession which entails living away from home for part of the week. From the analysis, participants' responses were organized broadly around the facilitation by MHAs of ASHAs' work, and the male engagement activities undertaken by MHAs. More specifically, the narratives elicited from the respondents reflected gender-based divisions of work and space in three core areas of delivery and use of MNCH services: escorting women to health centers for facility-based deliveries; mobilizing women and children to attend Village Health and Nutrition Days and Immunization Days; and raising awareness among men on MNCH and family planning. Respondents' views tended to assign certain activities and practices mainly to one gender or the other.
--- Escorting women to health centers for facility-based deliveries Almost all respondents pointed to MHAs' facilitation of women's access to health centers, especially at night, and to support provided while at the facility. Most women interviewed spoke about the risks associated with night deliveries in a context of long distances to facilities and a lack of readily available transportation. They acknowledged the limitations of female CHWs in this regard and welcomed the introduction of male CHWs. "In the night ASHA Maa [ASHA] can't go anywhere, ASHA Bapa [MHA] is Purusha [meaning male] so he can go, he can help everything," relayed a woman in Telkoi who recently had a facility delivery. Another woman from Banspal, in response to a question on the relevance and usefulness of MHAs and whether they should be maintained beyond the life of the project, added: "Because in the middle of the night they will go to the hospital no matter how far it is, or will make phone calls. In the night he can go running for the patient." The narratives from virtually all ASHAs also pointed to the crucial roles played by MHAs around and during delivery, especially at night, when they facilitate transport and provide security, a role that female CHWs were not able to play due to security concerns. An ASHA from the Banspal block, regretting the lack of support from husbands, noted: "If a delivery goes in the night then, previously I was going alone; the road is full of jungle, can we take our husbands all the time? If he [MHA] is there, I do not feel scared." While accompanying women and their children at night to the facilities was seen as a critical function, women and ASHA respondents also valued MHAs' support with regard to transport.
Most of the intervention villages are located in forest and hilly areas and are not connected by motorable roads, making it impossible for the Janani Express Yojana or private vehicles to reach pregnant women and transport them to facilities for delivery. The interview results suggested that the MHAs support ASHAs by making local transportation arrangements to carry pregnant women from their homes to a place where vehicles can reach them. Commenting on the benefits of a male CHW, a woman in the block of Ghatagaon stated: "with ASHA Bapa [MHA], vehicles and other things will be done." Her peer from Harichandanpur was more assertive, illuminating the gender norms about community service delivery and pointing to possible constraints faced by female CHWs in relation to their own childbearing needs: "He [MHA] is coming in the middle of the night and then wherever there is a problem, if for the vehicle he phones and didn't get, then he goes by walk, then taking the patient and reaching there. And the female ASHA is having two small children, and being a female she is unable to go in the night." An ASHA interviewed in Banspal shared these perspectives and added: "if a pregnant woman starts getting pain he can call me and call the Janani Express Yojana vehicle and take her." Men respondents were keen to admit that MHAs were indeed filling important gaps related to access to facility-based care for women and children. Both the men and women in the villages recognized the importance of male CHWs, most of their narratives pointing to the prompt and untiring efforts by MHAs to solve problems, including cycling to various places in search of solutions: "ASHA Maa cannot do the work like ASHA Bapa [MHA]. Suddenly there is some work, by chance if they can't get the Janani Express Yojana vehicle, if ASHA Bapa is there then he can cycle down to Ghatagaon or can arrange a vehicle from another village. ASHA Maa, what can she do?"
Once at the facility, female and male CHWs' roles were seen by men as complementary. A man in Telkoi explained: "Suppose there is any problem or danger, he will catch the ASHA and escort to the medical. If you go to medical taking both of them then it becomes very easy." Division of labor along gender lines emerged even more strongly from the narratives with regard to actual service delivery at the facility. The data indicate that once at the facility, MHAs handled tasks outside of the delivery room as needed, which ranged from keeping track of the family's personal items to obtaining medicines and, in cases where a blood transfusion was necessary, acting as an advocate to obtain donated blood. "Male ASHA [MHA], he is a male; how can he touch the female? He does all other work like getting the medicine and other things which the Doctor writes, he gets those things, and shops are bit far." Besides the logistics, engaging with husbands around delivery was seen as a major contribution of male CHWs: "He [MHA] cannot enter in to the delivery room. He brings the medicine which is required and all things he [the health professional] tells; he [MHA] tells the husbands. I can convince the mothers but not the husbands." Women's narratives clearly emphasized the roles of MHAs during delivery, including filling important gaps due to husbands' and family members' poor support and birth preparedness. A mother in Harichandanpur who had recently delivered at a facility noted: "Jhia ASHA stayed near me, and Purusha ASHA went out and then oil, soap, the things which is given, went running to get those." The presence of a male CHW is critically needed in this instance, especially since, as stressed by another respondent, shops and pharmacies may be quite far from the facility.
--- Mobilizing women and children to attend Village Health and Nutrition Days and Immunization Days Respondents' narratives also acknowledged the facilitative roles of MHAs in the planning and implementation of Village Health and Nutrition Days and Immunization Days. In settings like Keonjhar District, these monthly health events, run by the ASHA, AWW and ANM, are the centerpiece of community-based MNCH service delivery. Services provided include registration of pregnant women, antenatal care, immunization, growth monitoring of children under the age of five years, distribution of contraceptives, provision of drugs to patients as required, and supplementary nutrition for pregnant women [30]. Awareness-raising for high turnout is typically conducted by CHWs prior to, and on the morning of, the outreach activities. In their narratives on the roles of the newly introduced male CHWs, ASHAs identified tangible constraints that could only be addressed by MHAs. One of the ASHAs we interviewed in Banspal told us: "Now the far away sahi is covered. There is a jungle in the middle way; the far away village is atop the hill and a single woman alone can't go. If any child is left out for the polio immunization then we four, two Anganwadi Didi and two ASHA Didi go together because there is a jungle. But he [MHA] goes alone by cycling and keeps the cycle in the mid-way and climbs the hill." Even in easier-to-reach places, the role of MHAs in community mobilization was commendable, judging by the answers we received. In Champua, for example, an ASHA respondent admitted: "I go calling in the morning of the VHND and Immunization Day; the children have not come or making late or may be due to some reason not coming then, he [MHA] goes 2 to 3 times by cycle to call them. Suppose I see someone has not brought her child then he goes again and brings the child."
An AWW in Ghatagaon was more emphatic: "We have many households here and there; he [MHA] being a male goes by the cycle and calls them. We girls cannot go to all the places, so he can cycle all the houses and get the reports." While female and male respondents did not acknowledge the relevance of MHAs in VHNDs and Immunization Days as strongly as the female CHWs did, they noted, as a woman respondent in Joda put it, that "They [MHAs] also come to home for calling for meetings. Meetings used to be done there, so he comes for calling." --- Raising awareness among men on MNCH and family planning Our respondents, particularly women and ASHAs, largely viewed the advent of MHAs as an opportunity to engage with men on family planning and MNCH issues. Most female CHWs interviewed pointed to increased engagement of MHAs with men, which to some degree resulted in positive behavior change. An ASHA in Banspal described: "After his appointment, the husbands who do not understand, whatever we say they avoid and shout at us. He [MHA] convinces the males more. Men used to say that ASHA is coming and misguiding our wives. But the MHA makes them sit and he tells them that it is for your good only. Whenever he does they understand." To stress the gender dynamics in that society, she added: "When I was going to that sahi alone, they used to shout at me. If the Purusha ASHA [MHA] is there and I reach there, no one shouts or anything, rather they will talk and listen nicely." The interviews with men did not seem to be as unequivocal on MHAs' contributions. In general, most women reported not knowing whether MHAs talked to their husbands; the ones who reported being aware of the engagement said they did not know the subject discussed between the MHAs and their husbands.
Men, on the other hand, seemed too busy with work and other activities and, as a result, did not pay close attention to the invitations made by MHAs to discuss MNCH issues in group or individual meetings. The only male respondent to acknowledge the engagement by an MHA recounted the advice he received on birth preparedness: "He told us to keep money in hand. By chance if there is any problem, the vehicle and all, so we should keep the money properly." The specific work of MHAs on contraceptive use among men also emerged from female CHWs' narratives, and less so from women's and men's. Cognizant of the burden and consequences of high fertility in these poor communities, many ASHAs expressed the wish that men in their catchment areas be more open to, and accepting of, contraceptive use. In Champua, one of the ASHAs we interviewed said: "If he [MHA] can convince the males. The main thing is if Purusha [husband] understands then they can convince their wives about the so many children." Some of the ASHA respondents expressed a somewhat optimistic view regarding uptake of contraception among men. Some of the ASHAs, AWWs and ANMs interviewed spoke of door-to-door meetings conducted by MHAs, which they said are now convincing the men on contraceptive use. One respondent noted: "For the Purusha swasthya Bahini meeting, mainly the condoms were being used; he [MHA] is making it easy for the condom; he conducts the meetings and convinces the men. I could not do with condom; I give them to him [MHA] and he does what is needed." Whether or not the project resulted in contraceptive uptake among men, almost all respondents hinted that strategies to increase contraceptive use will need to incorporate male CHWs' engagement with men: "Look, a thing like an operation, husbands used to shout on us that how can these girls work with us. ASHA Bapa [MHA] can tell to the husbands to allow their wives to undergo the operation [sterilization].
We cannot tell the males, only ASHA Bapa can and men can understand properly. What I was not able to say, that can be said by him." Besides the gendered division of labor between male and female CHWs, which emerged as a constant theme from the narratives, the idea that the newly recruited MHAs provided another set of hands regardless of sex was also apparent from the interviews with female CHWs. In these instances, MHAs were reportedly working as directed by the ASHAs, the health workers with longer experience working in the communities. One ASHA in Harichandanpur reported: "We divide the areas between us to bring the children during immunization." In other instances, one CHW will take responsibility when the other is otherwise occupied, as illustrated by this report from one ASHA in Telkoi: "He can work when I am absent. Recently a delivery took place; MHA took the whole responsibility because at that time I was on training. About 2-3 cases he has handled alone." --- Discussion Efforts to provide accessible, affordable and quality health care in India's deprived and underserved communities through the National Rural Health Mission notwithstanding [8,11], most parts of rural India remain characterized by worryingly poor maternal and child health indicators. While the introduction of ASHAs and other interventions designed under the NRHM have undoubtedly contributed to the recent improvements [12,26,29], progress remains hampered in rural and remote settings by men's lack of understanding of, and interest in, the health of mothers and children [6,19]. The findings of this study shed light on two major, gender-related constraints ASHAs face in their duties. First, MNCH care-seeking behaviors in most rural societies are often guided by deep-rooted socio-cultural and traditional beliefs, superstitions, myths and misconceptions that can be hazardous to health [6,12,24], and are characterized by imbalanced and unequal decision-making favoring men [13,15].
Indeed, rural women not only face barriers in accessing health services linked to poverty and illiteracy, they also face challenges due to their lack of control over their reproductive health and other health matters [5,24,25]. Second, our intervention area is characterized by challenging terrain of hills and rivers that hinders the development of road networks, which further limits the development of an emergency obstetrics transportation system [10,17]. This type of environment makes the work of female CHWs more difficult. Surprisingly, no study to our knowledge has investigated the implications of these constraints for female CHWs' duties and responsibilities in the context of India's NRHM. Evidence from Rwanda, one of the few countries with male-female pairs of CHWs, indicates that these pairs may be helpful in settings where it may not be safe or socially acceptable for women to travel alone [33]. Escorting pregnant women and women in labor to health facilities is a critical component of efforts to improve the uptake of facility-based MNCH care in contexts like rural India. Our results indicate a division of labor for CHWs along gender lines. As women, ASHAs' work can be limited by their mobility, especially at night or through the hilly and forest areas. The predominantly favorable views from ASHAs on the contribution of male CHWs, all pointing to the complementarity of both genders, are noteworthy, especially with regard to accompanying patients in transit. From ASHAs' and women's narratives, this division of labor continued to hold at the health facility during and around the time of delivery. The presence of a male CHW helped the ASHA remain focused on the delivery, while the MHA promptly dealt with logistics like obtaining an admission ticket, buying medicines, and arranging for blood transfusions when needed. These joint efforts ultimately result in improved access to services, and possibly better quality of care in some areas, while at a health facility.
According to the men and ASHAs interviewed, the coverage of MNCH services improved as the male CHWs extended outreach in a difficult geographical setting, sharing the work with their female counterparts. Apart from arranging transport and accompanying pregnant women in emergency cases, ASHAs appreciated the support provided by the male CHWs, who spared no energy or time, walking or cycling, and sometimes climbing hills, to reach households in different settlements. Enlisting the support of men is a necessary step, and perhaps a precondition, for improved health for women, newborns and children. As widely acknowledged, despite India's significant advances towards gender equality among some groups, the majority of women still suffer from the effects of extreme inequality and a weak social position, as evidenced by the country's ranking of 112 out of 134 countries on the global gender gap index [5]. The global gender gap index measures the relative gaps between women and men across four key areas: health, education, economy, and politics. As a consequence, non-use of MNCH services is partly explained by the opposition of husbands, due to limited knowledge about healthy behaviors and actions to take during pregnancy, delivery and the postpartum period [14,15,19,24]. The design of the intervention covered in this paper rests on the premise, reported in many studies, that men should also be target audiences for women's reproductive health interventions through specially designed channels and messages, given their low knowledge levels and the imbalance in decision-making between men and women in many societies [20,24,25,34]. Our findings tend to suggest that while engagement with men was regarded as a key component of ASHAs' work [26,27], the challenges associated with gender dynamics in Indian society were largely neglected.
The ASHAs we interviewed welcomed the addition of male CHWs as partners in creating awareness among and motivating men to embrace positive behaviors and supporting roles regarding the health of their wives and children, including on sensitive issues such as use of family planning services. Most women and female CHWs interviewed expressed confidence in male CHWs' engagement to reduce the uninformed or resistant behavior of men towards use of family planning and MNCH services. These findings, and particularly female CHWs' and women's invocation of the benefits of MHAs' engagement with husbands, challenge the persistent rhetoric on Indian men's involvement in MNCH and reproductive health. In their study in Rajasthan, India, Karol et al. found ASHAs had rather weak knowledge with regard to motivating men and women in the community to adopt healthier reproductive behaviors including use of family planning [27]. Our interviews with community men did not indicate that MHAs' engagement with them was effective at improving men's knowledge on MNCH and family planning. One possible explanation for this finding may be that men perceive other priorities such as remunerated activities as more important in their daily lives than engaging with a male CHW on MNCH and family planning, especially if the issues are not emergency-related matters for the family. Additionally, as male CHWs had newly assumed their role and a refresher training occurred toward the end of the project, they may not have refined techniques to engage men in conversations related to preventive and curative care for their wives and children. Additionally, evidence is emerging on the usefulness of peer support networks among CHWs, which is evidenced by the ways in which the ASHAs and MHAs collaborated. A recent study indicates that social support mechanisms for CHWs are important for improving care and information that CHWs provide [35]. 
At the core of the NRHM is the balance between facility-based and community-based service delivery, the latter being linked to VHND and Immunization Days, both events requiring strong engagement of ASHAs [11,27,29]. This study suggests that, as with the escorting of women to health facilities, the gender and peer support dimensions associated with the effective delivery of these community-based interventions seem to have been underestimated in the design of the ASHA program. An overwhelming majority of narratives from female CHWs, women and men pointed to improvements brought about by the recent introduction of male CHWs. Most respondents noted that several logistical problems associated with the planning and implementation of village outreach activities can only be adequately handled by male CHWs working alongside the ongoing work of female CHWs. --- Limitations While the study focuses on the perspectives of health workers who experienced the MHA intervention and community members with relevant profiles to be targeted for support by CHWs, the lack of baseline data and a possible bias in recalling the situation that prevailed before the project's introduction of male CHWs are important limitations. Despite these limitations, the study has shed light on challenges overlooked by the NRHM framework. --- Conclusion Building on the success of the ASHA program in delivering community-based information and care to families in rural communities in India, this study sheds light on male engagement as a strategy to improve the delivery, access and uptake of maternal, newborn and child health care in the context of prevailing gender norms and gendered roles in the health care sector [16,20,36]. Our findings provide important new insights regarding barriers that continue to limit the scope and reach of the ASHA program, and unveil the complementarity of male and female CHWs in the delivery of, and increased demand for, MNCH services.
The findings on the facilitative roles of MHAs suggest including male CHWs as part of the NRHM's model to increase the delivery of, and demand for, MNCH services. It is important, however, to acknowledge that female CHWs are performing important roles in challenging locations to improve MNCH outcomes [37]. The introduction of male CHWs to reinforce these successes, as shown by this study, should operate in ways that do not widen the gender inequalities favoring men that remain rampant in many rural communities in the developing world. --- Competing interests The authors declare no competing interests. --- Authors' contributions JCF conceptualized the study; JCF and AHS conducted the literature review; SM coded the data; JCF led data analysis; JCF, AHS and SM all contributed to the writing of the paper, and read and approved the final manuscript. --- Authors' information
Background: In response to persistently poor levels of maternal, newborn and child health (MNCH) in rural India, the National Rural Health Mission (NRHM) was launched to support the provision of accessible, affordable and quality health care in deprived and underserved communities. The Accredited Social Health Activists (ASHAs), local women, are trained as health promoters to generate demand for, and facilitate access to MNCH care in their communities. While they are also expected to provide husbands of expectant women with information on MNCH care and family planning, their reach to the husbands is limited. The aim of this study is to describe the influence of a male engagement project on the utilization and community-based delivery of MNCH care in a rural district of the country. Methods: We used qualitative data from the evaluation of a project which recruited and trained male Community Health Workers (CHWs) known as Male Health Activists (MHAs) to complement the work of ASHAs and target outreach to men. This paper uses data from in-depth interviews (IDIs) with ASHAs (n=11), Anganwadi Workers (AWWs) (n=4) and Auxiliary Nurse Midwives (ANMs) (n=2); with women who had delivered at home, community health center or district hospital in the few months preceding the date of the interview (n=11); and with husbands of these women (n=7). Results: Participants' responses are broadly organized around the facilitation of ASHAs' work by MHAs, and male engagement activities undertaken by MHAs. 
More specifically, the narratives reflected gender-based divisions of work and space in three core areas of delivery and use of MNCH services: escorting women to health centers for facility-based deliveries; mobilizing women and children to attend Village Health and Nutrition Days and Immunization Days; and raising awareness among men on MNCH and family planning. This study sheds light on male engagement as a strategy to improve the delivery, access and uptake of maternal, newborn and child health in the context of prevailing gender norms and gendered roles in rural India. Ultimately, it unveils the complementarity of male and female CHWs in the community-based delivery of, and increased demand for, MNCH services.
Sten Langmann et al. --- Introduction Local non-governmental organizations play a unique and important role in community development and poverty reduction because of their embeddedness within local communities. Their locations 'close to the problem or issue' at hand make them sensitive to the needs, resources, and challenges of disadvantaged communities. Local NGOs, therefore, are uniquely placed for community capacity building, which involves 'working with local deprived communities to promote fuller engagement with social, economic, and political life'. Although the importance of local NGO involvement in CCB is well established, little is known about the CCB methods used by these local NGOs. Understanding local NGO CCB approaches is important for two reasons. First, the CCB programmes of larger national and international NGOs have often been criticized by scholars and practitioners as being detached from local conditions, realities, and the real needs of the communities, and, therefore, are seen to have limited abilities to foster community input and participation. Continued emphasis on these CCB approaches risks making 'local and regional actors of secondary importance' and might push people into further hardship. Therefore, a local understanding of CCB could reveal how the different pathways in and out of poverty are construed. Second, local NGOs are often small, informally structured, and focused on specific issues within a relatively small geographical area. Often, they are also resource-constrained, which requires them to partner with local communities and codevelop identities using local resources. A better understanding of how these organizations approach these CCB challenges could assist in making their efforts more viable and scalable. Based on these considerations, this study explored the following research questions. RQ1. How do local NGOs build capacity in local communities? RQ2. Why do local NGOs adopt certain CCB approaches?
To answer these questions, semistructured interviews were conducted with members from 18 local NGOs, three local sustainable development experts, and the Dean of a social work college in Tamil Nadu. The interview data analyses suggested that local NGOs recognized that CCB efforts were often being hindered by discouraging personal outlooks and an acceptance of existing social community structures, with these affective barriers trapping the community members in poverty. To tackle these barriers, local NGOs often adopted an emotional empowerment approach to instil a sense of care and hope and provide space for the communities to transform their voices and challenges into individual and collective action. Therefore, this study makes two main contributions. First, it adds to the CCB literature by expanding knowledge on local NGO conceptualizations of community challenges. Specifically, this study draws attention to the local affective barriers that are keeping communities trapped in poverty. These findings suggested that there is a need for future research to better understand and more fully incorporate the affective dimension of poverty, as this could elucidate a more effective pathway for future CCB efforts in these types of communities. Second, this study contributes to the wider NGO literature by providing new insights into the functioning of local NGOs and the methods they use to address these affective community barriers. The study findings indicated that it was important to provide emotional empowerment to local communities. The local NGOs viewed emotional uplift and development as a vital starting point for sustainable CCB.
Importantly, this represents a shift away from viewing communities as 'sets of problems' towards approaching them as 'collections of peoples who possess their own problem-solving capabilities,' and could offer an alternative sustainable CCB approach that more effectively addresses the needs of affected communities. To build the argument, current CCB concepts and the role of local NGOs are first examined, after which the research methodology, data collection method, and analytical approach are outlined. The subsequent section presents the empirical findings, and the final section concludes by discussing the implications of the results, acknowledging the study limitations, and providing suggestions for future local CCB research. --- Literature overview Community capacity building CCB, which sits under the community development umbrella, focuses on the development of collective community activities to solve local problems and improve local lives and well-being. Capacity building includes the development of 'capacity' as the 'power of receiving, containing, experiencing, or producing' and 'building' as 'gradually establishing or constructing by putting all the parts together'. In local community settings, capacity building is the process associated with developing the community's ability to define, assess, analyse, and act upon the concerns of its members and develop innovative and productive solutions. NGOs have different approaches to CCB efforts, ranging from institutionally driven interventions to process-focused change. The CCB top-down approach seeks to improve conditions in the affected communities by offering training and technical support, lobbying, and/or engaging in advocacy activities.
However, top-down strategies have been increasingly criticized for being detached from local reality and overlooking context-based solutions, creating a culture of dependence in local communities and stifling growth in their initiatives, innovations, and self-reliance, and contributing to a lack of community trust in the intentions of outsiders. Top-down strategies, therefore, risk making communities no more than passive development recipients. Because of this criticism, the research focus has shifted to bottom-up CCB approaches, which claim that sustainable development can only have long-lasting impacts when community aspirations, dreams, and values are embraced and marginalized communities are effectively represented and able to defend their interests. Bottom-up CCB, therefore, aims to combine the strengths, commitments, resources, and skills of a community, use these to improve the community's collective quality of life, equip the disadvantaged with the resources to overcome the obstacles that restrict them from making a living, confront and dismantle existing power structures, and challenge inherent injustice. --- Empowerment and bottom-up CCB A central construct in bottom-up CCB is empowerment, which in development terms refers to the process that enables the disadvantaged to identify and overcome their life obstacles and increase their agency. Empowerment gives communities the ability to perceive themselves as being capable of gaining control over their lives and of effecting long-lasting change. Empowerment renews a community's aims and increases community cohesion and cooperation, which motivates community residents to take ownership of their development through community inclusivity.
Local community empowerment requires psychological empowerment, which is defined as the 'psychological aspects of processes through which people gain greater control over their lives, take a proactive approach in their communities, and develop critical understandings of their sociopolitical environments'. Zimmerman's seminal work explained that this was a complex process that involved people having a belief that they could influence the desired outcomes and the ability to learn how to accomplish them, an awareness of the behavioural options and choices to achieve their set goals, and an ability to self-initiate the actions. Studies have found that emotions also play an important role in this rational process. Christens et al. defined emotional empowerment as a process that included feelings of hopefulness and resilience, and along with other scholars claimed that empowerment required critical community-level attention because of the feedback loops between the individual and community levels. --- CCB and local NGOs Local NGOs are in a unique position to empower communities using bottom-up CCB approaches. Because these local NGOs are at the coalface, can focus on poverty reduction within relatively small geographic areas, and are not hampered by formal organizational structures, they are generally more sensitive to the particular needs, resources, and challenges of disadvantaged communities and more able to respond to their changing needs and realities. However, local NGOs do not generally have extensive resources, which means they need to partner with the local communities to codevelop the new realities. Therefore, empowerment and bottom-up CCB are more natural approaches for these relatively small NGOs. Despite the unique position of local NGOs, little research to date has examined the methods they use to build community capacity.
While local NGOs are in key positions to create links between communities and external institutions, their main CCB challenges lie in translating broad capacity-building concepts into local actions to ensure that the goals they wish to achieve within specific communities can come to fruition, that they have the means to attain these goals, and that they understand the impact of their capacity building on local community dynamics. Maintaining a consistent level of community involvement and community ownership in CCB is challenging. Therefore, this study was established to better understand the CCB methods and actions taken by local NGOs to ensure local community engagement. --- Methods --- Research context and participant interviews Tamil Nadu in India was selected as this study's research site for several reasons. First, India has the highest number of NGOs per capita, at roughly one NGO for every 400 people, with many of these being local NGOs, which fitted the research focus. Second, Tamil Nadu is one of the most densely populated states in India, and poverty is a significant issue. Third, because Tamil Nadu has a higher poverty reduction rate than the Indian average and a strong focus on inclusive development, there has been significant involvement by local NGOs; therefore, understanding their approaches could enlighten and inform best practice. The study participants were recruited through a combination of personal referrals and snowball sampling. This approach enabled the research team to identify the key actors 'on the ground who are needed to fill in the gaps in our knowledge in a variety of social contexts'. It was an important research aim to ensure that the participating local NGOs had varied sustainable development foci so that data could be collected on a range of different approaches to address the local concerns.
Before the interviewee selections, the NGOs were screened through checks with third parties to avoid non-genuine NGOs ending up in the sample. A highly useful support tool in the data collection process was a register provided by a local NGO, in which the bona fide local NGOs in Tamil Nadu's capital city, Chennai, were identified; this included state-registered and unregistered NGOs, that is, organizations working on a very informal level, such as family NGOs, sole individuals, and student groups. These data collection procedures meant that meaningful referrals were given. The final participants were eighteen local NGOs, three local NGO experts, and the principal of a social work college in Chennai. The 18 local NGOs varied significantly; 12 were formally registered organizations and 6 were informal. In most cases, the directors or managers provided information, and in other cases, staff members and/or volunteers provided the information. Therefore, this research drew upon a diverse group of local NGOs, all operating within the relatively narrow geographical setting of a single city, although one with a population of over 11 million and over 496 local NGOs. --- Data collection and analysis Open-ended, face-to-face interviews were conducted with all participants. To avoid leading the participants and compromising the integrity of the data, no participants were asked directly about their CCB approaches. Rather, the questioning focused on the functions and operations of the local NGOs, which allowed each participant to give their personal views on their organization's focus, led to more accurate accounts of the local NGOs' work, and allowed for unexpected discoveries, metaphors, and analogies. The interview data analysis was conducted using the 'Gioia method': a three-stage analytical process that structures and analyses the data to derive meaning from them.
In the first stage, a combination of open coding and InVivo coding was used to identify the patterns and variances in the participant narratives and preserve the strong statements in which human experience was the key driver, which foregrounded the subjectivity. At this stage, the first-order codes directly relevant to the research questions were developed. In the second stage, the initial codes were expanded using axial and selective coding procedures to develop the second-order concepts. Many of the patterns that emerged identified the influence that discouraged personal outlooks and an acceptance of existing social community structures had on local NGO activities. Broad patterns started to emerge, which revealed the mechanisms used by the local NGOs to counter the feelings of disempowerment and emotionally empower the communities. The identified themes were repeatedly discussed and the interpretations tested, from which three aggregate themes emerged, as shown in the data structure in Figure 1. The third stage used the aggregate themes to construct a process model that showed the approaches being used by local NGOs to counter the affective community barriers. The subsequent section unpacks this process model and outlines its major components and relationships. --- Empirical results Finding 1: local NGOs conceptualize poverty in their own way A broad insight emerging from the analysis confirmed the marked differences between the ways that local NGOs and larger national and international NGOs viewed and approached CCB. As previously observed, the participants acknowledged the top-down-driven development efforts of the larger NGOs but were also critical of their effectiveness for local beneficiaries. One participant stated that 'this kind of development is there, but it does not advantage everybody. The trickle-down never happens'.
However, the local participants emphasized that the starting point for their efforts was to identify what was important to the communities, advocate for the identified community needs, and support the communities to resolve their dependencies, as exemplified by two participants who said: 'you have to help people according to their needs, not yours' and 'the protests should come from the bottom'. The participants stated that their efforts were less focused on providing material support and more focused on creating relationships with community members, listening to their voices on the ground, and assisting the communities to take ownership of their own development to initiate positive change. Building, and sometimes regaining, trust through transparency was considered critical to the success of these CCB efforts: 'Your entire relationship [with the community] is based on trust. Be very transparent with people. It's the best way to win over their confidence. That is something very important to them . . . .' An unexpected insight was that local NGOs adopted these different approaches because of their small scale and limited resources and because they had varied conceptualizations regarding poverty and its potential solutions. The narratives pointed strongly to the affective barriers that were preventing the communities from advancing out of poverty. The local NGO participants saw these affective barriers as an emotional feeling in the communities of being trapped in their circumstances, with little or no alternative to rise out of poverty: '[community members] just don't feel like they can break out of the whole thing. They assume this is the only choice they have'. This discouraged emotional state was found both at the individual level and in the broader community's social acceptance of existing social structures.
--- Discouraging personal outlooks The local NGOs recognized that individuals in the affected communities often had discouraged personal outlooks that could stifle CCB, generally stemming from low self-esteem and previous humiliations. First, these internalized feelings of low self-esteem made the affected people feel powerless and unimportant, which, in turn, made them unable to overcome their disadvantages and better their situation: 'empowerment [is] stifled by the people's low self-esteem. People think they are too small to make a difference. [ . . . ] It is much harder to convince people . . . once [they] feel they are not the social authority to pressure [changes] . . . two is that they do not want to take the trouble either'. Another participant explained that these discouraging personal outlooks and the related inactions highlighted the lack of knowledge on how to help themselves and the uncertainty they felt about the effects of their efforts: 'there is no chance that [disadvantaged people] want to help themselves, [if] a) they don't know how, and b) they are not sure if it is going to help them'. Second, the feelings of low self-esteem were often related to previous humiliations by community members, which discouraged them from accessing the services or resources needed to improve their situations. These humiliations were often related to bureaucracy: '[people] cannot get a certificate, cannot get a job, cannot get anything, cannot get the job done in a timely manner, let alone the frustration and humiliation and the money involved in it'. The same participant pointed out that humiliation was felt by people in their 'struggle to follow regulations and procedures' to obtain goods and services they may often be entitled to, in the 'invisible layers' that deny critical access, and that perpetrators are 'getting away scot-free'.
--- Acceptance of existing social structures The acceptance of existing societal structures by community members was also an important factor perpetuating community vulnerabilities. This acceptance resulted from a combination of externally reinforced social norms and family expectations. First, abusive behaviours towards low-status groups are often accepted social norms in the wider society. One participant gave an educational example of higher caste students' abuse of lower caste students: 'Since he is a boy from an underprivileged caste, they [other pupils] most likely are going to treat him really badly, because they think that's the way it should be done'. Because of the repetition of, and lack of consequences for, these types of abuse, vulnerable communities develop a mindset that these 'social norms' are 'normal': [the] 'problem again is when you are physically abused and going through something that would be very common, you sort of succumb to it'. Second, it was also highlighted that the larger social structures in which the poorer communities find themselves could contribute to the emergence of the same problems within families, especially concerning education; for example, many girls from poorer communities often leave school early: 'they all enroll female students, but by primary school completion, they will have dropped out for many different reasons, including family'. Another interviewee added that girls from poorer communities shy away from attaining education as they may think: 'my mother was not educated, my grandmother was not educated, my father, why do I need to study?' The parents also often discourage their daughters from education: 'It is ok if I educate my son. There is no need to educate my daughter. Why do I have to spend time educating her? Let her become a maid somewhere else'.
--- Finding 2: Emotional empowerment to address affective barriers The analysis indicated that local NGOs see empowerment as a critical mechanism for tackling CCB challenges: 'empowerment [ . . . ] is the ultimate', particularly as it provides a more forward-looking perspective: 'we don't want to make [the community] dependent on us'. The participant narratives revealed that these empowerment efforts happened at an emotional level, and that it was through appealing to community members at an emotional level, giving people the space to voice their feelings and challenges, and igniting feelings that collective action could make a difference that local NGOs sought to initiate community-led CCB. Figure 2 summarizes the three-step emotional empowerment process that emerged from the data and the activities that local NGOs deployed. The first important emotional empowerment step was to encourage community members to adopt hopeful and caring perspectives about their current situation, because if the community did not develop a new sense of care toward its own situation and did not reverse its affective feelings of helplessness, any local efforts would be ineffective: '[development] is all about affection. How you react to the community and how you build the affection for [development] . . . If you care enough [about your development] then you will do it yourself'. This step was also important in helping the communities understand that they were not alone in their struggles and in motivating feelings of collective resistance to change the current situation. One NGO participant said: 'we literally make [the community] feel why they are resisting. That's the kind of speeches sometimes you have to give to the community . . . we also draw parallels to other social movements . . . to educate the communities to assert their rights'.
The study participants felt that appealing to the communities to battle their acceptance of the status quo at an emotional level was more effective than traditional legal approaches because it encouraged them to reject further humiliations that could add to their low self-esteem: 'we touch on an emotional base [with the parents] . . . if you work on a legal parameter, you are putting the family into shame, and it is not going to change and sometimes it backfires'. Therefore, the local NGOs mainly focused on being visible within the local communities, building relationships with community members, and initiating conversations to change the prevailing poverty narrative. Second, when relationships were established and the communities were starting to challenge the status quo, the next emotional empowerment step was to create spaces for community members to voice their feelings and challenge each other and the external authorities: 'A lot of the work we do is about increasing community responsibility . . . to give them the courage to state their opinions. To give them the courage to state their opinions irrespective of whether it coincides with mine or not is the biggest step of all'. An important way to create the space for community members to find their voices, gain acceptance, and provide support was by establishing self-help groups. One participant exemplified a self-help group for mentally ill women that had had positive results: 'We have formed self-help groups and they are all mentally ill women. The self-help groups are not only high caste or low caste women. It is a nice mix, and we find that they really help each other. Even if it is just to get out of bed, they help each other. We do promote that'.
Another participant explained how the voices and choices of self-help groups generated positive emotional ripple effects in the communities, giving an example of parents sending children to school: 'We have a self-help group saying that all children should be in school, so [the community] sends all their children to school. In the next community also, so it has that ripple effect'. Third, the analysis indicated that the last step in emotional empowerment was the translation of these voices into behaviours by motivating the communities to take collective action to improve their lives and the lives of future generations. This was done by encouraging the community to take ownership of the situation and make decisions based on their own needs. For example, a study participant highlighted the importance of communities taking ownership when it concerns resource dispossession: 'We challenge or combat dispossession . . . indigenous communities are being dispossessed of their resources . . . we show the means or ways how to do it. We enhance their capacity . . . we make communities understand what their right is, and we make them monitor it . . . if you want to fight, we tell you how to fight . . . sometimes, [initiatives] backfire. Sometimes, somewhere, we save people from eviction, sometimes we have let go of evictions. But the matter of fact is we have empowered communities to stand up and decide what they want'. The same interviewee added that this was key because community action makes it harder for the responsible authorities to ignore issues: 'If tomorrow I go and negotiate with [the authorities] of Chennai, saying "homeless people are so marginalized," the [authorities] will say "what is your problem?" But when homeless communities themselves go and negotiate, [they] can't say "what is your problem." It's their problem. We only show them the means or ways how to do it'.
Visible throughout these three community empowerment stages was the local NGOs' focus on the emotional level and the effectiveness of these approaches. For example, one participant explained how parents were emotionally approached by questioning them: 'how was your development during that time, what were the opportunities and facilities you had? What opportunities and facilities do you have now to support your child? Do you want your child to be like you? What is stopping you from sending your child to school?'. Positive results were observed from these emotional approaches: 'we roughly identified 389 children, child laborers in the target area. Out of the 389, we have about 172 children that have been stopped from going to work by their parents. The other half, the children stopped going to work on school days . . . only on Saturdays and Sundays along with the mother, to help the mother. Within three years we were able to reach this point that is the reason, as you said, to beat poverty at the ground level'. Another NGO also noted how changes within communities fostered even more change: 'We were able to bring out some children because rather than us going and telling the parents to not send their child to work, when a child from the same situation shares his or her experience, that had a very good effect. So, through the children, some [other] children were really interested. [ . . . ] So, through this, we just took about 20 students altogether and these children we found escaped the cycle, and then they went and shared, which had a really good impact'.
--- Discussion of key findings
This paper explored the CCB approaches being used by local NGOs in Tamil Nadu, India, to build capacity in local communities and their reasons for adopting these approaches.
It was found that local NGOs mainly focused on empowering local communities at an emotional level by offering them a more hopeful perspective on overcoming their challenges, which, combined with providing spaces to voice their challenges and feelings, allowed the communities to initiate individual and collective development action. Local NGOs used these emotional approaches to overcome the individual and community-wide affective barriers that not only discouraged positive change but also perpetuated existing inequalities. These findings offer two main contributions. First, this research expanded the current understanding of the perceptions local NGOs have of community struggles and the effects that affective barriers have on keeping communities trapped in poverty. This inherent affective dimension of indentured poverty in disadvantaged communities therefore needs further discussion in CCB research and practice, because community dissatisfaction about being disadvantaged and feelings of emotional barrenness and helplessness directly contribute to community poverty. This insight was consistent with prior research showing that the setbacks experienced by people struggling to improve their circumstances can result in general feelings of discouragement. This study specifically examined feelings of low self-esteem and the acceptance of existing social structures, two of the key affective mechanisms promoting negative community attitudes and behaviours and affecting well-being. However, as alleviating this inherent emotional dimension of poverty could be significantly more challenging than addressing material poverty, broader CCB research needs to pay greater attention to these psychological processes and outcomes when assessing the effectiveness of CCB development efforts; that is, a stronger research and practice focus is needed on the affective dimension of poverty to ensure current CCB approaches are more sustainable.
Second, this study provides new insights into the functioning of local NGOs and the approaches they use to address these affective barriers. It was found that the local NGOs tended to engage the communities on an emotional level and that the 'realities on the ground' guided the local NGOs to build community capacity first by instilling the people with the confidence to adapt in the face of setbacks. The specific process employed by the local NGOs expands current PE at the local level and shows how people can become aware of their behavioural choices and can develop a belief in their abilities to initiate action. These emotional empowerment approaches differed significantly from the empowerment approaches suggested in CCB literature and general empowerment theory, which predominantly focus on economic outcomes, material well-being, and people's ability to make choices. Importantly, the local NGOs' emotional empowerment process does not view the disadvantaged communities as just a set of problems; rather, it sees the community as comprising people that can solve their own problems. These 'human' capacity-building insights about who needed to be empowered revealed a promising avenue through which community outlooks could be changed. The inclusion of these considerations in CCB programmes could yield better results than initiatives such as aid, education, media, and information campaigns, which have been found to be ineffective in changing behaviours, addressing local needs, or overcoming the patron-client operating modes most often adopted by larger NGOs. This study indicates that the key foundation to building these relationships is emotional connection, without which 'proper trust does not occur'.
--- Limitations, future research, and conclusion
This study was conducted within a set of boundaries and in acknowledging these limitations, the following directions and avenues are suggested for further research.
First, this study emphasized local voices in Tamil Nadu, which may not be representative of local CCB efforts in other contexts. Replicating this study in other local contexts could therefore examine the scalability of the local NGO operations in vulnerable communities that could use support. Second, while this research highlighted that local NGOs sought to realize positive change through emotional empowerment, the effectiveness of this process was not fully investigated. Although some anecdotal evidence of the local NGOs' success was found, as this was not the focus of the study, the long-term community CCB outcomes were unclear. Studies by Islam suggested that the local NGO impact was less than claimed, with many NGOs merely being providers of goods to 'consumers' rather than 'facilitators' of empowerment. Future research, therefore, could benefit from examining the outcomes of these types of local CCB approaches. Third, this study was directed at participants within the local NGOs who were knowledgeable of the community hardships. Future studies could, therefore, focus more explicitly on these underlying emotional approaches and the influence of the local NGOs' activities, which could reveal additional mechanisms, relationships, and insights beyond those discovered in this study. This study also did not consult community members on their experiences with poverty. Future studies could include these voices to better understand the community's feelings of lived poverty. Utilizing broader types of data could also uncover hidden complexities and emotions that cannot be put into words. In conclusion, this research started to answer some of the questions related to local NGO CCB approaches to assist the disadvantaged in their continuing struggles with poverty and deprivation in many places around the world.
Future research is needed to further unpack the role local NGOs play in CCB and the impact they have on the affected communities.
--- Data availability
Research data are not shared, as our ethics approval from Curtin University does not permit sharing.
--- Declarations of interest
None.
Locally based non-governmental organizations (NGOs) play an important role in community capacity building (CCB). Because these NGOs are generally located close to the affected communities, they have the local knowledge to identify problems and assist the affected communities to address them. However, the methods these local NGOs use to build capacity in the local communities and the reasons they choose certain CCB approaches are not well known. To enhance the knowledge in these areas, this study conducted semistructured interviews with local NGOs and local NGO experts in Tamil Nadu, India. It was found that local NGOs build capacity in communities using emotional empowerment; a process that involves providing communities with (i) a sense of care and hopefulness, (ii) spaces to voice their feelings and challenges, and (iii) support to transform their voices into community action. Local NGOs adopted this approach because they found that long-term disadvantaged communities had high affective barriers, such as discouraging personal outlooks and an acceptance of existing social structures, which prevented them from taking ownership of the situation. Theoretically, these insights contribute to CCB literature by drawing attention to the community's psychological processes and emotional empowerment characteristics. They also add to wider poverty debates by highlighting the affective community barriers that perpetuate existing inequalities.
--- Introduction
What are the partnering and parenthood trajectories of sexual minorities? Have these life course trajectories changed across cohorts? Existing theoretical and empirical scholarship provides little guidance in answering these questions. Theories of demographic change primarily focus on explaining declines in relationship duration, marriage, and parenthood in the general population. Such perspectives are not designed to describe cohort change among the lesbian, gay, and bisexual (LGB) population, which was historically excluded from marriage and various routes into parenthood. Empirically, most representative data sources available to study sexual minorities are limited to coresident couples consisting of two persons of the same sex or gender. Although these data are informative in understanding parenthood and partnership outcomes of same-sex couples at a given time, they hinder our understanding of union formation and parenthood as dynamic processes that evolve over the life course. Moreover, these data represent only sexual minorities that selected into partnering. Recently, large-scale nationally representative surveys started incorporating questions about sexual identity. Nonetheless, the opportunities these data sources provide to study the life course dynamics of partnering and parenthood among sexual minorities have remained underexplored. In this article, we use information on self-reported sexual identity and retrospective and prospective coresidence histories with partners and children to describe cohort change in the family formation trajectories of sexual minorities. We answer two main questions: What are the family formation trajectories of lesbian, gay, and bisexual individuals? Was there a cohort change in the type of trajectories?
Using data from the Understanding Society survey, we construct the life courses of a relatively large representative sample of LGB women and men between ages 18 and 40. To examine cohort change, we compare a cohort that reached mid-adulthood before most major legal changes took place in the United Kingdom to a cohort that experienced those changes during early adulthood. We employ a sequence analysis approach to answer our questions. Previous research focused primarily on parenthood in same-sex unions or on union formation and dissolution. The sequencing of partnerships and parenthood over the life course provides a rich descriptive account of coresident familial experiences by documenting the occurrence, order, timing, and duration of multiple family-related events. We find descriptive evidence that the likelihood of belonging to family trajectories characterized by singlehood and trajectories that include parenthood has declined. In contrast, the likelihood of belonging to trajectories in which individuals live with a partner but not with children has increased across cohorts.
--- Family Trajectories in the LGB Population
We start the literature review with an overview of existing knowledge about union formation, dissolution, and childbearing among LGBs that guides several expectations regarding family life course trajectories. Subsequently, we discuss how LGBs' trajectories fit within existing theoretical frameworks and general narratives of demographic change.
--- Union Formation and Dissolution
Cross-sectional research in the United States suggests that LGBs are less likely to be in a coresidential relationship than heterosexuals.
Lesbian women's partnership rates are similar to those of heterosexual women, while partnership rates are considerably lower among gay men and among bisexual men and women. Moreover, studies on attitudes toward partnering have found that lesbian, gay, and bisexual women and men desire to be in relationships and to marry at similar rates as heterosexual women and men across various countries. Similarly, Meier et al. found modest differences across sexual identities in relationship values among young men and women in the United States. There are several factors hindering LGBs' union formation that could shape different partnering trajectories across cohorts. First, fewer opportunities exist to meet suitable partners because fewer people identify as LGB than as heterosexual. Moreover, in contrast to heterosexuals, it is relatively uncommon for same-sex couples to have met their romantic partners through family or in school. However, these barriers have lessened over time; same-sex relationships are more prevalent, and nonheterosexual identities are more visible. Also, decreased parental control and increased geographic mobility allow people more independence to form unions. Furthermore, the emergence of online dating helped to facilitate connections between LGBs more easily. Second, minority stress can lower the "returns" from being in a relationship. Minority stress is caused by discrimination and microaggressions related to a specific minority status. LGBs experience many of these stressors when they enter a same-sex union, which could deter people from entering relationships.
Studies in Germany and the United States have found that people in same-sex relationships are more likely to experience parental and social disapproval and express more concerns about their partner's acceptance by family and friends than are people in different-sex relationships. In an analysis of preferences among online daters across eight countries, Potârcă and colleagues found that residing in a supportive environment or region with formal recognition of same-sex unions was associated with an increase in long-term dating intentions among individuals looking for same-sex partners. Given the rapid increases in acceptance of same-sex couples over the last decades, we would expect that younger cohorts of LGBs are more likely to seek long-term unions than older cohorts of LGBs. Third, institutional discrimination, such as the legal exclusion from marriage, can further increase the costs and reduce the benefits of being in a coresidential union for older LGBs. Same-sex marriage was not legal for large parts of older LGBs' lives. Marriage, unlike cohabitation, is related to institutional benefits in many contexts. For instance, legalizing same-sex marriage improved health outcomes among individuals in same-sex unions by increasing access to health insurance. Moreover, reductions in institutional discrimination are generally related to increased well-being for same-sex couples. The developments discussed so far would predict an increase in the prevalence of LGBs' coresidential relationships across cohorts. However, reduced stressors and discrimination might not directly increase partnership rates. For example, the legalization of same-sex marriage did not increase the shares of LGBs in a union in Massachusetts.
Furthermore, some people in same-sex marriages express ambivalent feelings toward marriage and view it as a patriarchal and heteronormative institution, whereas others express their concerns that marriage could further assimilate sexual minorities into mainstream culture. Singlehood could be more appealing to LGBs than to heterosexuals thanks to strong friendship networks, and younger LGBs might opt to reject the heteronormative script of family formation that is embedded in coresidential partnerships. Taken together, these trends could theoretically imply cohort stability or even a decline, rather than an increase, in the prevalence of partnering among LGBs. The foregoing discussion has centered on union formation processes, but the same factors are also relevant to understanding union dissolution among LGBs. Same-sex unions are more likely to end in separation than different-sex unions in some countries, possibly because of the negative effects of stigma, discrimination, and lack of family support on relationship quality. Obstacles toward parenthood and marriage can prevent LGBs from investing in coresidential romantic relationships. From this perspective, minority stress and stigma are likely to lead to relatively high separation rates, but these could have declined across cohorts as investments in relationships became more available to LGBs. There are also reasons to expect separation rates to increase or remain unchanged across cohorts. For instance, higher separation rates can stem from LGBs giving less importance to lifelong commitment in relationships than do heterosexual women. Similarly, Lau did not find changes in the stability of same-sex unions across cohorts in the UK and suggested that, among older cohorts, only committed same-sex couples might have decided to start living together.
Finally, separation and serial cohabitation have become more common across cohorts in the general population, which could have made separation and repartnering among younger cohorts of LGBs prevalent as well. In summary, multiple social and demographic forces suggest that the prevalence of trajectories involving partnerships should increase across cohorts, but expectations regarding changes in the prevalence of union instability are less clear.
--- Pathways Into Parenthood
Most evidence on parenthood among LGBs is limited to childbearing and parenting in same-sex couples. Overall, there is consistent evidence that women, and especially men, in same-sex couples are less likely to coreside with children than individuals in different-sex couples. However, there is variation within the LGB population. Recent research in the United States showed that bisexual women are as likely as heterosexual women to coreside with children. In contrast, bisexual and gay men are less likely than heterosexual men to coreside with children. Trends over time are mixed. Kolk and Andersson found that childbearing within same-sex marriages increased considerably over time for Swedish women. In contrast, Gates reported that the share of same-sex couples living with children in the United States has declined. There are numerous pathways into parenthood for LGBs. A prominent pathway among older LGB cohorts was through childbearing in different-sex relationships. Studies have suggested that gay men continue to experience pressures to form a different-sex union. Sexual minority women also continue to have higher rates of unwanted births. Nonetheless, parenthood through a previous different-sex relationship is becoming less common among same-sex couples in the United States.
Alternative pathways to parenthood, which have become more accessible for LGBs over time, include adoption, assisted reproduction techniques (ARTs), and surrogacy. However, these options require considerable economic resources and planning. Therefore, expanding access to routes into parenthood could have increased the number of same-sex couples with children, but these increases are limited to those with economic resources. Relatively low levels of parenthood among LGBs can also be related to intentions and desires to become a parent. Studies on parenthood desires have documented that most gay and lesbian people would like to have children, but this share is lower than for heterosexuals and bisexuals. Nonetheless, the gap between desired and observed parenthood is greater for gay men and lesbian women than for heterosexual men and women. Several studies have documented how minority stress and the experience of discrimination reduce parenthood intentions among gay men and lesbian women, while favorable policy environments and involvement in the LGBT community increase parenthood desires. This would imply that the gap in parenthood desires between gay/lesbian and heterosexual individuals is smaller in more favorable contexts. Furthermore, societal pressures to follow mainstream family trajectories, including the formation of "nuclear" families, could increase for LGB individuals as possibilities to access institutions like marriage and parenthood expand. Because of the focus on same-sex couples in the literature, we know very little about LGBs' experiences with single parenthood. However, studies have consistently found that sexual minority women are more likely to experience unwanted births.
Moreover, given that many same-sex parent families are formed after the dissolution of a different-sex union, it is likely that single parenthood is prevalent for some part of LGBs' life course, especially among older birth cohorts. However, it is unclear to what extent this was common and whether it changed across cohorts. In summary, evidence suggests that social, legal, and demographic forces have varying implications for younger LGBs' experience with parenthood over the life course. The increasing acceptance of same-sex partnerships could have reduced the prevalence of parenthood within different-sex relationships among LGBs, whereas alternative routes toward parenthood have become more available. How the relative weight of both trends translates into cross-cohort trends in life course trajectories that include parenthood is unclear.
--- Demographic Change and Sexual Minorities
Existing narratives of demographic change are not constructed to understand family-related changes among the LGB population. These narratives focus on a retreat from hegemonic family trajectories driven by ideational change in the general population, economic insecurity, and gender relations as drivers of demographic change. Among these drivers, only ideational shifts help to understand demographic change among LGBs. The increasing prevalence of same-sex unions and families can be perceived as societal changes toward self-realization. However, this general understanding does not predict what type of family trajectories exist among LGBs and how different family trajectories have changed over time as opportunities to form a family expanded. To illustrate, rather than retreating from institutions like marriage and parenthood, the LGB population has made advancements toward accessing these institutions in recent decades in many countries.
As a result, demographic change within the LGB population will likely look very different from that of the heterosexual population. To empirically document demographic change through life courses, past research has quantified change in the type, complexity, and diversity of family trajectories in the general population. Such empirical approaches could be more easily applicable to LGBs than theories of demographic change. Past studies found that family trajectories in the general population became more diverse across individuals. The hegemonic trajectory of early lifelong marriage and parenthood gave way to alternative trajectories, including cohabitation, separation, repartnering, and single parenthood. The evidence of cohort changes in the complexity of individual trajectories in the general population is mixed. McMunn et al. found that partnering trajectories became more eventful with increases in repartnering, while parenthood trajectories became less complex over time in the UK. Van Winkle found that family trajectories became more complex across cohorts in various countries, but cohort changes were relatively minor compared with cross-national differences. In the case of the LGB population, the option to marry and have children through pathways such as adoption and ARTs could increase the diversity and complexity of family trajectories across cohorts. This can especially be the case if some of the LGB population do not follow heteronormative family trajectories, whereas others embark on more heteronormative pathways of family formation that were previously unavailable. However, whether complexity and diversity in family trajectories between individuals increased depends on how diverse and complex they were in older cohorts.
It is unclear how typical or dominant family trajectories, such as lifelong singlehood, were among LGBs in the past. In short, previous theoretical perspectives of demographic change have focused on majority populations and population averages, while empirical accounts summarizing demographic change have not yet studied the LGB population. Therefore, in the remainder of the article we present an analysis that directly acknowledges and focuses on the contextually unique experience of minoritized groups.
--- Study Context
The UK's fertility and union dissolution rates are higher than those of other European countries but lower than those of the United States. The environment for LGB individuals has changed rapidly and positively over time. In 2005, civil partnerships became available to same-sex couples. The same year, adoption became available to same-sex couples and single individuals in England and Wales. Sexual orientation was incorporated into antidiscrimination laws in 2007 and 2010. In 2009, ARTs and the possibility of having two mothers on a birth certificate became available. In 2014, same-sex marriage was legalized in England, Wales, and Scotland. Attitudes toward sexual diversity are relatively positive in the UK compared with other countries. We compare LGBs born before 1965 to LGBs born between 1965 and 1979. The older cohort reached age 40 before significant legal changes took place. The younger cohort saw these changes unfold during early or mid-adulthood and could, to some extent, have taken advantage of the changing possibilities to form unions and transition into parenthood. Younger cohorts born after 1979 are excluded from our analysis because they have yet to complete their prime family formation years, albeit being the cohorts to experience the most extensive range of possibilities yet.
Nonetheless, our analysis provides the first benchmarks of family trajectories for LGB individuals.
--- Methods
--- Data and Sample
We use the Understanding Society survey (UKHLS) data from 2009 to 2019. UKHLS is a representative household panel survey of the UK population and is one of the very few large-scale surveys to collect information about sexuality as well as partnership and parenthood histories. These unique features allow us to identify LGB individuals on the basis of their self-reported sexual identity instead of inferring it from the gender of their partner and to reconstruct their histories of coresidence with partners and children. We include respondents who were present in at least one wave in which complete retrospective histories were collected as well as one of the waves in which sexual identity was reported. Subsequently, we exclude all individuals who had not reached age 40 by the date of the last interview, as our analysis relies on analyzing complete partnering and parenthood histories. We also exclude cases with more than six years of missing partnership and parenthood histories or missing information about age and sexuality. The main analytic sample includes 455 LGB individuals observed between ages 18 and 40 and born before 1979.
--- Measures
--- Sexual Identity
The Understanding Society survey asked respondents, "Which of the following options best describes how you think of yourself?" The answer options were heterosexual or straight, gay or lesbian, bisexual, other, prefer not to say, and don't know. Sexual identity is a time-varying characteristic but is measured only at two points in time in our data. Our analysis, therefore, documents differences in family trajectories based on sexual identity measured in the last wave with nonmissing information.
We restrict the analysis to two mutually exclusive groups: bisexual and gay/lesbian individuals. Gender/sex was collected with the question, "And you are male/female?" thus preventing us from considering other gender identities or from comparing cis- and transgender persons. --- Partnerships Union formation, dissolution, and partnership spells are based on self-reported coresidence with a partner of any gender. We do not have information about nonresident partnerships. In Waves 1 and 6, respondents were asked to retrospectively report the starting and ending dates of up to 12 coresidential relationships that lasted at least three months. This information was updated and harmonized with prospective information collected annually in the subsequent waves . We use these retrospective histories on coresident partners to construct a person-month file with a dummy variable indicating whether, in that month, the person coresided with a partner. --- Parenthood Parenthood is recorded from reports about coresidence with children younger than 18. Respondents were asked in Waves 1 and 6 whether they were ever the parent of an adopted, biological, or stepchild, and this information is updated in each wave. For biological children, the survey collects the year of birth and the year the respondent last lived with the child. For adopted and nonbiological children, the survey collects the year the respondent started and stopped living with the child . We assume respondents started living with biological children in the year of birth and create a dummy variable of coresidence with a child in each year of the person-year file accordingly. Focusing on coresidence allows us to capture spells of single parenthood and residence with nonbiological children.
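The spell-to-panel construction described above can be sketched in a few lines. This is a minimal illustration assuming a hypothetical, simplified record layout (dicts with start and end years) rather than the survey's actual variable names; the real file is monthly and harmonized across waves.

```python
# Sketch: build a person-year panel of coresidence dummies from
# retrospective spell data. The spell layout is a hypothetical
# simplification of the harmonized UKHLS histories.

def person_year_panel(birth_year, partner_spells, child_spells,
                      start_age=18, end_age=40):
    """Return one row per age with dummies for living with a partner/child."""
    rows = []
    for age in range(start_age, end_age + 1):
        year = birth_year + age
        with_partner = any(s["start"] <= year <= s["end"] for s in partner_spells)
        with_child = any(s["start"] <= year <= s["end"] for s in child_spells)
        rows.append({"age": age, "partner": int(with_partner),
                     "child": int(with_child)})
    return rows

# Example: a partnership spell 1990-1995 and a coresident child from 1993.
panel = person_year_panel(
    birth_year=1965,
    partner_spells=[{"start": 1990, "end": 1995}],
    child_spells=[{"start": 1993, "end": 2005}],
)
print(panel[7])   # age 25, year 1990: partnered, no coresident child yet
```

For biological children, the start year of the spell would simply be set to the child's year of birth, mirroring the assumption stated in the text.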
However, it does pose some limitations, namely, that it does not account for nonresident children. In an additional analysis, we show that parenthood trends are similar when also considering nonresident biological children . More generally, we acknowledge that our family states, that is, coresidence with partners and children, do not capture a wider range of family roles and structures in the LGB population. Given sample size limitations, our family sequences cannot differentiate between marriage and cohabitation for those partnered or between biological and nonbiological children. We provide the prevalence of ever having resident or nonresident children as part of the descriptive statistics to offer a better characterization of family trajectories among sexual minorities. We keep individuals with missing information on partnering or parenthood for part of the observation period and include "missing" as an additional state in their family trajectories. --- Analytic Approach We use sequence analysis, in which a person's family trajectory is conceived as a succession of family states during the prime family formation period. In our analysis, each respondent's sequence consists of a series of monthly or yearly states of coresidence with partners and children starting from age 18. Four different family states are observed: single without coresident children, single with coresident children, partnered without coresident children, and partnered with coresident children. Our operational definition of a family sequence enables us to assess multiple partnership episodes , the transition into parenthood, and their timing and sequencing within a person's family trajectory. Figure 1 shows three fictional sequences of family states over a 10-year period.
In this example, S1 depicts a trajectory with no family transitions, wherein a person does not live with children or a partner throughout the observation period; S2 features a transition to single parenthood in the sixth observation; and S3 features the start of a partnership in the second observation and the presence of a child since the seventh observation. On the basis of these sequences, we generate a typology of underlying family trajectories to answer the first research question. First, we compare each pair of sequences in our sample and calculate their similarity or optimal matching distance. These pairwise OM distances are based on criteria that account for differences in timing, sequencing, and duration of the different states observed across sequences . Second, we use hierarchical cluster analysis with the Ward linkage to cluster similar sequences into a typology of trajectories or profiles. We chose a five-cluster solution among the available solutions given by the cluster analysis because of its empirical fit to the data at hand and because it renders a set of theoretically meaningful pathways . Even though sequences that belong to the same group are not identical, they conform to a general trajectory pattern, allowing us to better categorize family trajectories than what would be obtained by using simple indicators of the prevalence of events. We explore cohort change through multinomial regression analysis predicting cluster membership by birth cohort. We measure the birth cohort as a dummy variable that compares LGBs born before 1965 and between 1965 and 1979. We control for gender, sexual identity, education, race-ethnicity, and family structure at age 16. We also present predicted probabilities of cluster membership by gender and sexuality while controlling for the same covariates.
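The two-step typology procedure, pairwise optimal matching distances followed by Ward hierarchical clustering, can be sketched as below. The indel and substitution costs (1 and 2) are illustrative defaults, not the paper's calibrated values; applied work typically uses dedicated packages such as TraMineR in R.

```python
# Sketch: OM distances between family-state sequences, then Ward clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def om_distance(a, b, indel=1.0, sub=2.0):
    """Classic OM distance: edit distance with indel and substitution costs."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1))
    d[:, 0] = np.arange(m + 1) * indel
    d[0, :] = np.arange(n + 1) * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i, j] = min(d[i - 1, j] + indel,      # delete from a
                          d[i, j - 1] + indel,      # insert into a
                          d[i - 1, j - 1] + cost)   # match / substitute
    return d[m, n]

# Toy states: S = single, P = partnered, C = partnered with child.
seqs = ["SSSSSSSSSS", "SSSSSPPPPP", "SSPPPCCCCC", "SSSSSSSSSP"]
dist = np.array([[om_distance(a, b) for b in seqs] for a in seqs])

# Ward linkage on the condensed distance matrix, cut into two clusters.
labels = fcluster(linkage(squareform(dist), method="ward"),
                  t=2, criterion="maxclust")
```

On these toy sequences the mostly-single trajectories end up in one cluster and the partnered trajectories in the other, which is the same logic that yields the five-cluster typology in the full sample.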
In a supplementary analysis, we calculate composite measures of changes in the complexity and diversity of sequences used by previous empirical research . We present results only for LGBs in the main text because they are the focus of our analysis. We refer to heterosexuals when it is useful for comparative context and present results for heterosexuals in the online Supplementary Materials. --- Results --- Descriptive Family Formation Statistics Figure 2 shows the distribution of the four family states at every age between 18 and 40 for LGB women and men. The graph shows that singlehood, that is, not living with a partner or children, is a prominent state for LGBs across all ages in our sample. About 40% of LGB sample members were not living with a partner or children at age 40, compared with about 10% of the heterosexual sample members . Being partnered with children is also a prominent state of LGBs' life courses, but being in a union without children is more common . Moreover, having children within partnerships occurs late in the life course. Finally, single parenthood is the least common state. When it does occur, it is later in the life course. --- Family Formation Trajectory Typology Figure 3 presents results for the cluster analysis that produced five clusters of family formation trajectories among LGB women and men in the sample: mostly single, no children ; mostly partnered, no children ; early partnering and parenthood ; delayed partnering and parenthood ; and single parenthood . Two thirds of the sample belonged to Clusters 1 and 2, which are characterized by not residing with children but differ in partnering patterns. The largest cluster in the sample is Cluster 1, with 38%; it is characterized by stable singlehood or short partnership spells, mostly between ages 24 and 34.
A significant share of the trajectories in this cluster had a partnership at least once by age 40, but these partnerships were short-lived . Twenty-eight percent of the sample belonged to Cluster 2, which is characterized by long partnership spells after age 30 or earlier partnering and repartnering throughout the observed life course. A third of the sample was distributed across Clusters 3, 4, and 5, characterized by coresident parenthood but differing partnering patterns. Nineteen percent belonged to Cluster 3, characterized by early partnering and parenthood transitions by the mid-20s. Another 11% belonged to Cluster 4, with delayed partnering and parenthood transitions. Specifically, this cluster includes family formation trajectories that start with early union formation and later parenthood transition or delayed union formation with a transition to parenthood soon after. Finally, a small group of LGB sample members belonged to Cluster 5, characterized by long single parenthood spells. Because of the small sample size, we combined Clusters 3, 4, and 5 into one group of parenthood clusters for the rest of the analysis. What sociodemographic characteristics are associated with belonging to each cluster? Table 1 shows bivariate descriptive sociodemographic characteristics for each of the three clusters: mostly singlehood , mostly partnership , and coresident parenthood . Overall, women and bisexual people were underrepresented in the singlehood cluster but overrepresented in the parenthood clusters. White LGBs were relatively less likely to belong to a parenthood cluster. LGBs whose parents were not together when they were 16 years old were more likely to be in the singlehood cluster, whereas LGBs whose parents were together were more likely to be in the partnership cluster.
Overall, there are no notable differences in education across the clusters in our sample. Table 1 also shows that more than half of the people in the singlehood cluster have had a coresident partner, but as Figure 3 showed, most of these partnerships were short-lived. On average, the number of partners was similar among LGBs in the partnership and parenthood clusters . More than half of the people in the parenthood clusters have ever had nonresident minor children, in contrast to 6% and 4% among LGBs in the singlehood and partnership clusters, respectively. This striking contrast implies that a very small proportion of LGBs in these clusters were ever parents. --- Cohort Change in Family Formation Trajectories --- Cluster Membership Across Cohorts We first examine how the two LGB cohorts distribute across clusters to explore cohort change in family trajectories. Table 1 shows that LGBs born between 1965 and 1979 were more likely to belong to the partnership cluster than LGBs born before 1965. Table 2 shows results for a multinomial regression analysis predicting cluster membership. These results confirm that the younger LGB cohort was more likely than the older cohort to experience partnership trajectories than parenthood or singlehood trajectories. Figure 4 gives better insight into absolute changes in the prevalence of family trajectories and how these changes differed by gender and sexual identity . We observe that both gay men and lesbian women have become considerably more likely to follow partnership trajectories across cohorts. This has come at the expense of trajectories characterized by parenthood and singlehood, although few gay men followed parenthood trajectories in the older cohort.
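The regression step behind these cohort comparisons, a multinomial model predicting cluster membership from a cohort dummy plus controls and then predicted probabilities, can be sketched on synthetic data. Everything below (the data, the single control, the effect size) is illustrative only; the paper's actual estimates come from the survey sample with the full set of covariates.

```python
# Sketch: multinomial regression predicting cluster membership
# (0 = singlehood, 1 = partnership, 2 = parenthood) by birth cohort,
# then predicted probabilities. Synthetic data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 455
cohort = rng.integers(0, 2, n)    # 1 = born 1965-1979, 0 = born before 1965
woman = rng.integers(0, 2, n)     # stand-in for the model's controls
X = np.column_stack([cohort, woman])

# Synthetic outcome: younger cohort tilted toward the partnership cluster.
p = np.tile([0.4, 0.3, 0.3], (n, 1))
p[cohort == 1] = [0.25, 0.5, 0.25]
y = np.array([rng.choice(3, p=row) for row in p])

model = LogisticRegression().fit(X, y)  # multinomial with the lbfgs solver
probs_old = model.predict_proba([[0, 1]])[0]    # older cohort
probs_young = model.predict_proba([[1, 1]])[0]  # younger cohort
print(probs_old.round(2), probs_young.round(2))
```

The predicted probability of the partnership cluster is higher for the younger cohort, which is the pattern the fitted model recovers from the synthetic data by construction.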
Among gay men, singlehood was clearly the most common family trajectory type among the older cohort , but partnership trajectories were most common among the younger cohort. For lesbian women, both parenthood and singlehood trajectories were prevalent among the older cohort , yet partnership trajectories became the most common trajectory among the younger cohort . These results align with the expectation that partnership trajectories will become more prevalent across cohorts as the opportunities to meet same-sex partners increase and the stigma and discrimination decline over time. However, our results regarding parenthood suggest that only relatively small shares of the cohorts studied have taken advantage of increasing access to alternative routes into parenthood, such as ARTs and adoption. Fig. 4 Predicted probability for cluster membership by birth cohort, gender, and sexuality. Multinomial regression analyses include interactions between sexual identity, gender, and cohort, and control for race-ethnicity, education, and family structure at 16. Source: Understanding Society . Results for bisexual women and men show different patterns of change than those observed among gay/lesbian individuals. Although the sample size calls for caution in interpretation, the descriptive patterns among bisexuals align with general narratives of demographic change. Bisexuals from the younger birth cohort in our sample have experienced a decline in parenthood and partnership trajectories but an increase in singlehood trajectories. This change was more substantial among bisexual men than bisexual women.
--- Additional Analysis: Measures of Diversity and Complexity Empirical studies on demographic change often provide summary measures of how eventful family trajectories are and how diverse trajectories are across individuals to quantify other forms of change in life trajectories. Figures 5 and 6 illustrate such measures for our sample, and we briefly summarize the main takeaways here . Overall, we find that family trajectories of lesbian women and bisexual men became less diverse over time, meaning their family pathways are becoming more similar to each group's representative trajectory . This result does not imply that all lesbian women and bisexual men follow the same family formation pathways, nor that the subjective experience of similar pathways is alike. However, regarding coresident partners and children, the sequences of younger lesbian women and bisexual men in our sample were more likely to be similar to one another than the sequences of their peers born before 1965. Results for complexity, that is, the number of events people experience, show a relatively uniform increase across cohorts for all groups . This probably reflects the replacement of low-complexity trajectories, such as those characterized by singlehood, with trajectories characterized by partnership. --- Discussion Research on demographic change has mostly overlooked the family trajectories of sexual minorities . Existing empirical research has yet to document how partnerships, parenthood, and singlehood evolve over the life course of sexual minorities. Similarly, existing theoretical narratives of demographic change focus on the general population and can, at best, explain why same-sex unions have become more visible in recent decades.
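The complexity and diversity measures summarized in the additional analysis above can be illustrated with a minimal sketch. These are deliberately simplified stand-ins (transition counts and mean pairwise Hamming distance) for the composite indices used in the sequence-analysis literature, such as the complexity index or turbulence.

```python
# Sketch: simplified complexity and diversity measures for state sequences.

def n_transitions(seq):
    """Complexity proxy: count of adjacent state changes in one sequence."""
    return sum(a != b for a, b in zip(seq, seq[1:]))

def diversity(seqs):
    """Diversity proxy: mean pairwise Hamming distance within a group
    of equal-length sequences."""
    pairs = [(a, b) for i, a in enumerate(seqs) for b in seqs[i + 1:]]
    return sum(sum(x != y for x, y in zip(a, b)) for a, b in pairs) / len(pairs)

# Toy cohorts: S = single, P = partnered, C = partnered with child.
older = ["SSSSSSSS", "SSSSSSPP", "SSCCCCCC"]
younger = ["SSPPPPPP", "SSPPPPPP", "SSSPPPPP"]

print(n_transitions("SSPPPCCC"))              # 2 changes: S to P, P to C
print(diversity(older), diversity(younger))   # older cohort more heterogeneous
```

In this toy example the younger cohort's sequences cluster around a shared partnership pathway, so diversity falls even as individual sequences contain more transitions, mirroring the mixed pattern reported for lesbian women.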
These narratives provide little understanding of demographic change among sexual minorities in a context where possibilities to pursue partnerships, marry, and have children within same-sex relationships have expanded. In this article, we provide what is, to our knowledge, the first quantitative description of family life courses among LGB persons and the first assessment of how family trajectories changed across two cohorts of LGBs in the UK. Our results provided several novel empirical observations. First, we identified five distinct profiles of family trajectories between ages 18 and 40. Trajectories characterized by singlehood or partnership without children were the most prevalent. Previous cross-sectional research in the United States demonstrated that LGBs were less likely to have a partner at a given time than heterosexuals, except for lesbian women . We add to this body of work that a considerable share of LGBs born before 1979 did not coreside with a partner before age 40 in the UK. For some groups, these shares were substantial. For instance, 60% of gay men born before 1965 followed a trajectory that mainly consisted of singlehood. This pattern means that empirical studies based on same-sex couples omit a large part of the LGB population, especially among older cohorts of gay men. We also found some differences compared with studies from the United States . For example, trajectories characterized by singlehood were more prevalent among lesbian women than among bisexual women in the UK. We also found that gay men were more likely to follow singlehood trajectories than were bisexual men. Both results differ from patterns observed in the United States, where bisexual individuals were more likely to be single than were gay/lesbian individuals . This divergence could be because we take a life course rather than a cross-sectional approach.
This divergence highlights the importance of a life course perspective in understanding family formation processes among LGBs. However, we also observed that singlehood trajectories became more common among bisexuals across cohorts, an issue we elaborate on later. Our second novel observation is the increasing prevalence of trajectories characterized by coresidential partnerships among the younger gay and lesbian individuals in our sample . This trend was expected amid the increasing availability and visibility of potential LGB partners, the decreasing stigma and discrimination toward same-sex couples , and LGBs' expressed interest in having a partner . Which mechanism specifically drives this trend is a question for future research. However, if the underlying mechanism is the decreasing stigma and institutional discrimination, we expect that singlehood trajectories will continue to be highly prevalent among gay/lesbian individuals in contexts where stigma and discrimination are higher than in the UK. Nonetheless, our results for gay and lesbian individuals contradict predictions derived from theories of demographic change. The quest for independence is central to the second demographic transition framework, which should manifest in more singlehood trajectories across cohorts rather than a decrease. Therefore, to understand demographic change among LGBs, demographers should also consider the diversity of family formation experiences within societies. The results for bisexuals, however, were consistent with the trend in the general population. Bisexuals in the younger birth cohort, especially bisexual men, were more likely to experience a singlehood trajectory than the older birth cohort.
Barriers to same-sex partnering restricted options for both bisexual and gay/lesbian people. However, the availability of different-sex partnering could explain why bisexuals in the older cohort were more likely to follow a partnering or parenthood trajectory than gay and lesbian people. As some barriers to same-sex partnering have been reduced, younger bisexual men could opt out of different-sex relationships as a form of independence. However, the decline in bisexuals' partnership trajectories could also reflect variation in stigmas that different sexual minority groups experience amid the second demographic transition. For instance, bisexuals experience double erasure by both heterosexuals and other sexual minorities, which reduces their partnering opportunities . This biphobia impacts bisexual men more than bisexual women, which could explain the more pronounced increase in singlehood trajectories among bisexual men. In other words, although bisexuals' trends align with trends in the general population, they could be driven by different societal forces. Hence, our results underscore the importance of understanding the diversity of family trajectories across different sexual minority groups in future research. A third novel empirical observation was that a nonnegligible but relatively small share of UK LGBs experienced life courses that included prolonged spells of coresidence with children. This share was smallest among gay men and largest among bisexual men and women. Parenthood trajectories became slightly less common across cohorts for gay men, bisexual men, and bisexual women, but a pronounced decline was observed among lesbian women. This trend aligns with evidence of a decline in same-sex couples' likelihood to coreside with children born in previous different-sex relationships .
This explanation is plausible because the most notable decline in our data was among lesbian women, who are more likely to live with children from past different-sex relationships than are gay men. Nonetheless, the decline in parenthood trajectories is paradoxical amid relative expansion in access to planned pathways to parenthood for same-sex couples, including adoption and ARTs. What could explain this paradox? First, younger LGBs might be more likely to reject heteronormative family trajectories . Second, planned pathways to parenthood might be out of reach for younger LGBs. Despite significant progress in ARTs, the procedures are costly . Moreover, subsidy eligibility is determined by heteronormative criteria, such as not conceiving after having had unprotected intercourse for at least a year. Consequently, women in same-sex couples are often denied subsidies unless they try to conceive through privately funded insemination donors or have undergone an infertility test . Similarly, although adoption became available to same-sex couples in the 2000s, rates are still low and same-sex couples are overrepresented among the couples who adopt hard-to-place children in the UK . Future research should investigate cohort change in LGBs' pathways to parenthood across more and less favorable contexts than the UK. The decline in parenthood trajectories also raises questions about LGBs' future care needs as they age. Sexual and gender minorities have historically relied on chosen families and their communities for care support owing to individual and institutional discrimination . However, amid population aging and increasing demand for family-provided care, an additional unmet need for care could emerge among LGBs of the younger cohorts if they continue to be less likely to experience parenthood trajectories.
Future research should investigate how family-related changes affect different sexual minority groups' access to care support over time. Besides describing family trajectory types, we also explored the complexity and diversity of LGBs' family trajectories, which is another pillar of demographic change research. Previous research argued that family trajectories have become more eventful and diverse among the general population . However, our results were mixed. On the one hand, LGBs' family trajectories became more complex across cohorts owing to a decrease in singlehood trajectories. On the other hand, the trend in diversity, that is, the extent to which people's trajectories are different from a single dominant trajectory, was different across groups. Lesbian women's trajectories became slightly less diverse, while gay men's trajectories became slightly more diverse across cohorts. Both groups experienced increases in partnership trajectories that increased diversity, and both groups experienced declines in parenthood trajectories that decreased diversity. However, for lesbian women, the decline in parenthood trajectories was much more substantial, leading to an overall decline in diversity. Our study has several limitations that offer directions for future research. First, our small sample limited our ability to break down trends by education, race, ethnicity, or more detailed cohorts. In addition, our LGB sample could only identify as women or men, and sexual identities beyond LGB were not available as answer options. Nonetheless, using sexual identity measures is one step forward beyond existing practices in family scholarship that focus on same-sex couples.
The sexual identity measure allowed us, for instance, to uncover differences between bisexual and gay/lesbian individuals and to look at LGB individuals who are single or single parents. Second, our conceptualization of family formation is limited to coresidence with partners and children, representing a heteronormative definition of family life . Our approach does not include nonresident partners and other forms of partnerships and relationships that could be more prominent in LGBs' perception of a family . However, we demonstrated a cohort change within the scope of coresidential familial relationships. These coresidential trends are essential to document amid existing social and institutional barriers. Moreover, our sequence analysis allowed us to focus on the diversity and complexity of family trajectories amid a small sample. Future studies should investigate further the family dynamics of sexual minorities over the life course, namely, the life courses of younger LGB individuals born after 1980, who are in the process of family formation within a rapidly changing context. Despite these limitations, we expanded existing research about family formation among sexual minorities by taking a novel life course approach that centers on the dynamic, complex, and diverse ways that partnership and parenthood evolve over the life course of LGB women and men. This study contributed by documenting cohort change in family formation among LGBs and how these changes are prolonged and unequal despite legal expansions and social changes. Our results have consequences for general narratives of demographic change over time.
Whereas changing social norms have made it easier for significant parts of the general population to "deviate" from the normative trajectory of early family formation, the same social changes might have facilitated coresidential partnering trajectories for the LGB population. We also expected parenthood trajectories to become more common across cohorts for the LGB population but found the opposite for the cohorts studied here. This seems to imply that sexual minorities continue to face obstacles to accessing alternative routes into parenthood, such as adoption and ARTs. These results illustrate that demographic change manifests differently for different groups within a society. Hence, existing narratives of family-related demographic change should explicitly consider how different groups within societies, such as sexual minorities, experience and react to social and demographic changes. ■
Narratives of demographic shifts overlook how societal changes shape the family trajectories of sexual minorities. Using sequence analysis, we describe how partnering and parenthood evolve over the life course of lesbian, gay, and bisexual (LGB) women and men in the United Kingdom (N = 455) and how the types of these family trajectories changed across two birth cohorts (born before 1965 and in 1965-1979). We find five distinct trajectories between ages 18 and 40, wherein two thirds of the sample belonged to a family trajectory that did not involve living with children. Partnership-centered trajectories became more common across cohorts, and this increase came at the expense of trajectories characterized by singlehood among gay men and lesbian women. However, parenthood trajectories became less common among all LGB groups. Furthermore, family trajectories became more complex across cohorts, including more transitions, which coincides with trends in the general population. Yet we also find that family trajectories became less diverse among lesbian women and bisexual men, in contrast to the trend among gay men and the general population. The results demonstrate the dynamic, complex, and diverse nature of LGB individuals' family lives and why existing narratives of family-related demographic change should explicitly consider sexual minorities in demographic narratives.
INTRODUCTION Changing demographics in the United States have been marked by several significant trends over the past few decades. One prominent trend is the increasing racial and ethnic diversity of the population. According to the U.S. Census Bureau, in 2020, the non-Hispanic White population accounted for 57.8% of the total population, a decline from previous years . This shift reflects a growing Hispanic and Asian population due to immigration and higher birth rates among these groups. As an example, Pew Research Center reported that the Asian population in the United States increased by 81% between 2000 and 2019, and the Hispanic population increased by 72% during the same period. Another significant demographic change in the United States is the aging of the population. The proportion of elderly individuals, aged 65 and older, has been steadily increasing. In 2020, the U.S. Census Bureau reported that 16.5% of the U.S. population was aged 65 and older . This trend is primarily driven by the aging Baby Boomer generation, which is entering retirement age. As a result, there are implications for healthcare, social security, and the labor force. Furthermore, there has been a shift in family structures and household compositions. The traditional nuclear family model is evolving, with an increase in single-parent households, cohabiting couples, and households with non-relatives. This shift is partly attributed to changing societal norms and economic factors . For example, Brown and Lin's study on family structure trends in the U.S. found that the percentage of children living in two-parent households decreased from 88% in 1960 to 69% in 2008. Economic disparities and income inequality are also significant demographic issues in the United States. The wealth gap between the top earners and the rest of the population has widened over the years. 
Piketty & Saez revealed that the income share of the top 1% of earners in the United States increased from around 10% in the 1970s to over 20% in the early 2000s. This economic inequality has profound implications for access to education, healthcare, and opportunities for upward mobility. Geographic population shifts are noteworthy in U.S. demographics. There has been a movement of people from rural to urban areas, resulting in metropolitan growth and the decline of rural populations . This trend is tied to economic opportunities, job availability, and access to services. For instance, Johnson and Lichter's research found that rural counties in the United States experienced a decline in population growth due to out-migration and lower birth rates compared to urban areas. The changing demographics of the United Kingdom have been a subject of significant research interest due to their profound implications for various aspects of society, including economics, healthcare, and public policy. According to Coleman , the UK has experienced a notable shift in its demographic structure over recent decades. This transformation is primarily characterized by an aging population, declining birth rates, and increasing life expectancy. For instance, in 2021, the Office for National Statistics reported that the median age in the UK had risen to 40.3 years, up from 35.9 years in 1981, reflecting a trend towards an older population. One of the key demographic trends in the UK is the aging population. This phenomenon is driven by a combination of factors, including lower birth rates and improved healthcare leading to longer life expectancy. For instance, according to ONS data , the proportion of people aged 65 and over in the UK increased from 15.9% in 1985 to 18.3% in 2020. This demographic shift has significant implications for healthcare services, social security systems, and pension schemes. 
As the population ages, there is a growing need for healthcare resources and eldercare facilities.

The UK's demographics are also undergoing changes in terms of ethnic diversity and immigration patterns. Simpson, Leckie, Abrams & Tuffin highlight that the UK's ethnic minority population has been steadily growing, driven by immigration and natural births among ethnic minority groups. For example, data from the ONS reveal that in 2020, 14.9% of the UK's population identified as non-White, compared to 9.1% in 2001. This shift has led to increased diversity in the workforce, cultural enrichment, and challenges related to multiculturalism and social integration.

Urbanization is another significant aspect of changing demographics in the UK. The migration of people from rural to urban areas has resulted in the concentration of population in major cities and metropolitan regions. According to Champion, this trend has implications for the distribution of resources, housing, transportation, and environmental sustainability. For instance, London, as the capital city, has experienced substantial population growth, which has put pressure on housing availability and transportation infrastructure.

The changing demographics of the United Kingdom, including an aging population, ethnic diversity, and urbanization, reflect ongoing transformations that have far-reaching consequences for various sectors of society. Policymakers, researchers, and institutions need to address the challenges and opportunities associated with these demographic shifts.

Japan is experiencing significant shifts in its demographic landscape, characterized by a rapidly aging population and a declining birth rate. These changes are evident in recent statistics, such as data from the United Nations Population Division, which reported that in 2019, Japan's population was approximately 126.5 million, with 28.1% aged 65 and over and a low fertility rate of 1.42 children per woman.
This trend of an aging society and declining birth rates presents unique challenges for Japan's economy, healthcare system, and social structure.

Japan's aging population is a significant demographic trend. The country has one of the highest life expectancies globally, with the World Bank reporting an average life expectancy of approximately 84 years in 2019. The increasing life expectancy, coupled with a declining birth rate, has led to a rapidly growing elderly population. This demographic shift has profound implications for the Japanese workforce, social security systems, and healthcare services, with a growing demand for elderly care and a shrinking working-age population.

Japan's declining birth rates are a key driver of its changing demographics. Data from the Ministry of Internal Affairs and Communications reveal that the number of births in Japan hit a historic low of 872,683 in 2019. This decline in births can be attributed to various factors, including changing social norms, economic pressures, and delayed marriages. As a result, Japan's family structure is evolving, with smaller family sizes and an increased proportion of elderly citizens, impacting intergenerational relationships and care responsibilities.

Japan's demographic challenges have also prompted discussions about immigration as a potential solution. While Japan has historically had strict immigration policies, there is increasing recognition of the need for immigrant labor to offset labor shortages in various sectors. However, Japan's immigration policies remain conservative, and it is essential to balance demographic concerns with cultural and societal factors. According to data from the Japan Immigration Services Agency, the foreign-born population in Japan was approximately 2.9 million in 2019, constituting about 2.3% of the total population.
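The population shares quoted above can be sanity-checked with simple arithmetic. The sketch below uses only the figures cited in this section (Japan's 2019 population of roughly 126.5 million, 28.1% aged 65 and over, and about 2.9 million foreign-born residents); the function and variable names are illustrative, not drawn from any cited source.

```python
# Sanity-check the Japanese population shares cited above.
# Figures are those quoted in the text (UN Population Division and
# Japan Immigration Services Agency, 2019); names here are illustrative.

def share(part_millions: float, total_millions: float) -> float:
    """Return `part` as a percentage of `total` (both in millions)."""
    return 100.0 * part_millions / total_millions

total_pop = 126.5        # total population, millions (2019)
aged_65_plus_pct = 28.1  # percent aged 65 and over, as cited
foreign_born = 2.9       # foreign-born residents, millions (2019)

# Implied head-count of the 65+ cohort, in millions.
elderly = total_pop * aged_65_plus_pct / 100.0
print(f"aged 65+: about {elderly:.1f} million")  # roughly 35.5 million

# Foreign-born share of the total population, which should reproduce
# the "about 2.3%" figure quoted in the text.
print(f"foreign-born share: {share(foreign_born, total_pop):.1f}%")
```

The two cited statistics are mutually consistent: 2.9 million out of 126.5 million is indeed about 2.3%, and 28.1% of the population corresponds to roughly 35.5 million people aged 65 and over.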
Japan's changing demographics, marked by an aging population, declining birth rates, and evolving family structures, are significant challenges that have far-reaching implications for the country's economy, healthcare, and social fabric. These demographic trends are likely to persist in the coming years, presenting both opportunities and challenges for policymakers and society at large. Addressing these challenges will require a multi-faceted approach, including potential reforms in immigration policies, investments in elderly care infrastructure, and efforts to support families and encourage higher birth rates.

Sub-Saharan Africa has been experiencing significant demographic changes in recent decades. One prominent trend is the region's population growth. According to Smith, Franklin & Bilsborrow, the population of Sub-Saharan Africa has been growing at an unprecedented rate. Between 2010 and 2015, the population increased by approximately 2.6% annually, with projections indicating that it will continue to grow substantially. This population growth is driven by factors such as high birth rates and declining mortality rates.

One of the key demographic characteristics of Sub-Saharan Africa is its youthful population. According to Lloyd, Cebotari & Becker, a substantial percentage of the population in many Sub-Saharan countries is under the age of 25. For example, in Nigeria, approximately 43% of the population is under 15 years old. This youth bulge has implications for education, employment, and social services.

Urbanization is another significant demographic trend in Sub-Saharan Africa. Grant & Yelvington highlighted the rapid urbanization occurring in the region. Cities such as Lagos in Nigeria and Nairobi in Kenya have experienced substantial population growth due to rural-to-urban migration. This trend poses challenges related to infrastructure, housing, and access to basic services.

Sub-Saharan Africa is known for its rich ethnic and cultural diversity.
This diversity is reflected in the population composition of countries like Ethiopia, where over 80 distinct ethnic groups coexist. Ethnicity plays a significant role in the region's politics, social dynamics, and identity. Understanding the demographic distribution of ethnic groups is crucial for addressing issues related to governance and conflict.

Life expectancy and health outcomes have improved in many Sub-Saharan African countries. For instance, Wang, Tesfaye, Ramana & Chekagn noted that life expectancy has increased in the region, thanks to advancements in healthcare and a decline in the prevalence of diseases like HIV/AIDS. However, there are still significant disparities within and between countries in terms of healthcare access and outcomes.

Nigeria, located in West Africa, has experienced significant changes in its demographics over the years. These demographic shifts are characterized by notable trends in population growth, age structure, urbanization, and ethnic diversity. According to Ukwuani & Suchindran, Nigeria's population has been steadily increasing, making it one of the most populous countries in Africa and the world. The population of Nigeria stood at approximately 140 million in 2006, and by 2019, it had surpassed 200 million, indicating a substantial increase in just over a decade. This rapid population growth has had profound implications for the country's social, economic, and political landscape.

One of the key demographic trends in Nigeria is its age structure, with a significant proportion of the population being young. The youth population, typically defined as individuals aged 15 to 34 years, is particularly prominent. As noted by the National Population Commission of Nigeria, approximately 42% of Nigeria's population falls within this age group. This youthful demographic brings both opportunities and challenges. On one hand, it presents a potential demographic dividend, where a young and productive workforce can drive economic growth.
On the other hand, it also places pressure on education, healthcare, and employment systems to accommodate the needs of a growing youth cohort.

Nigeria is known for its rich ethnic diversity, with over 250 different ethnic groups and languages. Urbanization has played a significant role in reshaping the country's demographics. According to a report by UN-Habitat, Nigeria has experienced rapid urbanization, with a growing percentage of its population residing in cities and urban areas. Lagos, for example, is one of the fastest-growing megacities in the world. This urbanization trend is driven by factors such as rural-to-urban migration, economic opportunities in cities, and the allure of urban amenities. The urbanization process has led to changes in living patterns, economic activities, and social dynamics, contributing to the evolving demographics of Nigeria, where urban centers are becoming increasingly diverse and cosmopolitan.

Migration patterns are complex and dynamic processes that involve the movement of people from one place to another, often influenced by various factors such as economic opportunities, social conditions, political changes, and environmental conditions. These migration patterns have a significant impact on changing demographics within regions and countries. In this conceptual analysis, we will explore the various dimensions of migration patterns and their interconnectedness with changing demographics.

Migration patterns encompass different types of migration, including internal migration within a country and international migration between countries. International migration, for example, involves the movement of people across national borders, which can result in shifts in population demographics as individuals from different cultural backgrounds and age groups settle in new regions. Demographic changes resulting from migration patterns are often seen in urbanization processes.
As people move from rural to urban areas in search of better economic prospects and improved living conditions, cities experience population growth and a change in their age structure. This shift can lead to increased urbanization rates and a higher proportion of working-age individuals in urban areas.

Migration patterns also play a crucial role in shaping the ethnocultural composition of regions and countries. The movement of people from diverse cultural backgrounds can contribute to the multiculturalism of a region. This demographic diversity can have both positive and challenging implications for social cohesion, cultural exchange, and policy development.

Furthermore, migration patterns are closely related to fertility rates and family structures. Migrants may have different family sizes and childbearing behaviors compared to the host population, leading to changes in the overall fertility rates of a region. The interplay between migration and fertility has implications for population growth and age distribution.

Economic factors are often at the core of migration patterns. People may migrate in search of job opportunities, higher wages, and improved economic prospects. These economic motivations can result in the concentration of specific industries or sectors in certain regions, affecting the labor force's skill composition and overall demographics.

Migration patterns also influence the spatial distribution of healthcare needs and services. As people move to different regions, the demand for healthcare facilities and services can shift, impacting the availability of healthcare resources in both sending and receiving areas. This dynamic has implications for healthcare planning and resource allocation.

Migration patterns are multifaceted processes that impact changing demographics in various ways.
They contribute to urbanization, influence ethnocultural diversity, shape family structures and fertility rates, have economic repercussions, and affect the spatial distribution of healthcare needs. Understanding the intricacies of migration patterns and their demographic consequences is essential for policymakers, urban planners, and researchers to address the challenges and opportunities associated with these dynamic phenomena.

--- Statement of the Problem

Nigeria has experienced profound shifts in its demographic landscape over recent decades. With its population surpassing 200 million in 2019, Nigeria faces a unique demographic challenge characterized by rapid population growth, a significant youth bulge, and growing urbanization. While these broad demographic trends are well-documented, there remains a critical gap in understanding the intricate relationship between migration patterns and these demographic changes. This study aims to address this gap by investigating how various forms of migration, such as rural-to-urban migration, international migration, and internal displacement, contribute to the evolving demographics of Nigeria. Furthermore, the study seeks to identify the demographic consequences of these migration patterns, shedding light on the implications for policymakers, urban planners, healthcare providers, and educators who are tasked with meeting the evolving needs of Nigeria's diverse and expanding population.

The findings of this study are expected to benefit a wide range of stakeholders. First and foremost, policymakers and government officials in Nigeria will gain valuable insights into the connections between migration and demographic shifts. These insights can inform evidence-based policies related to urban planning, infrastructure development, and social services allocation, ultimately helping the government address the challenges posed by a rapidly growing population.
Urban planners and healthcare providers will benefit from a better understanding of how migration patterns influence the spatial distribution of people and the demand for healthcare services in urban centers. Additionally, educators can use this research to adapt their approaches to the changing demographics, ensuring that educational systems are equipped to meet the needs of a youthful and diverse student population. Overall, this study aims to provide actionable knowledge that can guide decision-makers in effectively managing Nigeria's changing demographics in the face of ongoing migration dynamics.

--- LITERATURE REVIEW

--- Theoretical Review

--- Demographic Transition Theory

The Demographic Transition Theory, originated by Warren Thompson in the early 20th century, is a fundamental theory in demography that examines the relationship between population growth and socioeconomic development. This theory posits that as societies transition from pre-industrial to industrial stages of development, they undergo predictable shifts in their birth and death rates, leading to changes in population size and age structure. This theory is highly relevant to the study of "Migration Patterns and the Changing Demographics of Nigeria" because it helps to contextualize demographic changes in Nigeria within the broader framework of economic and social development. As Nigeria undergoes urbanization and industrialization, understanding how migration patterns are influencing demographic shifts, such as changes in birth and death rates, is critical for policymakers and researchers aiming to manage these transitions effectively.

--- Push and Pull Factors Theory

The Push and Pull Factors Theory, attributed to the works of Ravenstein in the late 19th century and later developed by multiple scholars, explains migration patterns by analyzing the factors that drive people to leave their place of origin and the factors that attract them to a destination.
This theory is highly relevant to the study of migration patterns in Nigeria as it helps elucidate why people move within the country or internationally. For example, rural-to-urban migration in Nigeria can be understood through the lens of push factors like lack of economic opportunities in rural areas and pull factors like the promise of employment and better living conditions in urban centers. Investigating these factors is crucial for policymakers aiming to address issues related to urbanization, population concentration, and resource allocation.

--- Dependency Theory

Dependency Theory, with roots in the works of scholars such as Raúl Prebisch and André Gunder Frank in the mid-20th century, focuses on the global economic system and the relationships between developed and developing countries. This theory posits that underdevelopment in developing nations is often a result of their economic dependence on and exploitation by more developed countries. In the context of "Migration Patterns and the Changing Demographics of Nigeria," the Dependency Theory is relevant for understanding how international migration patterns are influenced by economic disparities between Nigeria and destination countries. Many Nigerians migrate abroad in search of better economic opportunities, often driven by the economic imbalances between Nigeria and more developed nations. This theory helps researchers examine the role of economic factors in shaping migration patterns and their demographic consequences.

--- Empirical Review

Okeke, Ikegwuonu & Nwankwo investigated the educational outcomes and aspirations of Nigerian youth involved in rural-urban migration over a five-year period. The study combined surveys, focus group discussions, and educational performance data analysis to assess the educational consequences of youth migration. The research highlighted the challenges faced by migrating youth in accessing quality education and the need for targeted interventions.
The study recommended policy measures to support the educational needs of migrating youth, including skills development programs.

Adelekan & Oyedeji analyzed the relationship between urbanization, migration patterns, and changing household structures in Nigerian cities. The study employed spatial analysis, household surveys, and in-depth interviews to investigate the impact of migration on household composition in urban areas. The research revealed shifts in household structures, with smaller, more diverse households in urban centers influenced by migration. The study proposed urban planning strategies that consider evolving household structures and demographic needs.

Okafor & Nwosu investigated the impact of rural-urban migration on the age structure within urban areas of Nigeria. The researchers employed a mixed-methods approach, combining demographic data analysis with qualitative interviews to assess the dynamics of age distribution in these urban centers. The findings revealed a significant shift in age distribution patterns due to rural-urban migration, with a higher proportion of young adults and a decrease in the elderly population within the urban areas. This demographic transformation has important implications for urban planning, social services, and policymaking in Nigeria. The study suggests that policymakers should design and implement strategies that address the evolving age structure, emphasizing the need for infrastructure and services that cater to the unique needs of the burgeoning young urban population while also ensuring support for the elderly population.

Abubakar, Yakubu & Ahmed assessed the demographic consequences of refugee inflows from neighbouring countries in Nigerian border regions. The study combined census data analysis with field surveys in border areas, examining the effects of refugee influx on local population dynamics.
The research highlighted the strain on local resources and services due to refugee migration, affecting demographic balances. The study recommended coordinated efforts between government agencies and international organizations to address the demographic challenges posed by refugee inflows.

Ojo, Aluko & Salau investigated the gender-specific migration trends in Nigeria and their implications for demographic changes. To achieve this, they employed a comparative analysis approach, drawing on data from various sources, including national surveys and census records. The findings of the study revealed distinct patterns of migration between genders, with men predominantly engaging in rural-to-urban migration, while women exhibited higher rates of rural-to-rural and urban-to-rural migration. These gendered migration patterns were found to have significant consequences on demographic structures in different regions of Nigeria. As a result, the study recommends the development of gender-sensitive policies and programs that address the specific needs and challenges faced by both male and female migrants to foster more inclusive and equitable demographic changes in the country.

Ogunbode, Adekunle & Abdullahi examined the internal migration patterns within Nigeria and their impact on demographic changes. The study utilized longitudinal data from national surveys, incorporating statistical analysis and demographic modeling to track migration trends. The research revealed significant shifts in population distribution, with increasing urbanization and regional disparities in demographic profiles. The study recommended policy interventions to address urbanization challenges and ensure equitable demographic development.

Ajayi, Osunde & Afolabi examined the impact of international migration on Nigeria's demographics, with a focus on identifying trends and implications.
To achieve this, they employed a comprehensive methodology that involved the analysis of demographic data, migration statistics, and historical records. The findings of their research indicated a notable influence of international migration on Nigeria's population dynamics, including shifts in age distribution and gender ratios. Moreover, the study highlighted the potential consequences of these demographic changes, emphasizing the need for policy adjustments to address the challenges and opportunities posed by international migration in Nigeria. The authors recommended that policymakers consider these demographic trends and their implications when formulating strategies for sustainable development and migration management in the country.

--- METHODOLOGY

The study adopted a desktop research methodology. Desk research refers to the analysis of secondary data, that is, data that can be collected without fieldwork. Because it draws on existing resources, desk research is generally a low-cost technique compared to field research, with the main costs being the researcher's time, telephone charges, and directories. The study therefore relied on already published studies, reports, and statistics, which were easily accessed through online journals and libraries.

--- FINDINGS

This study identified both a contextual and a methodological gap. A contextual gap occurs when existing research findings provide a different perspective on the topic of discussion. For instance, Ojo, Aluko & Salau investigated the gender-specific migration trends in Nigeria and their implications for demographic changes. To achieve this, they employed a comparative analysis approach, drawing on data from various sources, including national surveys and census records.
The findings of the study revealed distinct patterns of migration between genders, with men predominantly engaging in rural-to-urban migration, while women exhibited higher rates of rural-to-rural and urban-to-rural migration. These gendered migration patterns were found to have significant consequences on demographic structures in different regions of Nigeria. As a result, the study recommends the development of gender-sensitive policies and programs that address the specific needs and challenges faced by both male and female migrants to foster more inclusive and equitable demographic changes in the country. The current study, in contrast, focused on exploring migration patterns and changing demographics in Nigeria as a whole.

Secondly, a methodological gap also presents itself. For example, Ojo, Aluko & Salau, in their study on gender-specific migration trends in Nigeria, employed a comparative analysis approach, drawing on data from various sources, including national surveys and census records, whereas the current study adopted a desktop research method.

--- CONCLUSION AND RECOMMENDATIONS

--- Conclusion

This study has provided valuable insights into the complex interplay between migration dynamics and demographic transformations within the Nigerian context. Over recent decades, Nigeria has experienced remarkable changes in its population structure, characterized by rapid population growth, a significant youth bulge, and increasing urbanization. This study has shed light on the multifaceted nature of these demographic shifts and their relationship with various forms of migration, both domestic and international.

Firstly, the research has highlighted the critical role of migration in shaping Nigeria's changing demographics. Factors such as rural-to-urban migration, international migration, and internal displacement have been identified as key drivers of demographic change.
For instance, the urbanization process, fueled by rural-to-urban migration, has contributed to the concentration of Nigeria's population in urban centers, resulting in shifts in living patterns, employment opportunities, and social dynamics.

Secondly, the study has emphasized the importance of understanding the demographic consequences of these migration patterns. Nigeria's youthful population presents both opportunities and challenges, and the research has elucidated how migration can impact age structure and population distribution. This knowledge is indispensable for policymakers, urban planners, educators, and healthcare providers as they grapple with the evolving needs of a diverse and growing population.

Furthermore, the study has uncovered the significance of international migration as a response to economic disparities between Nigeria and more developed countries, aligning with the Dependency Theory. Many Nigerians seek economic opportunities abroad, and this phenomenon has implications for both the Nigerian economy and the receiving countries. Understanding the economic underpinnings of international migration is crucial for informed policy development and bilateral cooperation.

In conclusion, this study of migration patterns and the changing demographics of Nigeria has not only identified the intricate connections between migration and demographic shifts but has also provided a comprehensive view of how these changes impact various facets of Nigerian society. The findings of this study are expected to benefit a wide range of stakeholders, including government officials, urban planners, healthcare providers, and educators, by informing evidence-based policies and strategies to address the challenges and opportunities posed by Nigeria's evolving demographics. As Nigeria continues to navigate its demographic transition, this research will remain instrumental in guiding informed decision-making and fostering sustainable development.
--- Recommendations

Policymaking and Urban Planning: One of the key recommendations from a study on migration patterns and changing demographics in Nigeria could focus on the need for evidence-based policymaking and urban planning. As migration continues to shape the demographic landscape, policymakers should consider the implications for resource allocation, infrastructure development, and social services provision. Recommendations might include the development of comprehensive urbanization and migration policies that take into account the unique challenges posed by rapid urban growth.

Data Collection and Monitoring: To inform effective policymaking and planning, it is crucial to recommend improvements in data collection and monitoring systems. Researchers could suggest the establishment of a robust and regularly updated database on migration patterns, demographic changes, and urbanization trends in Nigeria. This would aid in tracking changes over time, identifying emerging migration trends, and understanding their impact on various demographic parameters. Additionally, researchers could recommend the integration of geographical information systems and modern technology for data collection, allowing for real-time monitoring and analysis of migration patterns.

Community Engagement and Integration: A significant recommendation could focus on community engagement and integration strategies. As migration leads to increased diversity and urbanization, fostering social cohesion and inclusivity becomes paramount. Researchers might suggest initiatives that promote cultural exchange, tolerance, and integration among diverse populations in urban areas. Community-based programs, educational campaigns, and awareness-building efforts could help reduce potential social tensions and promote a sense of belonging among migrants and the host communities.
The main objective of this study was to investigate migration patterns and the changing demographics of Nigeria. The study adopted a desktop research methodology, relying on already published studies, reports, and statistics accessed through online journals and libraries. The findings revealed that there exists a contextual and methodological gap relating to migration patterns and the changing demographics in Nigeria.

Preliminary empirical review revealed the intricate relationship between migration dynamics and demographic transformations in Nigeria. It shows that migration, both domestic and international, plays a pivotal role in shaping Nigeria's evolving population structure, particularly in terms of rapid population growth, a significant youth population, and urbanization. The research emphasizes the importance of understanding the demographic consequences of migration patterns and the economic drivers behind international migration. Ultimately, the study's findings hold significance for policymakers, urban planners, educators, and healthcare providers as they grapple with the challenges and opportunities presented by Nigeria's changing demographics, guiding evidence-based decision-making and sustainable development efforts. The Demographic Transition Theory, the Push and Pull Factors Theory, and the Dependency Theory may be used to anchor future studies on changing demographics.

The recommendations from the study highlighted the importance of evidence-based policymaking, data collection, and community engagement.
Policymakers should prioritize comprehensive urbanization and migration policies that consider resource allocation and services for a diverse, youthful population. It is crucial to establish a robust database and employ modern technology for monitoring migration trends. Additionally, fostering social cohesion and inclusivity through community-based programs and cultural integration initiatives is essential for peaceful coexistence in evolving urban areas.
sensitive to group-based social hierarchies in their society. We focus in particular on children's perceptions of wealth, one observable aspect of social status. Determining whether children represent differences between the relative status of different racial groups is important given evidence that people tend to believe that the way things are is the way they ought to be. Observing status hierarchies in their society may lead children to see higher-status racial groups as more deserving of their status; even more insidiously, children may set their own aspirations according to their perception of their group's status in society. We focus on children in South Africa, a country with a long history of race-based status differences, including wealth disparities that persist to the present day.

--- Wealth as an Indicator of Social Status

Though status can take many forms, one especially important cue to the social status of individuals and groups is wealth. In nearly all societies, wealth and material resources distinguish groups from one another, and conflict over valuable resources can, and often does, ignite violence and civil war. However, wealth as a cue to status is understudied in research on child development, perhaps because wealth may not be seen as directly relevant to children: children do not typically possess much money, nor do they have a deep or accurate understanding of economics. Nevertheless, children are likely to observe indications of wealth every day, because wealth is used to purchase items that surround children in their daily lives. Unlike some of the more symbolic cues to status, such as political power or family lineage, wealth can be inferred from direct observations of higher- and lower-value personal belongings, and thus may serve as a particularly important indicator as children learn about the relative status of the individuals and groups in their society.
Previous research suggests that children show some awareness of cues that signify differences in wealth. For example, 3- to 5-year-old children can use visual cues to differentiate "rich" and "poor" people; first-grade children assign people depicted as rich to fancier cars and houses; and by nine years of age children endorse stereotypes that rich people are better than poor people in domains such as academics and music. Research has also shown that children and adolescents understand some of the social factors that contribute to wealth and poverty. --- Children's Use of Group Differences in Social Status Previous research provides some evidence that children are sensitive to the relative status of social groups in their environment. For example, in an experimental manipulation of group status, elementary school-aged children were assigned to novel groups depicted as consistently high- or low-status in a variety of domains. When teachers used group labels and distinctions to organize the classroom, children arbitrarily assigned to the higher-status novel group liked their own group more than children assigned to the lower-status group liked their group. This pattern of preference for high- over low-status novel groups provides evidence that children are sensitive to some cues to the relative status of groups, and that these cues can guide children's intergroup attitudes. However, since previous studies of children's perceptions and preferences based on social group status have used experimenter-defined novel groups rather than familiar groups, it remains to be seen how these findings apply to children's understanding of group status in their everyday lives. Decades of research on racial attitudes provide indirect evidence that children are sensitive to differences in the status of groups in the real world.
In a wide variety of settings, children who are members of higher-status racial groups show more ingroup favoritism than children from lower-status racial groups, at least prior to middle childhood . For example, White American children in preschool and elementary school show more ingroup favoritism than Blacks or Hispanics ; White Canadian children show more ingroup favoritism than First Nations children ; White New Zealander children show more ingroup favoritism than Maori children ; White British children show more ingroup favoritism than West Indian or Asian children ; and White South African children show more ingroup favoritism than Black South African children, who show no ingroup favoritism . The widespread documentation of preference asymmetries in early childhood suggests that children's social attitudes early in life may be influenced by the relative status of groups in their society. However, it is important to note that some research shows that patterns of racial preferences among low-and high-status children can change during middle childhood . While much of the previous research highlights differences in children's evaluations of their own ingroup relative to an outgroup, researchers have observed status-consistent evaluations of children's outgroups as well. Children from lower-status racial groups generally prefer higher-status racial groups over other lower-status groups. For example, Black American children favor Whites over Hispanics ; Hispanic American children favor Whites over Blacks ; Asian British children favor Whites over West Indians, and West Indian British children favor Whites over Asians ; and Coloured South African children favor Whites over Blacks . Similarly, children from high-status racial groups favor other higher-status racial groups over lower-status racial groups. 
For example, White Australian children favor Asians over Aborigines ; Taiwanese children favor Whites over Blacks ; and White South African children favor Coloureds over Blacks . Taken together, this large body of work suggests that children may be sensitive, at least implicitly, to the relative status of racial groups, as indicated by their relative preference for higher-status racial groups on attitude measures. A handful of studies provide more direct evidence that children associate race and social status. Bigler, Averhart, and Liben investigated Black American children's perceptions of racial differences in occupational prestige. Black children reported that Whites were more likely to have familiar higher-status jobs and that Blacks were more likely to have familiar lower-status jobs . Most provocatively, when children were introduced to a novel occupation depicted with a Black person, children thought that this occupation was lower in status than when they were introduced to the same novel occupation depicted with a White person . These results demonstrate that at least some children think that there are differences in the relative occupational prestige of racial groups. Two previous studies investigated whether American children believe there to be a relation between race and wealth. Zinser, Rich, and Bailey asked children directly whether a particular White or Black child was poorer. Third and fifth grade children systematically believed the Black child to be poorer, but preschoolers and first grade children showed no such belief . In the study closest to the present research, Radke and Trager asked a group of kindergarten, first grade, and second grade American Black and White children to match Black and White paper dolls with indicators of differential status . They observed that both Black and White children tended to assign Whites to high-status neighborhoods and Blacks to low-status neighborhoods, but did not make systematic associations for clothing. 
While no statistics were reported, the authors indicated that the degree to which children showed this matching pattern was related to the degree to which they favored Whites on a measure of racial attitudes. The present studies investigate whether children in modern-day South Africa show similar associations between race and wealth. --- South Africa The possibility that children could observe and represent a connection between race and wealth is especially likely in a country like South Africa, where the present research was conducted. South Africa has a history of carefully delineated status distinctions marked by racial group membership. The apartheid policies implemented in South Africa from 1948 to 1994 explicitly laid out the hierarchy of racial groups, determining the distribution of almost all aspects of social status: wealth; job prospects; political power; and access to better education, transportation, and health care. While the official apartheid policies have been abolished for almost two decades, and while some aspects of status have changed, large wealth disparities between racial groups remain. The average annual household income for Whites in South Africa was approximately $38,000 in 2005-2006, whereas the average annual income for Coloureds was approximately $11,000, and for Blacks was approximately $5000. Income disparities between South African racial groups have cascading effects. Whites, Coloureds, and Blacks tend to live in different types of housing: while 95.1% of Whites and 85.7% of Coloureds in South Africa live in "formal dwellings", only 55.5% of Blacks do, with most of the rest of the Black population living in "traditional" or "informal dwellings". Household possessions also differ systematically by racial group: a White household is more likely to own a telephone, a television, or a car than a Coloured household, which is in turn more likely to own these items than a Black household.
In addition, Whites have higher levels of educational attainment than Coloureds or Blacks . Because of the strong covariation between racial group membership and concrete cues to wealth in South Africa, research in South Africa affords a unique opportunity to investigate children's developing knowledge about the relation between status and racial groups within their society. Recent research conducted in South Africa provides evidence that children favor White and Coloured people over Black people, regardless of their own group membership . Such findings may have been expected during the apartheid years when the South African government explicitly favored White people over people from other racial groups. However, the observed pro-White bias is somewhat surprising in modern-day South Africa -a country where Blacks are more numerous than Whites, where the government promotes equality and tolerance, and where White people do not hold disproportionate political power . South Africa is a particularly interesting place to study children's perceptions of correlations between wealth and race because Whites, who hold the most wealth, constitute a statistical minority group. Thus, unlike in the U.S., children in South Africa cannot come to veridical inferences about average wealth by simply noting the group that is most familiar, most numerous, and highest in political power. The current studies were designed to help explain the continued presence of South African children's racial preferences by exploring whether South African children might be sensitive to one facet of social status that is still overwhelmingly apparent in South Africa: racial differences in average levels of wealth. Specifically, the present studies test whether children represent a strong relation between wealth and race. 
Despite our prediction that children are sensitive to the relation between race and wealth, there are some reasons to suspect that South African children may not think that Whites are the wealthiest group. Because of a large number of indicators of recent success by South African Blacks, children may think that Blacks are higher in wealth because many Blacks appear in visible, high-status positions-most notably, current South African President Jacob Zuma, but also notable political and social figures such as Nelson Mandela and Bishop Desmond Tutu, as well as popular musicians or athletes such as Miriam Makeba or Lucas Radebe. There has also been a recent increase in the number of middle-class Black South Africans; this could give children the impression that there are no longer many poor Blacks. Alternatively, children may be egocentric and think that their group is the wealthiest, irrespective of the actual status of their own ingroup; this would be consistent with previous research suggesting that children often have positively-skewed views of their own groups. --- The Present Research The present studies investigated whether children are aware of racial group differences in wealth. We explored this question in South Africa, a country where the association between race and status is particularly strong, and therefore potentially salient even to young children. Despite the potential alternative hypotheses discussed above, we hypothesized that children would be aware of the specific racial status hierarchy in their country, and that this awareness may develop in early childhood; this hypothesis is consistent with the majority of the social attitudes literature to date. To test this hypothesis, 3- to 10-year-old children were presented with depictions of higher and lower wealth and asked to match these pictures with people from different racial groups.
Through this matching paradigm, we were able to assess whether children reliably associate individuals of particular racial groups with higher wealth. Children aged 3-10 years were selected because previous work suggested that South African children in this age range show racial attitudes that align with the dominant racial hierarchy-favoring Whites the most, then Coloureds, then Blacks . --- Study 1 Study 1 investigated whether South African children are aware of wealth differences among racial groups. In addition to assessing whether children differentially associate racial groups with indicators of wealth, children's racial attitudes were assessed for comparison with previous research in South Africa . We predicted that participants would show a preference for higher-status racial groups irrespective of participants' own racial group membership. --- Method Participants-Sixty-four children aged 4-10 years at a racially diverse school in Cape Town, South Africa, participated in this study. The demographics of the school are largely reflective of the Cape Town region . All children participated in the study with a South African experimenter of their same race. Four children did not complete the preference task, and one child did not complete the matching task; these children are excluded from the relevant analyses. Materials and Design-The matching task and the preference task each consisted of 12 trials. All trials featured photographs of two South African people who differed in race but not gender presented on a computer screen. The lateral positions of faces of each race were counterbalanced across trials. Pairs were equated for approximate age, attractiveness, and expression. The 12 pairs of photographs in each task included 6 pairs of child faces and 6 pairs of adult faces . Two pairs in each age group featured a White face and a Black face, two pairs featured a White face and a Coloured face, and two pairs featured a Coloured face and a Black face. 
Stimuli were blocked by face age, but the order of different racial pairs and genders within each block was randomized in one of four orders. Block order, lateral positions of faces of different races, positions of the photographs of high- vs. low-value belongings, and which set of 12 photographs was used in the preference vs. matching tasks were counterbalanced across participants. The matching task also included hard copies of photographs of high- and low-value houses and cars. These photographs came from South African websites or were taken by the research team in the Cape Town area. South African research assistants confirmed the authenticity and typicality of the selected items. High-value houses were large, fancy houses likely to be occupied by someone of the highest social class. Low-value houses were small houses or shacks, likely to be occupied by someone from the working class. Similarly, high-value cars looked fancy and new, and generally would be owned by someone from a higher social class. In contrast, low-value cars were older and looked heavily used, and generally would be owned by someone from a lower social class. Examples can be seen in Figure 1. Procedure-Participants completed two tasks: a matching task and a preference task. The order of tasks was counterbalanced across participants. Children were seated at a laptop computer for the duration of the study. For each trial of the matching task, the computer displayed two photographs of people who varied in racial group membership, and the experimenter presented two printed photographs of either houses or cars. The photographs of people were presented laterally on the screen and the houses or cars were presented one above the other on the laptop keyboard. Children were asked to indicate which person lived in which house, or which person rode in or drove which car, by placing the photograph of the house or car below or next to the images of the people on the screen.
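As a concrete illustration of the counterbalanced design just described, the 12-trial matching-task structure (two face-age blocks, each crossing the three racial comparisons with target gender, with order randomized within block) can be sketched as follows. This is a simplified reconstruction under our own labels, not the authors' actual stimulus script:

```python
# Hypothetical reconstruction of the 12-trial structure: 2 face-age
# blocks x 3 racial pairings x 2 genders, shuffled within each block.
import itertools
import random

random.seed(0)  # any seed; the real study used four fixed block orders

racial_pairs = [("White", "Black"), ("White", "Coloured"), ("Coloured", "Black")]

trials = []
for face_age in ("child", "adult"):  # stimuli were blocked by face age
    block = [
        {"face_age": face_age, "pair": pair, "gender": gender}
        for pair, gender in itertools.product(racial_pairs, ("male", "female"))
    ]
    random.shuffle(block)  # trial order randomized within each block
    trials.extend(block)

# 12 trials total: each racial comparison appears twice per face-age block.
```

In this sketch each of the three racial comparisons appears four times overall, twice per face-age block, which matches the pair counts described in the text.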
Pilot testing indicated that presenting the individual faces on the computer screen and allowing children to manipulate the physical pictures of the houses and cars was a highly engaging task for participants that also minimized the necessary dexterity required of our young participants. For each trial of the preference task, the computer displayed two photographs of people who varied in racial group membership. Children were then asked to indicate whom they liked more by pointing to that person on the computer screen. --- Results Data Preparation-Scores for the matching task were computed for each racial comparison by calculating the percentage of time that children matched the person from the higher-status racial group with the higher-value house or car. Scores for the preference task were computed for each racial comparison by calculating the percentage of time children said they preferred the member of the higher-status racial group. In addition to the specific comparisons, overall "High-Status Matching" and "High-Status Preference" composites were computed by averaging the three racial comparison scores, which ranged from 0 to 100. There was no suggestion of a relation between age and the tendency to associate higher-value belongings with higher-status races, r = .12, p > .35, nor between age and the tendency to favor higher-status races, r = .06, p > .60. Additionally, there were no significant effects of order on the matching results or the preference results, ps > .10. As such, age and order were not included in the following analyses, unless noted. Primary Results-Consistent with our hypothesis, participants associated higher-value belongings with higher-status groups overall, chance = 50%; M = 81%, one-sample t-test: t = 15.24, p < .001. Participants were more likely to match higher-value belongings with Whites than with Blacks, as compared to chance, M = 87%, one-sample t-test: t = 13.95, p < .001.
Participants were also more likely to match higher-value belongings with Whites than with Coloureds, M = 76%, t = 7.94, p < .001. Finally, participants were more likely to match higher-value belongings with Coloureds than with Blacks, M = 79%, t = 10.00, p < .001. None of these associations varied by participant race, all ps > .15, though see Table 1 for means by participant race. Though the sample size for some racial groups was small, all racial groups showed the same tendency to match higher-value belongings with higher-status races. Consistent with previous work, there was an overall tendency for participants to prefer higher-status racial groups over lower-status racial groups, chance = 50%; M = 73%; one-sample t-test: t = 10.10, p < .001. Participants preferred Whites relative to Blacks, M = 81%, t = 11.50, p < .001; Whites relative to Coloureds, M = 67%, t = 4.51, p < .001; and Coloureds relative to Blacks, M = 71%, t = 5.40, p < .001; one-sample t-tests. These preferences did not significantly differ by participant race, as indicated by one-way ANOVAs, all ps > .15, though see Table 2 for means by participant race. If children's knowledge of systematic racial differences in wealth contributes to the development of racial attitudes, individual differences in performance on the matching task might correlate with racial attitudes. A marginally significant relation between these measures was observed, suggesting that the more a given child believed higher-status races to have higher-value belongings, the more that child expressed a preference for higher-status racial groups, r = .24, p = .073. To explore the apparent absence of age effects, the 18 children who were 4-6 years of age were analyzed separately. This young sample still demonstrated a significant tendency to associate higher-value belongings with higher-status races, t = 6.83, p < .001, and a significant preference for higher-status races, t = 5.10, p < .001.
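The composite scoring and one-sample t-tests reported above follow a standard pattern that can be sketched briefly. The responses below are simulated for illustration only (the study's raw data are not reproduced here), so the resulting statistics will not match the reported values:

```python
# Sketch of the "High-Status Matching" scoring and the one-sample
# t-test against the 50% chance level, on simulated (not real) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_children, n_trials = 60, 12
# 1 = child paired the higher-value belonging with the higher-status
# face on that trial, 0 = otherwise (simulated with an 80% match rate).
responses = rng.binomial(1, 0.8, size=(n_children, n_trials))

# Per-child composite: percentage of high-status matches, 0 to 100.
scores = responses.mean(axis=1) * 100

# One-sample t-test of the composites against chance (50%).
t, p = stats.ttest_1samp(scores, 50)
print(f"M = {scores.mean():.0f}%, t = {t:.2f}, p = {p:.2g}")
```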
Figure 2 shows the results of the matching task and preference task by age group, comparing the 4- to 6-year-old children to the 7- to 10-year-old children. Supplementary Results-In addition to the primary analyses of interest, we explored several supplementary questions. To reduce the likelihood of finding significant results due to chance, a Bonferroni correction was applied to these analyses. To assess whether effects were consistent across stimulus types, separate analyses for each type of stimuli used to represent race were conducted, using a Bonferroni correction for the number of tests conducted. In the matching measure, children showed significant tendencies to match higher-status races with higher-value belongings for male pairs, M = 78%, t = 9.96, p < .001; female pairs, M = 81%, t = 11.90, p < .001; child pairs, M = 74%, t = 8.06, p < .001; and adult pairs, M = 85%, t = 14.55, p < .001. Similarly, on the preference measure, participants preferred the higher-status racial group for all stimuli used to represent race: male pairs, M = 73%, t = 7.83, p < .001; female pairs, M = 73%, t = 7.89, p < .001; child pairs, M = 70%, t = 8.29, p < .001; and adult pairs, M = 75%, t = 9.25, p < .001. Because the income disparity between Whites and Blacks is larger than the income disparity between Whites and Coloureds or between Coloureds and Blacks, we also predicted that the tendency to match the higher-status group with the higher-value belongings would be most pronounced on trials featuring one White face and one Black face, as compared to trials featuring comparisons between White and Coloured faces or Coloured and Black faces. This hypothesis was tested with a paired-sample t-test, using a Bonferroni correction for the number of tests conducted. The tendency to associate Whites with higher-value belongings when the comparison group was Blacks was indeed larger than when the comparison group was Coloureds, t = 3.08, p = .003.
The tendency to associate Whites with higher-value belongings than Blacks was not significantly larger than the tendency to associate Coloureds with higher-value belongings than Blacks, t = 2.22, p = .030, using this conservative correction. --- Discussion Consistent with our prediction, South African children in Study 1 indicated that members of higher-status races were more likely to live in fancy houses and drive fancy cars than were members of lower-status races. Interestingly, there was no relation between participant age and the demonstrated association between higher-value belongings and higher-status races. In fact, the subset of children aged 4-6 years showed a strong tendency to match higher-value belongings with higher-status races, suggesting that this tendency emerges by the early primary school years. Thus, whether through cultural learning of stereotypes, media portrayals, or incidental observation, knowledge of this association appears early, and seems not to strengthen or weaken throughout the primary school years. Similarly, we found no age-related changes in explicit attitudes across our wide age range, despite previous work demonstrating a developmental change in other cultural contexts. A recent meta-analysis demonstrated that children from higher-status groups show a shift from robust ingroup favoritism to no ingroup preference, while children from lower-status racial groups show a shift in the opposite direction from no explicit preference to a strong ingroup preference. While these patterns are provocative, such shifts in ingroup attitudes have not been observed in recent studies of racial attitudes in South African children aged 3-13 years. That said, because our sample size was small and our age range was large, future work might investigate whether subtle age differences exist that we did not have enough power to detect.
We hypothesized that children would match higher-value belongings with higher-status races on account of a learned specific association between race and wealth. However, this pattern of results could also be explained by appeal to a different mechanism: perhaps children see no specific association between race and wealth, but instead rely on their preferences to guide their matching. That is, participants may have matched their favored items with their favored racial groups , without any a priori theory about a specific relation between race and wealth. To assess the validity of the latter explanation we conducted a control study testing children's expectations about gender, rather than race. --- Study 2 Decades of research on children's gender attitudes suggest that children show strong preferences for others of their own gender . If performance on the matching task in Study 1 was guided by social preferences, children in Study 2 might match higher-value items with people of their own gender. However, if children's matching performance in Study 1 was guided by beliefs about the relative status of specific groups in their society, children might not match higher-value items with people of their own gender, since there is no reason to suspect that children have learned that their own gender is wealthier than the other gender. While children in a racially diverse town or school might regularly see covariation between race and wealth, they probably do not see covariation between gender and wealth, since boys and girls come from the same neighborhoods and even the same houses. Thus, we predicted that children would show little to no systematic matching of higher-value items with own-gender faces, despite showing a strong preference for own-gender faces. --- Method Participants-Fifteen Coloured participants who had completed the race version of Study 1 completed this study on a subsequent day within the same week. 
Unfortunately, due to constraints in testing, not all participants from Study 1 completed Study 2. However, the subset of Study 1 participants who completed Study 2 did not differ from those who did not complete Study 2 on either the race-preference or race-matching tasks, suggesting they were representative of the initial sample. Materials, Design, and Procedure-The procedure for the matching and preference tasks was identical to the procedure of Study 1, with the following changes: each trial of each task included one male and one female who did not differ in race. As in Study 1, half of all trials featured pictures of children and half featured pictures of adults. Four trials featured photographs of Coloured people, four trials featured photographs of Black people, and four trials featured photographs of White people. The lateral position of faces of the two genders was counterbalanced across trials. Block order, lateral positions of faces of different genders, and which set of 12 photographs was used in the preference vs. matching tasks were counterbalanced across participants. --- Results Data Preparation-Composite scores similar to those in Study 1 were created to assess "Same-Gender Matching" and "Same-Gender Preference." Age was not significantly correlated with the tendency to prefer one's own gender, r = -.01, p > .90, or with the tendency to associate higher-value belongings with one's own gender, r = -.02, p > .90. Primary Results-Consistent with decades of research, participants showed a same-gender over other-gender preference: they selected same-gender peers 72% of the time, one-sample t-test: t = 6.16, p < .001. This tendency did not differ as a function of participant gender, p > .60, though see Table 2 for means by gender. As the table indicates, both boys and girls showed the same general patterns of response.
Importantly, participants' overall preference scores for same-gender targets did not differ in magnitude from their preference scores for higher-status races in Study 1, p > .50, making comparison between matching performance in Studies 1 and 2 straightforward. Children associated higher-value belongings with members of their own gender more often than chance, M = 63%, t = 3.11, p = .008. This tendency did not differ as a function of participant gender, p > .60, though see Table 3 for means by gender. In contrast to the race-preference task and gender-preference task, where performance was nearly identical, participants in Studies 1 and 2 did differ in their performance on the race-matching and gender-matching tasks, t = 4.56, p = .001, paired t-test. As illustrated in Figure 3, participants matched higher-value belongings with higher-status racial groups more than with their preferred gender group. Supplementary Results-Further analyses focused on the robustness of children's preferences and matching tendencies across different types of stimuli. Participants preferred people of their own gender for all stimuli used to represent gender, with a Bonferroni correction for the number of tests conducted: White pairs, M = 68%, t = 3.21, p = .006; Coloured pairs, M = 77%, t = 5.87, p < .001; Black pairs, M = 72%, t = 3.66, p = .003; child pairs, M = 74%, t = 4.79, p < .001; adult pairs, M = 70%, t = 4.58, p < .001. Tendencies to match higher-value belongings to children's own gender as a function of the stimuli used to represent gender are not all significant when we apply a Bonferroni correction for the number of tests conducted: White pairs, M = 62%, t = 1.83, p = .089; Coloured pairs, M = 62%, t = 2.17, p = .048; Black pairs, M = 67%, t = 2.32, p = .036; child pairs, M = 66%, t = 2.82, p = .014; adult pairs, M = 61%, t = 2.09, p = .055. This suggests that the tendency to associate one's own gender with higher-value belongings is not robust.
Comparing Studies 1 and 2-It is possible that children failed to show as strong an association between their own gender and wealth as they showed between race and wealth because they believed that one gender is higher in status than the other gender. However, there was no evidence that this was the case; children were just as likely to associate higher-value belongings with females as with males. A second way to ask whether children more strongly associate race with wealth than gender with wealth is to conduct nonparametric analyses on individual-level data. Of the participants included in Study 2, the majority matched higher-status racial groups with higher-value belongings on at least 10 out of 12 trials, yielding above-chance performance for each of those children individually. According to the same criterion, only 7% of participants consistently associated their own gender with higher-value belongings. A Wilcoxon Signed Ranks Test confirmed that significantly more participants showed a significant tendency to associate race with wealth than gender with wealth, Z = 3.16, p = .002. --- Discussion Study 2 provides evidence that children's strong tendency to associate higher-value belongings with higher-status races, as demonstrated in Study 1, cannot be explained by a simple strategy of matching preferred, higher-value items with preferred groups. Although children in Study 2 did match higher-value belongings with their preferred gender, they did so to a much lesser extent than they did with preferred racial groups in Study 1. While the sample size of this study was small, this difference between the treatment of race and gender was remarkably robust; in fact, when analyzing individual children's performance, only 7% of participants completing both tasks showed a significant association between their own gender and wealth, compared to 79% who showed a significant association between race and wealth.
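The individual-level criterion and the nonparametric comparison described above can be sketched as follows. The per-child counts are simulated (the match rates are our own assumptions, chosen only to mimic the reported asymmetry), so the test statistics will not reproduce the paper's values; the binomial calculation, however, shows why 10 or more matches out of 12 counts as above-chance for a single child:

```python
# Sketch of the individual-level analysis: a one-tailed binomial
# criterion per child, then a Wilcoxon signed-rank test on paired
# per-child counts. Data are simulated, not the study's records.
import numpy as np
from scipy import stats

# Why >= 10 of 12 trials is above chance for one child:
# P(X >= 10 | p = .5) = (66 + 12 + 1) / 4096, about .019 < .05.
p_10_of_12 = stats.binomtest(10, 12, 0.5, alternative="greater").pvalue
print(f"P(>=10/12 under chance) = {p_10_of_12:.3f}")

rng = np.random.default_rng(1)
n = 15  # participants who completed both tasks
race_matches = rng.binomial(12, 0.85, size=n)    # assumed race-task rate
gender_matches = rng.binomial(12, 0.60, size=n)  # assumed gender-task rate

# Paired nonparametric comparison of the two tasks per child.
res = stats.wilcoxon(race_matches, gender_matches)
print(f"Wilcoxon statistic = {res.statistic}, p = {res.pvalue:.3f}")
```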
Although the comparison of Studies 1 and 2 suggests that children assign race to wealth more systematically than gender, the methodology employed in these studies did not allow us to pit race and gender directly against one another within the same task. Moreover, participants did not have the option to indicate that no one owned a lower-value belonging, and all participants completed the race tasks before the gender tasks, perhaps priming children to think about these tasks in terms of race. --- Study 3 Study 3 was designed to be a second and more stringent test of our hypothesis that children associate wealth with race more strongly than with gender. In Study 3, targets were presented individually and participants were asked to pair each target person with either a higher-or lower-value belonging. Responses to each target could therefore be analyzed both by race and by gender to determine which feature children weighed more heavily when making inferences about wealth. Additionally, this method allowed children to match every target person, irrespective of race or gender, with higher-value belongings, rather than requiring children to match some people with lower-value belongings. --- Method Participants-Twenty children aged 3-10 years participated in the study . Procedure-Children were seated at a laptop computer for the duration of the study. There were 12 trials, and each trial featured a single child target; there were an equal number of targets from each racial group , and an equal number of male and female targets in each racial group . Trials were presented in a single randomized order to all participants. On each trial, the computer displayed the photograph of the target child; then, the experimenter presented printed photographs of two personal belongings differing in value . Lateral position of the higher-and lower-value belongings was counterbalanced across trials. 
Participants were asked to indicate which house the target child lived in or which car the target child rode in; pairings of specific targets to houses vs. cars were counterbalanced across participants. --- Results-Collapsing across target race and gender, participants were more likely than chance to match higher-value rather than lower-value belongings to target people, M = 69% to high status, t = 4.35, p < .001. The strength of this effect differed as a function of the racial group of the target face, F = 13.64, p < .001, repeated-measures ANOVA. Higher-value belongings were matched with White targets 89% of the time, significantly more often than chance, t = 11.46, p < .001, and with Coloured targets 70% of the time, again more often than chance, t = 2.79, p = .012; but higher-value belongings were matched with Black targets only 49% of the time, which did not differ from chance, t < 1, p > .80. Higher-value belongings were associated with White targets more often than with Coloured targets, t = 2.26, p = .036, or with Black targets, t = 5.62, p < .001, and with Coloured targets more often than with Black targets, t = 2.82, p = .011; see Figure 4. As discussed above, these data could also be analyzed according to gender. Higher-value belongings were matched with both own-gender and other-gender targets more often than chance, own-gender: t = 4.65, p < .001; other-gender: t = 2.67, p = .015; see Figure 4. There was a trend to match higher-value belongings with own-gender targets more often than with other-gender targets, but this trend was only marginally significant, t = 1.74, p = .097. --- Comparisons of Race and Gender: Higher-value belongings were matched with White targets more often than with same-gender targets, t = 2.68, p = .015, or other-gender targets, t = 4.71, p < .001, as indicated by paired t-tests. Higher-value belongings were matched with Black targets less often than with same-gender targets, t = 5.28, p < .001, or other-gender targets, t = 3.04, p = .007.
Higher-value belongings were as likely to be matched with Coloured targets as with same-gender targets, t = 0.75, p > .40, or other-gender targets, t = 1.06, p > .30. --- Discussion The results of Study 3 parallel those from Studies 1 and 2. In Study 3, children showed a strong tendency to associate higher-value belongings with higher-status races, and only a marginally significant tendency to associate higher-value belongings with their own gender. The tendency to associate wealth with race more than with gender was especially pronounced in comparisons of White vs. Black targets: children strongly associated higher-value belongings with Whites, and they were less likely to associate higher-value belongings with Blacks than with any of the other racial or gender groups. One interesting and unexpected pattern observed in Study 3 was that children were equally likely to assign Blacks to higher-value and lower-value houses. While our previous findings suggested that children might systematically assign Blacks to lower-value houses, these results suggest that responses in the forced-choice design employed in Studies 1 and 2 may have been driven primarily by the belief that Whites and Coloureds live in higher-value houses, rather than the belief that Blacks live in lower-value houses. Alternatively, children may have been generally hesitant to indicate that anyone owns lower-value belongings, a claim supported by the fact that across all targets children assigned most people to higher-value belongings. Children's chance performance may therefore indicate a conflict within children: on the one hand, a tendency to match Blacks with lower-value belongings because of a belief that Blacks tend to live in lower-value houses or ride in lower-value cars; on the other hand, a reluctance to assign low-value items to anyone. Future work might assess whether children associate Blacks with other indicators of lower social status, and ask children to provide reasons for their choices.
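The chance comparisons reported above are one-sample t-tests of matching proportions against 50%. A minimal sketch with illustrative proportions (not the study's data); the t statistic is computed by hand rather than with a statistics package.

```python
import math

# Hypothetical per-child proportions of trials on which higher-value
# belongings were matched to a given target group (illustrative only).
props = [0.9, 1.0, 0.85, 0.8, 0.95, 0.9, 1.0, 0.75, 0.9, 0.85]

def one_sample_t(sample, popmean):
    """t statistic testing whether the sample mean differs from popmean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # unbiased variance
    return (mean - popmean) / math.sqrt(var / n)

t = one_sample_t(props, 0.5)  # chance level = 50%
print(f"t({len(props) - 1}) = {t:.2f}")
```

A large positive t indicates matching above chance; the reported comparisons between racial and gender groups would use the paired analogue of this test on the same per-child proportions.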
Due to the small sample size and the large age range, we urge caution in interpreting the results of Study 3. Two limitations of running our studies in this Cape Town school were the limited amount of time children could participate in these studies and the limited number of participants available; after running participants in the primary and control studies, there were relatively few children available to participate in Study 3. However, the convergence of the results across all three studies, including a much larger sample in Study 1, serves to allay some of these concerns. --- General Discussion Taken together, the findings from the present studies indicate that South African children associate particular racial groups with different levels of wealth, one salient aspect of social status. Participants paired higher-value belongings with members of higher-status racial groups and paired lower-value belongings with members of lower-status racial groups. When thinking about wealth, children made consistent use of racial group membership to a greater extent than another salient social category, gender. The most parsimonious explanation for this pattern of results is that children, irrespective of their own group membership, held a consensual view about who is wealthier. Participants did not indicate that South Africa's largest group was the wealthiest, nor did they indicate that their own racial group was the wealthiest. This finding builds on previous research on children's attitudes suggesting that children might have knowledge of racial groups' relative social status, as well as research showing that children link race with indicators of social status. Importantly, parallel results were found across two methods of assessing the association between race and wealth. In Study 1, children were asked to match pairs of belongings denoting different levels of wealth with pairs of individuals differing in race.
Children were more likely to associate higher-value belongings with Whites than with Coloureds or Blacks; children were also more likely to associate higher-value belongings with Coloureds than with Blacks. In Study 3, a more conservative approach was applied that did not require children to associate any targets with lower-value belongings: children were asked to match either higher- or lower-value belongings with target individuals presented one at a time. While these children had an overall tendency to associate people with higher-value belongings, they did so to varying degrees for each racial group: consistent with the first study, higher-value belongings were most likely to be paired with Whites, followed by Coloureds, followed by Blacks, who were just as likely to be matched with higher-value belongings as lower-value belongings. Even with this more conservative approach, children conformed to a clear racial hierarchy when associating racial groups with indicators of wealth. Race appears to be a social category that is particularly deeply associated with differences in wealth. In Studies 2 and 3, children showed a small tendency to match higher-value belongings with their preferred gender, but this tendency was significantly smaller than the tendency to match higher-value belongings with higher-status racial groups. This pattern of results suggests that something beyond preference-presumably, some representation of the covariation between race and wealth-was driving children's consistent tendency to associate differing degrees of wealth with racial groups in accordance with the racial hierarchy in place in their society. Unlike research conducted three decades ago in the U.S. suggesting that children below third grade did not connect race to wealth disparities, the young children in the present studies clearly showed these associations.
In fact, the tendency to associate race with wealth was just as strong in our youngest participants as in our older participants, suggesting that this association is not learned later in childhood; rather, it is already in place by the early primary school years. While our sample size was too small to assess more subtle age-related differences, future work might probe the question of age-related changes using a larger sample. For example, these data do not shed light on whether an initial representation of group status is created early and then remains fixed throughout childhood, or whether this representation continues to be updated. Future studies might attempt to determine whether the difference in results between the present findings and those of Zinser et al. is due to a stronger association between race and wealth in South Africa than the U.S., historical changes over time, methodological differences between the tasks, or some other combination of factors. These results raise the question of how children come to form associations between race and wealth. In South Africa there are real statistical differences in wealth between racial groups; children could observe these differences firsthand by noticing the types of neighborhoods that racial groups tend to live in, the types of transportation people use, or any number of other indirect indicators of wealth apparent in daily life. Alternatively, or in addition, children could learn this information from media portrayals in which Whites may be shown working in higher-earning occupations, living in fancier neighborhoods, or attending better schools. Finally, children may acquire cultural stereotypes about racial differences in wealth from their family, peers, or community.
These results suggest that, however children are learning the association between race and status, information that runs counter to this association-such as the increasing presence of the Black middle and upper class, the rise of Black political and social power, and the fact that Blacks are the statistical majority-is not strong enough to eliminate the perception of this association from the minds of young South Africans. Future research should focus on how children come to associate race with wealth and on how to eliminate this association if the disparity itself is not eliminated. The present studies focused on wealth as an indicator of social status because wealth is likely to be salient in young children's lives, especially in South Africa-but, of course, wealth is just one cue to social status. As reviewed above, there is evidence that older Black American children associate U.S. racial groups with occupational status. Other aspects of social status are also likely to co-vary with racial group membership in reality, through media portrayals, and in cultural stereotypes. Future research might investigate a full range of factors that children might interpret as being indicative of, or correlated with, social status. For example, do children pick up on cues such as educational achievement, neighborhood, or access to more powerful social networks? If so, do they think that racial groups differ systematically on these dimensions? Furthermore, future research might explore children's associations between social status and perceived power and social mobility. Even with our modest sample sizes, we observed similar results across multiple studies and multiple ages; nonetheless, future research is required to draw strong conclusions and to determine the generalizability of our findings to other environments within Cape Town, to settings outside of Cape Town, and to communities outside of South Africa.
The Cape Town region is one of the most diverse areas of South Africa, and the participants in the present studies had extensive exposure to all major racial groups at their school; whether children with less firsthand exposure would show these effects remains an open question. Additionally, this initial research was conducted in South Africa because of the extremely strong relation between race and social status; might children show the same associations in countries where a more subtle relation exists between race and social status? For example, will children in the U.S. show a similar tendency to associate specific racial groups with differing levels of wealth or other indicators of social status? In addition to children's tendency to associate higher-status racial groups with higher-value belongings, South African children tended to prefer individuals who are members of these higher-status racial groups. This pattern of results is consistent with previous findings, and is best summarized as a tendency for children to prefer groups that are high in social status. Only some children demonstrated a robust preference for their ingroup, and no children demonstrated a robust preference for the racial group that currently comprises the statistical majority in their country and wields the most political power, nor for the racial group that currently constitutes the statistical majority in their region of the country. Instead, children from all racial groups demonstrated preferences congruent with the de facto racial hierarchy in their society. Despite the number of critical changes that have occurred over the last 20 years in South Africa, children's racial attitudes, at least in the Cape Town region, appear to mirror those of South African children in the apartheid years-perhaps because so many indicators of social status are still disproportionately allocated to Whites as compared to Coloureds and Blacks.
Cues like the value and quality of personal belongings, in addition to other indicators of social status, may constitute the type of information that children use in determining how much their society values racial groups relative to each other. This understanding, in turn, may influence children's attitudes toward other groups and toward their own. The longer-term implication of associating racial groups with varying degrees of status is unknown, but there are reasons to suspect such associations are problematic. Of great concern is the general tendency for people to believe that the way things are is the way they are supposed to be, as well as the tendency to preserve the perceived status quo. Insofar as children perceive there to be differences in the relative status of racial groups, these tendencies may result in children believing that racial group differences in status are justifiable, and even normative. Taken one step further, if children believe that racial differences in status are justifiable, they may alter their own behavior accordingly-for example, seeking lower-status occupations if they are members of lower-status groups, or otherwise altering their own expectations for themselves and other members of their group. --- Conclusions These studies are unique in demonstrating that by the primary school years, South African children are acutely aware of correlations between wealth and racial group membership. These data are important in informing both discussions of children's understanding of social status and also broader discussions of how children evaluate the people around them, how children think about their own status, and how children think about opportunities in their own lives. Parents and teachers often have the intuition that children know little about race.
While the racial attitudes literature has suggested that this mentality is misguided, the present data suggest that children not only have attitudes about racial groups, but also make specific ascriptions of wealth associated with different racial groups in line with the de facto racial hierarchy of their society. Given our knowledge of errors in human logic, we call attention to the troubling possibility that these perceptions of hierarchies could color children's interpretations of how the world should be.
Group-based social hierarchies exist in nearly every society, yet little is known about whether children understand that they exist. The present studies investigated whether 3-to 10-year-old children (N=84) in South Africa associate higher-status racial groups with higher levels of wealth, one indicator of social status. Children matched higher-value belongings with White people more often than with multiracial or Black people and with multiracial people more often than with Black people, thus showing sensitivity to the de facto racial hierarchy in their society. There were no age-related changes in children's tendency to associate racial groups with wealth differences. The implications of these results are discussed in light of the general tendency for people to legitimize and perpetuate the status quo.social status; social groups; race; South Africa; attitudes; children Nearly every human society includes groups of people who vary in social status (Sidanius & Pratto, 1999). History is replete with examples of societies in which groups are clearly delineated by status, from the caste systems of India and New Spain to the Jim Crow policies of the American South. The country at the center of the present paper-South Africa-was home to one of the most notorious examples of legally-sanctioned social hierarchy: apartheid. From 1948 to 1994, the South African government built upon and strengthened an existing race-and privilege-based social hierarchy created by the Dutch and British colonial administrations. Apartheid laws enforced a strict race-based hierarchy with Whites as the highest-status group, Blacks as the lowest-status group, and groups like Coloureds (people of mixed racial heritage) and Indians in between (Finchilescu & Tredoux, 2010). Even societies without de jure hierarchy delineations often feature de facto group-based hierarchies (Sidanius & Pratto, 1999).
Most children are born into and develop in societies with legally enforced or culturally implied group-based hierarchies. The present paper investigates whether children are
Introduction In the modern era, technological advancements have dramatically transformed research and data analysis within the realm of social sciences. Big data, advanced statistical tools, digital platforms, and, notably, CAQDAS have broadened the scope of research possibilities. Tools such as NVivo, Atlas.ti, webQDA, and MAXQDA under the CAQDAS umbrella have revolutionised qualitative research. They enable scholars to efficiently organise, categorise, and analyse vast amounts of textual, audio, and visual data. These software solutions offer nuanced coding mechanisms, facilitating a deeper understanding of the themes, patterns, and narratives present in the data. This systematic approach enhances the rigour and credibility of qualitative analyses and bridges the divide between qualitative and quantitative methods. Moreover, social media sentiment analysis and large-scale surveys provide researchers with richer and more diverse datasets. Machine learning and artificial intelligence techniques can now comb through these vast datasets with precision, identifying patterns and insights that were once beyond reach. Additionally, data visualisation tools make complex data more understandable, promoting a clearer interpretation and communication of the results. CAQDAS first emerged in the latter half of the twentieth century, when it was primarily linked with leveraging technology for quantitative content analysis and Grounded Theory. Within this domain, debates over the use of computers in qualitative research are most heated and continue to evoke strong emotions. Yet, with the digital revolution of the past two decades, the surge of new information technologies, and the rise of digital media, CAQDAS has expanded its reach and found broader applications.
This expansion, fuelled by digital advancements, has paved the way for more general applications in traditional qualitative research, encompassing narrative methods, interviews, and textual and visual content analyses. A fundamental transition occurred when state-of-the-art technology began integrating with conventional research methodologies, marking a transformative phase in the ever-evolving landscape of qualitative research methodology. Through this synthesis of technology and tradition, CAQDAS bridges traditional data analysis methods and the modern, digitised methods enabled by technological advancements, connecting historical methodologies with the demands of contemporary practice. It harmoniously merges quantitative and qualitative research methods, enhancing the depth of data visualisation. The social sciences, education, health and medicine, business, and the humanities are just a few of the fields that employ CAQDAS. For instance, qualitative researchers utilise it to gather, plan, and analyse substantial volumes of qualitative data from several teams in an international project and to support analytical awareness and reflexivity. Additionally, it is utilised to help with record-keeping, to organise and manage qualitative data, to provide an audit trail, to improve and demonstrate methodological rigour, and to allow researchers to advance from description to theory creation. CAQDAS plays a pivotal role in data management and the overarching research process within these disciplines. Its capabilities are vast, segmenting text, image, sound, and video into manageable units and semantically indexing them for in-depth analysis. The software suite includes functionalities ranging from coding, memoing, and paraphrasing to annotating, grouping, and network-building. Moreover, its tools aid in lexical searching, data retrieval, and comparisons, evolving into a comprehensive platform for researchers.
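The core "code-and-retrieve" workflow shared by these packages can be sketched in a few lines. The interview excerpts and code labels below are invented for illustration; real CAQDAS tools layer memoing, code hierarchies, and visualisation on top of this basic mechanism.

```python
from collections import defaultdict

# Hypothetical interview excerpts: (participant ID, text segment)
segments = [
    ("P1", "I switched to online interviews during the pandemic."),
    ("P2", "Coding on paper got unmanageable once we had fifty transcripts."),
    ("P3", "Our team works on the same project file from three countries."),
]

codebook = defaultdict(list)  # code label -> list of (participant, segment)

def code_segment(participant, text, *codes):
    """Attach one or more analytic codes to a text segment."""
    for c in codes:
        codebook[c].append((participant, text))

code_segment("P1", segments[0][1], "remote methods")
code_segment("P2", segments[1][1], "data volume", "tool adoption")
code_segment("P3", segments[2][1], "team collaboration", "tool adoption")

# Retrieval: pull every segment tagged with a given code
for participant, text in codebook["tool adoption"]:
    print(participant, "->", text)
```

The same index structure supports the lexical searching and comparison functions mentioned above: once segments are coded, any query reduces to a lookup over the codebook.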
Features like data visualisation, collaboration support across time and space, and facilitation of both quantitative and qualitative methodologies make CAQDAS a prominent tool in today's research. Notably, it assists in illustrating patterns and trends and aids qualitative researchers in visualising code relations and creating semantic networks. These multifaceted utilities ensure a transparent research process, bolstering methodological rigour and empowering scholars in their analytical endeavours. In contrast to the 1980s and 1990s, CAQDAS programs now include various functionalities dedicated to users in the humanities and social sciences. However, availability depends on the type of software. CAQDAS software can be divided into three groups: licensed programs equipped with many advanced analytical functionalities, open-source tools that usually offer a basic range of functionality, and online programs that are functionally advanced to varying degrees. The advantage of online programs, as opposed to licensed and open-source ones, is an extensive range of flexibility that enables the development or implementation of new algorithms and analytical techniques based on programming languages, along with the possibility of synchronous and asynchronous teamwork and of combining qualitative and quantitative methods. These innovations have positioned CAQDAS at the forefront of the modern qualitative research revolution, striking a balance between the cherished methods of yesteryear and the technological potential of today. Literature meta-analyses of the contemporary field of qualitative research show that the development of CAQDAS over the last three decades is reflected in the currently dominant narrative qualitative methodology. Publications in the field of CAQDAS show that software development and its analytical functionalities tend towards the methods and procedures of textual and visual data content analysis.
Conversely, an emerging trend integrates computer-aided qualitative data analysis with methods from the digital humanities, natural language processing, and text mining. This convergence has given rise to the new interdisciplinary field of Digital Qualitative Sociology. The rapid advancement of technology has reshaped many sectors, with qualitative research prominently affected. Understanding and leveraging new digital technologies, alongside enhancing digital capabilities, are crucial for researchers in this domain. Tools like CAQDAS have significantly changed the way we approach qualitative research. Over the past years, the applications of CAQDAS have expanded, moving beyond its original purpose. However, it is critical to remember that traditional research methods remain vital even alongside these new digital tools. The two complement each other, enhancing the overall quality of research. Collaboratively cultivating and harnessing digital potential has become a hallmark of modern qualitative research methodology and data analysis. Digitality is pivotal for advancing contemporary qualitative research methodology and CAQDAS. For researchers to excel in computer-assisted data analysis, proficiency in these fast-evolving digital tools and methods is imperative. The benefits of mastering digital technology are evident in the diverse applications and advantages of tools like CAQDAS, a platform that harmoniously integrates both old and new research methods. A robust understanding of digital skills is essential to use these tools to their fullest potential. Qualitative researchers must employ these digital tools and methodologies in the contemporary research landscape. --- Digital Society, Methods, and Possibilities Digitality is a foundational element in understanding the concept of a digital society.
This term encapsulates the transformative shifts modern societies undergo as they embrace and integrate information and communication technologies across daily life, including home, work, education, and leisure. As Lindgren notes, digital innovations are reshaping our societal structures, economic dynamics, and cultural landscapes at an unprecedented pace and magnitude. Digitality can be defined as the experience of living within a digital culture. The term is inspired by Nicholas Negroponte's book, "Being Digital," drawing parallels with the concepts of modernity and post-modernity, especially with the Digital Society approach. The Digital Society, as defined by Schwarz, is an emergent and interdisciplinary research domain that arises from integrating advanced technologies into our societal fabric and cultural norms. Central to this evolution is the need to grasp the profound shifts in how we understand and study society. This includes a deep dive into how technological transformations influence our lives, including private and social interactions, education, governance, democratic processes, and business. As the scale and dynamics of these technological changes evolve, so does the methodology employed in the social sciences, especially in qualitative research. Research in the 1990s began to delve into the implications of digitality and digital interactivity. Scholars explored the immediacy and omnipresence of digital communication, the interactive and participatory characteristics of digital media, and the trend towards "shallow" information searches that are quick and surface-level rather than deep dives into a topic. These discussions share roots with postmodernism, assuming that media plays a crucial role in forming identity, culture, and social order, while diverging significantly from analogue critical theory.
They highlight a departure from traditional analogue critical theories in that audiences can produce new texts that support the actions of other participants rather than just their own idiolect; in the digital age, everyone has the potential to be a creator or influencer. Audiences are no longer passive interpreters of content; they actively create new content, influencing and shaping the behaviours and perspectives of others. Today, digitality primarily manifests in the ability to store, search, and categorise information, exemplified by tools and platforms such as the World Wide Web, Google search engines, and Big Data repositories. It also facilitates communication via mobile phones, blogs, vlogs, YouTube, and email. However, the digital era has not come without its drawbacks. Issues like computer viruses, loss of anonymity, the spread of fake news, and spam emails plague this information age. Thanks to digital possibilities and advances, our society, economy, and culture are changing in ways we have never seen before. Mobile technologies, Cloud collaboration, Big Data analytical systems, Natural Language Processing, Text Mining, Neural Network algorithms, and the Internet of Things offer unprecedented individual and social opportunities. These innovations drive economic progress, enhance the quality of life for individuals, and boost efficiency across diverse sectors, from education and health services to transportation, energy, agriculture, manufacturing, retail, and public administration. Beyond these tangible benefits, digitality also plays a transformative role in governance and policymaking. Digital tools empower policymakers with data-driven insights, fostering more informed decision-making.
Furthermore, these technologies stimulate citizen engagement, promote greater transparency, and enhance the accountability of governing entities. Notably, the widespread accessibility of the Internet holds the promise of strengthening democracy, championing cultural diversity, and safeguarding fundamental human rights, such as freedom of expression and access to information and connectivity. To better understand how these technological changes affect our social and private lives, education, science, government, democracy, or business, we must also understand how their scale and dynamics affect the contemporary research methodology of the social sciences, including qualitative methods. This highlights the interplay between technology and research methodologies. The digital revolution demands that we critically evaluate changes in data collection techniques, analysis procedures, and qualitative theorising. Discerning the potential advantages and challenges of digitising qualitative research and computer-assisted analytical practices is crucial. Indeed, the rise of digital methodologies has opened the door to many qualitative digital possibilities. A case in point is the recent COVID pandemic, which, as noted by Wa-Mbaleka and Costa, compelled qualitative researchers to pivot from traditional in-person methods like focus groups and interviews to online platforms. Researchers can now communicate with more people in less time, including in difficult-to-reach regions. Conducting interviews in the comfort of participants' homes fosters deeper intimacy, and researchers can leverage technology to record interviews, create transcripts, and use mobile devices to organise the sequence of answers. This transition represents a shift in research methods driven by technological advancements. However, as Palys and Atchison emphasise, the rapid change to online qualitative research methodologies has been met with mixed responses from the research community.
Some voice concerns, citing epistemological and methodological reservations about swift digitisation and Computer-Assisted Qualitative Data Analysis Software. Conversely, others view this as an opportunity to innovate in qualitative research and enhance their methodological and analytical prowess. This highlights the debate and differing opinions among researchers about the role and impact of technology in research. The evolution of research methodologies has always been influenced by technological advancements and the changing ways we interact with the world. As digital technologies became more pervasive, their influence seeped into academic research, opening up novel avenues, challenging traditional paradigms, and forging new directions. Around 2007, internet-related research underwent a significant transformation, often called the "computational turn" or data studies. This "computational turn" describes the adoption of new techniques and methodologies from computer science and its associated fields. At this point, the Internet ceased to be viewed as a distinct "cyberspace" or an extension of offline society. While the digital divide among Internet users persists, there is no return to "virtual" research in the old style. Instead, the Internet began to be studied as collections of different social and cultural data, a space for communication and interactivity, which can revolutionise our understanding of collective human behaviour. In this regard, two critical articles were published: "A twenty-first-century science" and "Computational social science." Their authors discussed how we could study societal conditions and cultural preferences with Internet data. Most research on human interaction has been based on selected data relating to thematically focused case studies.
Digital technologies offer an unusual, second-by-second picture of interaction over extended periods, providing information about the structure, dynamics, and content of relationships. The research shift is from individuals to societies, from individual to collective thinking, and from single behaviour to social patterns, without limiting the number of study participants. The term computational turn refers to the process by which new techniques and methodologies drawn from computer science and related fields are implemented in the humanities and social sciences. They help to aggregate, manipulate, and manage structured and unstructured data. In social science, two terminologies relate to this turn: Social Science Computing (SSC) and Computational Social Science (CSS). CSS refers to the field of social science that uses computational approaches in studying social phenomena. SSC is the field in which computational methodologies are created to assist in explanations of social phenomena. Digital methods help us to study social change and cultural conditions using various online data, leveraging technologies to gain deeper insights into societal trends and patterns. These methods can use, for example, computational algorithms embedded in digital devices or computer language objects such as HTML or XML hyperlinks, tags, timestamps, likes, shares, and retweets to learn how people communicate, what their opinions are, and how they behave online. Digital methods are part of a computational turn in the contemporary humanities and social sciences. They are positioned alongside other recent approaches, such as cultural analytics, cultural studies, webometrics, and altmetrics, where distinctions are made between the data types and the algorithms. Their versatility allows the development of new strategies in computer-aided qualitative data analysis.
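The "computer language objects" mentioned above (hyperlinks, tags, timestamps) can be harvested with a few lines of code. The following is a minimal sketch using only Python's standard-library `html.parser`; the page snippet and the `TraceExtractor` class are hypothetical illustrations, not part of any particular CAQDAS tool.

```python
from html.parser import HTMLParser

class TraceExtractor(HTMLParser):
    """Collect hyperlinks and <time> timestamps from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.timestamps = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        if tag == "time" and "datetime" in attrs:
            self.timestamps.append(attrs["datetime"])

# A made-up fragment of a social-media-style page:
page = """
<article>
  <time datetime="2023-05-01T10:15:00Z">1 May 2023</time>
  <p>Shared via <a href="https://example.org/post/42">this post</a>.</p>
</article>
"""

parser = TraceExtractor()
parser.feed(page)
print(parser.links)       # ['https://example.org/post/42']
print(parser.timestamps)  # ['2023-05-01T10:15:00Z']
```

In a real project, the same pattern would run over archived or scraped pages, turning hyperlinks and timestamps into analysable traces of online communication.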
With the increase in computing power over the last few years, together with the growing amount of cultural data now available in digital form, computer software can analyse vast amounts of textual and visual data contained in corpora. This is a new dimension of thinking in narrowly focused, contextual data analysis and qualitative research methodology. The field of digital qualitative research is becoming a rapidly growing multidimensional and multifaceted research area. More and more researchers are addressing the social, cultural, political, anthropological, and other dimensions of Computer-Mediated Communication (CMC) or using CMC to generate, collect, and analyse field data. Digital methods provide various research strategies for dealing with the temporal and unstable nature of online data. In addition, these methods have been successfully used to identify the problems with online data, such as the unsustainability of web services and the instability of data streams, where APIs are reconfigured or cease to function. Qualitative research practices in the digital world can be supported by digital tools at every step of the research project, from data collection, transformation, and analysis to the outcomes. These observations combine three elements: a) digital technologies and their possibilities, which allow the qualitative research community to do new things it has never undertaken before and to do better the things it has always done; b) the social dynamics triggered, supported, and fuelled by the development of digital technologies and the implications they have for sampling and social research; and c) the possible implications of these social and technological changes for the development of the field of qualitative research. --- Datafication, Digital Humanities, and New Analytical Approaches The digital era brings datafication, permeating the very fabric of qualitative research.
The widespread digital integration between traditional and new approaches has reshaped how we understand and interpret information. This constant influx of data derived from numerous digital sources challenges traditional methodologies and beckons for novel approaches to qualitative inquiry. Datafication, the constantly increasing amount of data in daily life combined with the improvement of analytical techniques, represents a paradigm shift in our analytical mindset. In this view, theorising becomes redundant in discovering knowledge about the regularities that govern society. In essence, we are transitioning from assumption-based models to data-driven understandings. Big Data, Digital Humanities, and other new approaches to data analysis help produce meaningful knowledge about complex social phenomena without the need to formulate hypotheses. The data are supposed to speak for themselves, free from theoretical limitations or researchers' assumptions. It is a move towards a more organic interpretation of data. Big Data and new digital technologies encourage researchers to look for connections in related data rather than causes. The answer to why becomes less important than the search for the answer to what. This represents a profound shift in how we approach research questions. The aim is not to discover the causes of phenomena and processes but to look for the connections and relationships between the data, codes, categories, or concepts, as in qualitative data analysis. Without digitisation and datafication, there would be no Big Data and no modern CAQDAS, combining interdisciplinary and multi-paradigmatic approaches in qualitative data analysis and research. The digital transformation has, in many ways, revolutionised the landscape of qualitative research.
The breadth of contemporary approaches to qualitative data analysis can seem daunting even to experienced social scientists or researchers, especially as they move from the general analytic strategies used in qualitative research to more specific approaches for different types of qualitative data, including interviews, text, sounds, images, videos, and so-called virtual data. These diverse datasets call for nuanced methods of interpretation and understanding. However, general observations regarding implementing new digital technologies into the methodology of computer-assisted qualitative data analysis and qualitative research practices require comprehensive solutions regarding data archiving, data security, and computational capabilities. As we innovate, we must ensure the integrity and security of our data and methodologies. Big Data, Digital Humanities, and new technologies introduce a novel epistemological perspective on designing and implementing social research. The rise of Big Data means that researchers are now confronted with far larger qualitative data sets than before. This shift has changed how we perceive and approach the gathering and analysis of data. Knowledge in such research is not solely derived from testing theories based on relevant empirical data. Instead, data, digital methods, and advanced algorithms, especially from Natural Language Processing (NLP), have become paramount sources of cognition. Utilising computer data analysis, mainly via NLP tools, ensures that qualitative research is conducted more systematically and comprehensively. NLP, a subset of artificial intelligence, enhances the analysis by facilitating automated sentiment and topic evaluations. Our understanding of the social world emerges from these tools and data sources. Traditional methods of theory testing are now complemented by insights gleaned directly from the vast amounts of data processed using digital tools. Rob Kitchin astutely noted this transformation.
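Automated sentiment evaluation of the kind NLP toolkits provide can be illustrated with a deliberately simplified lexicon-based scorer. Real NLP libraries use trained models; the word lists, function, and responses below are invented purely for illustration.

```python
# Minimal lexicon-based sentiment scoring: a toy stand-in for the
# automated sentiment evaluation that NLP toolkits perform.
POSITIVE = {"helpful", "intuitive", "great"}
NEGATIVE = {"confusing", "slow", "frustrating"}

def sentiment_score(text: str) -> int:
    """Count positive minus negative lexicon hits in a response."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

responses = [
    "The new interface is great and very intuitive.",
    "Uploading files is slow and the menus are confusing.",
]
scores = [sentiment_score(r) for r in responses]
print(scores)  # [2, -2]
```

Even this crude sketch shows the appeal for qualitative researchers: hundreds of open-ended responses can be triaged automatically before close reading begins.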
He emphasised that this shift affects the broader scientific community, not just qualitative research. Kitchin underscored the profound implications of these evolving methodologies on the world of science. We are witnessing a multidisciplinary, digital paradigm that cannot be solely defined in terms of traditional scientific cognition. This paradigm champions CAQDAS for adept data management and coding. With the help of NLP tools, this method can easily manage data from many different sources, making more data available for qualitative research. This evolution suggests that the traditional notion of a "one-size-fits-all" approach to science is under revision. Instead, we now have a methodology that can easily handle vast qualitative data sets, bridging the qualitative-quantitative divide and fostering the rise of mixed-methods research. The digitisation process in research and analysis showcases the diverse strategies inherent in qualitative research and highlights ethical considerations concerning data privacy, consent, and transparency. As digitisation becomes integral to research, it reflects qualitative researchers' myriad techniques and perspectives, ensuring the data is responsibly collected, stored, and analysed. In a Digital Society, qualitative research methodology can be implemented by extracting data from pre-existing digital platforms like forums, social media, and websites through web scraping techniques, or by employing digital tools designed for researchers that facilitate direct interaction with participants in their online environments. Examples of such tools include web-based software for conducting interviews and software for online interview transcription.
Moreover, modern qualitative research must take into account the digital possibilities opening up as these platforms and tools redefine how we interact and convey information: the virtual revolution of portable computing power brought about by mobile devices like smartphones, tablets, and wearables, and the digital possibilities generated by business trendsetters such as Apple, Facebook, or Google, whose respective apps have millions to billions of users, making them significant data sources. The digitally managed research process requires an understanding of the impact of digital technologies on all aspects and phases of design, implementation, coding, analysis, and the dissemination of qualitative research results, which means considering both the advantages and challenges posed by these technologies. Along with the datafication and digitisation of qualitative research and greater collaboration, the methodology is changing, adapting to the dynamic nature of the digital age, as is the language of data analysis and research practices, which requires clarification and classification to ensure consistent understanding and application among researchers. Using digital tools in qualitative social science research is not necessarily new but appears to be steadily increasing, highlighting the growing trust and reliance on these tools. Digitality helps researchers work without time and space limits, offering unprecedented flexibility and reach and blurring the boundary between the qualitative and the quantitative on the way to digital mixed methods. These changes, in turn, lead us to a discussion of the validating standards for practising digital qualitative research, emphasising the need for rigour and integrity in the digital era. Digital qualitative methodologies not only introduce challenges in data management but also raise essential queries regarding the genuineness and reliability of data obtained through digital means.
Achieving a delicate balance between capitalising on digital advantages and upholding research integrity necessitates a profound comprehension and reflective implementation of these digital techniques. In facing these challenges, researchers must continuously strive to reconcile cutting-edge digital methodologies with fundamental research ethics, research principles, and qualitative data security. Ensuring robust digital data security has become pivotal in the era of datafication and digitisation, particularly in CAQDAS and qualitative research. The prevalent use of interactive collection methods, such as online surveys and computer-assisted interviews, necessitates stringent protocols to safeguard data and uphold the anonymity of respondents, especially amidst escalating concerns over cyber-attacks and data breaches. Technological means to secure data have become paramount; they include robust cybersecurity measures, well-established procedures for data management, and researcher training in ethical data handling. Furthermore, adherence to various data protection regulations, notably the General Data Protection Regulation (GDPR) in the European Union, is imperative, underscoring the necessity of obtaining explicit and informed consent, practising data minimisation, and having a legitimate basis for data collection, storage, and usage. Non-compliance with these regulations can result in significant fines and damage to reputation. Therefore, a clear understanding of these regulations is not just a legal necessity but also informs ethical research practices, ensuring that participant data is treated with the utmost respect and integrity throughout the research process. Anonymisation, which involves meticulously altering personal data to prevent the identification of subjects without additional information, emerges as a crucial tool.
Researchers, therefore, must ensure that data, even when stripped of identifiable markers or replaced with pseudonyms, remains thoroughly anonymous and is immune to reverse-engineering tactics that could compromise participant identity. Employing technological solutions to bolster data security, such as frequent software updates, the use of strong, unique passwords, and the application of encryption in data transit and storage, is not merely an operational requirement but also an ethical obligation. Moreover, the implementation of these technological and procedural safeguards must be transparently communicated to participants, ensuring that they are fully aware of how their data will be protected throughout the research and beyond. Some commercial CAQDAS programs, including ATLAS.ti, NVivo, MAXQDA, and webQDA, offer solutions facilitating collaborative analysis, like client-server or cloud-based working spaces, enabling researchers to collaborate without jeopardising data security. However, responsibility for ensuring data security in these collaborative platforms rests with the researchers and the analytical tool providers. Thorough vetting of third-party providers and software used in the research process is essential to mitigate risks. Prudent selection of digital platforms for research, hence, not only safeguards data but also enhances the reliability and validity of the research findings, ensuring that they are derived from a secure and stable digital environment. While desktop solutions remain available, the burgeoning phenomenon of datafication and associated computational demands renders traditional data storage and analysis methods progressively unreliable. Consequently, researchers seek more potent and secure data-handling solutions.
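The pseudonymisation and reverse-engineering concerns discussed above can be made concrete with a small sketch. It uses a keyed hash (HMAC) so that a participant's name maps to a stable pseudonym that cannot be reversed by hashing candidate names, provided the key is kept separate from the data. The key, record, and helper name are illustrative assumptions, not a prescribed procedure.

```python
import hmac
import hashlib

SECRET_KEY = b"project-specific-secret"  # must be stored separately from the data

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym.

    A plain hash of a name can be reversed by hashing candidate names;
    keying the hash (HMAC) blocks that attack unless the key leaks.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return "P-" + digest.hexdigest()[:12]

record = {"name": "Jane Doe", "quote": "I only use the app at work."}
safe_record = {"participant": pseudonymise(record["name"]),
               "quote": record["quote"]}
print(safe_record["participant"])  # same name and key always give the same pseudonym
```

Note that pseudonymisation alone is not full anonymisation under the GDPR: the quote itself may still identify the participant, which is why careful review of the content remains part of the procedure.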
In conclusion, as the digital landscape continues to evolve, researchers must balance leveraging advanced digital and collaborative tools with maintaining rigorous data protection, consistently placing the ethical treatment of participant data at the forefront of their practices. --- The Importance of Researcher's Digital Skills We can access various innovative tools and methodologies in today's digital age. However, their practical use is less about the tools themselves and more about the professional skill set that wields them. These skills, encompassing technical, soft, and ethical dimensions, form the bedrock of quality research in our digital era. A primary component of this skill set is technical proficiency. Modern professionals should be familiar with various software and platforms and adept at leveraging their intricate features and functionalities. Alongside this, technological know-how is a crucial aspect of digital literacy. This goes beyond just using digital tools; it involves discerning which tool is best suited for a specific task and evaluating the credibility of online resources. While AI tools bring advanced capabilities, they are not without flaws. This is where the skill of AI interpretation becomes indispensable. Professionals must be competent in reviewing AI-generated outputs, identifying potential inaccuracies, and ensuring precise products, like transcriptions. Understanding cybersecurity fundamentals is paramount, given the increasing cyber threats in our digital-centric world. Professionals should be versed in best practices related to data encryption, secure data storage, and safe data transmission. Soft skills, which can sometimes be undervalued next to technical skills, are equally essential. In the realm of virtual interactions, active listening becomes vital. Professionals must be attuned to subtle vocal nuances, pauses, and inflexions without physical cues.
Effective communication is also paramount, especially in virtual settings where physical cues are lacking. Moreover, the rapid advancements in the digital world necessitate professionals to be adaptable, ready to learn, and prepared to pivot as new tools and methodologies come to the fore. As digital tools erase geographical boundaries, cultural awareness and empathy become more critical. Professionals must navigate these global interactions sensitively, understanding the varied cultural nuances to foster genuine exchanges. Efficient project management is another crucial skill, especially when dealing with digital resources, virtual teams, and online tasks. This ensures that projects progress seamlessly, even in dispersed digital environments. Ethics is pivotal in the digital age, accompanied by unique data privacy and transparency challenges. Professionals must maintain integrity and uphold stringent ethical standards. Coupled with this is the commitment to ongoing learning. Professionals should stay updated via regular workshops, webinars, and training sessions as the digital landscape evolves. Of course, collaboration, too, is essential. While digital tools have made it easier to bridge geographical divides, they also require a solid collaborative spirit to ensure smooth teamwork, even when teams are globally dispersed. Finally, the significance of feedback in this digital age cannot be overstated. Professionals should proactively seek, analyse, and utilise such input as a driving force for methodological refinement and overall growth. In an increasingly digitised world, where almost every facet of daily life intersects with technology, mastering the tools of CAQDAS and digital methods becomes imperative for qualitative researchers. However, a distinct disparity in the grasp of analytical and IT skills is evident among many in the field. This mismatch impedes fully realising digitisation's potential in the social sciences.
This potential, which lies in processing vast amounts of data and identifying patterns at a pace unimaginable a few decades ago, adds a layer of depth to research. The initial expectation for researchers might have been a primary focus on their study area, but the rapid pace of technological advancements has set a new paradigm. This shift has moved from pen-and-paper data analysis to a reliance on complex digital tools and software. Many researchers without formal training in IT or computer sciences find themselves on a steep learning curve. Navigating this curve requires patience, determination, and often a willingness to venture outside one's comfort zone. The self-learning process might be daunting, but it is propelled forward by these researchers' innate curiosity and analytical prowess. By immersing themselves in the dynamic realm of technology, researchers bridge the competency gap and bring innovative solutions that defy the constraints traditionally associated with digitisation. These innovative solutions might range from new data visualisation techniques to implementing machine learning algorithms in qualitative research. This technological era has emphasised the pivotal role of interdisciplinary collaboration, particularly between IT and qualitative research. Melding the methodologies and tools from fields like Digital Humanities, Corpus Linguistics, Big Data Analysis, and Computer Science has redefined the contours of qualitative data analysis. This fusion allows for harnessing machines' computational power and precision with a nuanced understanding of human behaviour, facilitating deeper research explorations. This amalgamation enriches the research process, allowing for more profound insights and broader applications. Such expanded scope is akin to opening a previously locked door in the mansion of knowledge. Yet, even as the horizons expand, challenges persist. 
For instance, the realm of collaborative technologies or the intricacies of database systems often remains enigmatic for many qualitative researchers. In the digital age, a researcher is not merely expected to possess expertise in data analysis methodologies or field experience; their skillset must now encompass database management and even touch upon programming. They are expected to wear multiple hats, transitioning seamlessly from a core researcher to a pseudo-technologist. Integrating CAQDAS programs with languages like Python or R is a testament to this shift. Such advancements cater to tasks like data pre-processing, automatic coding, and deeper analyses. However, lacking these nuanced skills compels qualitative researchers towards more collaborative avenues. Recognising one's limitations and seeking partnerships becomes crucial. Teaming up with professionals from diverse domains like computer science and mathematics fills the knowledge gap and brings a confluence of perspectives to the research, enriching it manifold. In essence, the future of qualitative research hinges upon a harmonious blend of traditional methodologies and the evolving digital toolkit. Embracing this change, researchers stand at the cusp of a revolution in how qualitative data is collected, analysed, and interpreted. --- CAQDAS, Digitality, and Collaborative Working We live in a digital culture, one of whose core beliefs is that digitality encourages connectivity, collaboration, communication, community, and participation in global social networks. This can be seen in social and news media discourse. But it becomes even more visible and tangible in computer-aided qualitative data analysis software, which is becoming more collaborative, systematic, and interactive.
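The integration of CAQDAS with scripting languages such as Python, mentioned earlier, often serves tasks like automatic coding. The routine below is a toy illustration of rule-based auto-coding against a keyword codebook; the codebook, segments, and function are hypothetical and do not reproduce the API of any CAQDAS package.

```python
# A hypothetical sketch of rule-based "automatic coding": assigning
# codebook labels to transcript segments by keyword matching, the kind
# of pre-processing a Python or R bridge to a CAQDAS package might do.
CODEBOOK = {
    "privacy": {"anonymous", "consent", "data protection"},
    "usability": {"interface", "menus", "navigation"},
}

def auto_code(segment: str) -> list[str]:
    """Return the sorted list of codes whose keywords occur in the segment."""
    text = segment.lower()
    return sorted(code for code, terms in CODEBOOK.items()
                  if any(term in text for term in terms))

segments = [
    "I worried about consent when the app asked for my contacts.",
    "The menus make navigation easy once you learn the interface.",
]
print([auto_code(s) for s in segments])  # [['privacy'], ['usability']]
```

Such scripted pre-coding never replaces the researcher's interpretive coding; it only proposes candidate codes for human review, which is where the collaborative workflow discussed next comes in.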
In the social sciences, there is a shift from the digitality of qualitative research to collaborative data coding, analysis, and thinking. Commercial and online software now supports collaborative data collection, coding, analysis, thinking, and writing. The qualitative research process can be conducted digitally and collaboratively on web or desktop software: interview transcribing, project and task managing, codebook preparing and coding, data analysis and modelling, interpretation and theorising, and final writing and representation of findings. With the digitalisation of qualitative research and the prevalence of CAQDAS among researchers, a new style of thinking and approach in qualitative data analysis is taking shape based on collaborative methodology and teamwork. There has been a growing interest in research collaboration in recent years, and different terminologies have emerged to describe this phenomenon. In a descriptive literature review, Yang and Tate explored the field of cloud computing research and proposed a classification structure. Similarly, other scholars have been inspired by the potential of web-based collaboration and have used terms similar to those employed in this study. For example, Bröer et al. introduced the concept of collaborative interpretation, which involves researchers working together to interpret and analyse data, leveraging the power of online collaboration tools. This approach acknowledges the benefits of collective intelligence and the diverse perspectives that can be brought to the interpretation process. Another related term is online collaborative research, which refers to research conducted openly and collaboratively, often leveraging online platforms and communication tools.
This approach embraces the principles of openness, inclusivity, and shared knowledge creation, allowing for greater engagement and participation from a diverse range of researchers. These terminologies reflect the evolving nature of research collaboration and the increasing reliance on digital technologies to facilitate collaboration and knowledge sharing. The emergence of cloud computing and web-based platforms has expanded the possibilities for collaboration beyond geographical boundaries, enabling researchers to connect and collaborate globally. It is important to note that while the specific terminologies may vary, the underlying principles and objectives remain consistent. The goal is to enhance research collaboration, foster innovation, and leverage the collective expertise of researchers to advance knowledge and address complex challenges. These terminologies offer valuable insights into the various dimensions of research collaboration and provide a foundation for further exploration and understanding in this rapidly evolving field. The literature review shows that collaborative analysis and research can be carried out on three primary methodological levels: interdisciplinary, international, or interpersonal (senior-junior, insider-outsider, and academic-practitioner). Collaborative analysis of qualitative data appears to yield a variety of effects, from more informed, complex, or helpful digitally supported qualitative data analysis leading to new interpretations that transcend present knowledge, to creating opportunities for individual learning and the development of new analytical skills. Such potential benefits are not risk- or cost-free. Risks and costs, like the benefits, derive from the confrontation of diverse perspectives and research methodologies.
Collaboration therefore needs institutional support and flexibility, straightforward working procedures, and social relations that promote open research debate without threatening the researcher's identity. All of these may help to alleviate the potential risks of collaborative analysis and research. Of course, effective collaboration in qualitative research depends on the types of research and analytical methods applied, the number of people participating in the project, and the project scale. The digitisation that permeates our daily lives further blurs the distinctions between commercial software programs. These now boast enhanced analytical capabilities and foster a more collaborative environment for research. The digital transformation has reshaped the dynamics of scientific work, drawing it closer in nature to conventional business projects. Emphasising the pivotal role of digital tools, this article delves into the evolution of computer-assisted qualitative data analysis. It underscores the importance of collaboration and of harnessing digital advancements in modern qualitative data collection and analysis. The growing evidence suggests that digital tools are invaluable in bolstering collaborative efforts and refining the qualitative research process. For instance, Costa et al. introduced the 4C collaborative work model, outlining the collaborative capabilities of the qualitative analysis software webQDA. Echoing the importance of collaboration, Davidson et al. stressed the necessity for thriving communities of practice. These communities are crucial in facilitating the development and application of digital tools in qualitative research. Furthermore, Crichton posited that digital tools not only streamline the tasks of qualitative researchers but also enrich the data, offering greater depth. Reinforcing this viewpoint, Paulus et al. delivered a comprehensive overview, showcasing how digital tools can be leveraged at various junctures of the research journey.
In practice, research collaboration and collaborative analysis have numerous methodological advantages. Three notable benefits are analytical credibility, methodological reflexivity, and intersubjective thinking. Credibility is a synthetic outcome of the primary analytical process stages, including data coding, investigating the relationships between codes, developing interpretations, and qualitative theorising. Measuring methodological reflexivity or intersubjective thinking presents a challenge and becomes more apparent in later data analysis stages than credibility. These qualities are most visible when we examine what does and does not work within the team coding process. Coding is at the core of qualitative analysis, but its effectiveness depends on the volume of data at hand. This is an iterative process in which the structure of the codes is dynamic and undergoes continuous transformation as the researchers delve into the semantic contexts and semantic structure of the data. With the digitisation of qualitative research and the development of new CAQDAS functionalities, greater emphasis is placed on the data validation procedure and on ensuring the reliability of coding. To verify this reliability, we use the inter-coder agreement procedure. Computing the compatibility of coding is used to compare coding consistency between several coders. Such a procedure can help uncover differences in interpretation, clarify equivocal rules, identify ambiguity in the text, and finally quantify the level of agreement obtained by coders. Unfortunately, applying inter-coder agreement procedures often involves requirements or assumptions incompatible with qualitative data analysis processes. At least two compatibility problems can be identified: the codebook problem and the segmentation problem.
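One widely used statistic for the inter-coder agreement procedure described above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch for two coders follows; the codes and segment labels are invented for illustration.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa for two coders labelling the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of segments coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["risk", "risk", "benefit", "risk", "benefit", "benefit"]
b = ["risk", "benefit", "benefit", "risk", "benefit", "benefit"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

Values near 1 indicate strong agreement beyond chance; low or negative values signal the interpretive differences, equivocal rules, or ambiguous text that the team then discusses.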
Generally, CAQDAS software may use four inter-coder agreement criteria based on code occurrence, frequency of code use, code importance, and the overlapping of codes in a text. Methodological, collaborative reflexivity means moving beyond individual rationality towards collective rationality and incorporating local understanding into global understanding. This involves the third aspect of collaboration: creating an intersubjective space for open dialogue, discussion, and perspective-transcending knowledge. This process may be described as collaborative knowledge production in computer-assisted qualitative data analysis and digitally supported research. In a collaborative context, the researcher must be willing to work in a framework of mutual support between peers and participate in the synergy of the group to organise complex tasks via communication. The collaborative process offers, in particular, the possibility of interacting effectively and allows the development of analysis, synthesis, problem-solving, and evaluation skills. Thus, as a source of encouragement and support, the collaborative process presents itself as a means of learning and enrichment, in which the sphere of collaboration does not supplant the sphere of action of the individual. However, for the researcher to adopt this attitude, they must see themselves in the collaborative approach and have the means to enable, promote, and facilitate collaboration. --- Summary: Going Digital and Collaborative but Staying Qualitative As the wave of digitisation and datafication intensifies, the domain of social sciences, particularly qualitative research methodology, is evolving into a predominantly data-driven sphere. Intriguingly, this shift aligns with the core principles of grounded theory. What is paradoxical about this transition is that data scientists now frequently explore questions that were once the exclusive domain of sociologists.
These data specialists utilise vast datasets and employ methodologies that diverge considerably from the conventions of the social sciences. Such a shift exposes traditional social sciences, such as sociology and anthropology, to the risk of being overshadowed. This looming risk is amplified by the surge of digital research techniques demanding advanced computational knowledge. Further compounding the situation is the heightened rivalry from the corporate realm, which often has superior access to data. This potential sidelining is especially pertinent when discussing qualitative research. However, it is crucial to underscore that sociologists and anthropologists, unlike many data scientists, possess a rich history and expertise in qualitative research. With the exponential growth of quantitative datasets, extracting meaningful insights without integrating qualitative methodologies becomes a formidable challenge. Thus, the current landscape underscores the importance of "Thick Data" amidst the prevailing "Big Data" epoch. The ascendancy of digitality and digital technology is fundamentally altering the practice of qualitative research and computer-assisted data analysis. Becoming proficient in qualitative research methodologies and computer-assisted data analysis is closely linked to grasping and applying new digital technologies and enhancing digital skills. The mediation of research and analysis via digital technology is becoming the norm, subtly shifting perceptions of qualitative analysis and its execution. These changes resonate with the epistemological and ontological foundations of our research initiatives. In the past decade, qualitative research, especially work with multimedia digital data, has benefited from the development and advancement of software tools that support most core qualitative methodological techniques. Contemporary qualitative research is no longer limited to small sets of interviews but calls for concerted cooperation among researchers.
The digitalisation of qualitative research and the growing prevalence of CAQDAS are forging a new paradigm in qualitative data analysis built on collaboration and teamwork. CAQDAS and other new digital tools illustrate how technology can bolster the research process, offering time efficiency and adding substantial depth to qualitative work. They facilitate every phase of the research process, drawing on various tools, many of which may already be familiar to researchers, and providing practical case studies drawn from actual research. Whether we use traditional or digital, computer-assisted methods, it is essential to recognise that qualitative data analysis mandates the careful, systematic, and thorough management of substantial text data, such as interviews, notes, and internet data. Thus, the prerequisite for reliable qualitative analysis is efficient and consistent data management, for which adopting digital technologies and appropriate CAQDAS software is natural and obvious. Qualitative analysis, which is inherently complex and multifaceted, begins with fieldwork and involves a carefully planned sequence of activities: conducting interviews, transcribing recordings, reading transcriptions, retrieving phrases, coding text and images, analysing data and visualising it. Due to the nature of this fieldwork flow, CAQDAS and other digital solutions facilitate a seamless transition through the various stages, from the subtleties of the transcription process to the complexities of data analysis to the formulation of theory. Software and digital tools such as CAQDAS empower researchers by streamlining the transcription process, enabling collaborative research, and supporting the development of robust qualitative analysis models. From a broader perspective, integrating digital tools into qualitative research is not a mere convenience but a necessity.
The objective is clear: to attain a nuanced comprehension of the data and rigorously evaluate the effectiveness of the analytical strategies employed. As we delve deeper into the digital age, digitisation, collaboration, and relationality unmistakably define the essence of contemporary computer-aided qualitative data analysis and of qualitative research methodology on a broader scale. The transformative effects of digitisation, datafication, and innovative technologies have been profound. They have reshaped the data collection, processing, and analysis methodology of qualitative research. With the advent of these modern information and communication technologies, a fresh era is unfolding, one marked by unprecedented interconnectedness, presenting researchers with diverse and previously unthought-of opportunities. This new era, empowered by these transformative technologies, has transcended the traditional confines of time and space, forging pathways for enhanced collaborative endeavours in research. Furthermore, it is worth noting the democratising influence of these digital tools on the research landscape. Their ability to tap into vast virtual networks allows qualitative researchers to observe and actively immerse themselves in the investigative process. The spirit of collaboration, rooted in mutual reliance, collective synergy, and shared objectives, drives more successful and impactful research outcomes. Yet an evident paradox exists: while the technological potential is boundless, its full exploitation is hampered by researchers' reticence. Whether it is overlooking the available software or an inability to fathom the collaborative potential of these tools, the root cause often traces back to a fundamental unfamiliarity and lack of expertise with these digital resources. This disconnect between the technological possibilities and their adoption is glaring. To bridge this chasm, a dual-pronged strategy is essential.
Practically speaking, amplifying awareness about these tools' multifaceted capabilities and dependability is urgently needed. On a cultural front, the research community must evolve, transitioning from the siloed, individual-centric research ethos to a more inclusive, dialogic model that is holistic in its approach. In conclusion, the emergence of novel information and communication technologies has enhanced networking capacities and revolutionised collaborative work in qualitative research and computer data analysis. Nevertheless, challenges arise due to researchers' insufficient knowledge and utilisation of these tools. Via focused instrumental and cultural interventions, researchers can fully harness the transformative potential of these technologies, aligning with the prevailing trends of datafication, digitisation, and collaboration that underpin computer-assisted qualitative data analysis and qualitative research methodology. --- Data Availability Statement: Not applicable. Soc. Sci. 2023, 12, 570 ---
The differentiation of contemporary approaches to qualitative data analysis can seem daunting even for experienced social science researchers, especially as they move forward in the data analysis process from the general analytical strategies used in qualitative research to more specific approaches for different types of qualitative data, including interviews, text, audio, images, videos, and so-called virtual data. By discovering the domain ontology of the qualitative research field, we see that there are more than twice as many distinct classes of data analysis methods as there are qualitative research methods. This article critically reflects on qualitative research and the computer-assisted qualitative data analysis process, emphasising its significance in harnessing digital opportunities and shaping collaborative work. Drawing on our extensive analytical and research project experience, recent research results, and a literature review, we try to show the impact of new technologies and digital possibilities on how we think about and conduct qualitative data analysis. The essence of this procedure is a dialectical interplay between the new world of digital technology and classic methodology. The use of digital possibilities in qualitative research practice shapes researchers' identities and their analytical and research workshops. Moreover, it teaches collaborative thinking and teamwork and fosters the development of new analytical, digital, and Information Technology (IT) skills. It is difficult to imagine contemporary qualitative research and data analysis in the humanities and social sciences without them. Opening up to modern technologies in computer-based qualitative data analysis shapes our interpretative frameworks and changes the optics and perception of research problems.
INTRODUCTION Child, early and forced marriage and unions (CEFMU), defined by the United Nations as marriage or informal union before the age of 18, is a global problem that violates the rights of children, curtails their schooling, harms their health and constrains their futures. 1 Prevalence among girls ranges from 2.5 times higher than for boys in East Asia and the Pacific to 10 times higher in West and Central Africa. 2 The great majority of CEFMU takes place in low- and middle-income countries (LMICs), with the highest prevalence in sub-Saharan Africa and South Asia. 3 The past 10 years have seen an increased focus on the development of policy and programmatic efforts to end child marriage, reflecting the inclusion of the elimination of child marriage as Target 5.3 of the Sustainable Development Goals. --- STRENGTHS AND LIMITATIONS OF THIS STUDY ⇒ Our study is a systematic review that was preceded by a broader scoping review focused on research on child marriage more broadly. ⇒ While the study followed guidelines for the conduct and reporting of systematic reviews, it was not based on a prespecified protocol. ⇒ The two-stage approach prolonged the timeline for this study, since the scoping data collection effort took place first, followed by a systematic selection based on programmes addressing social norms. ⇒ The substantial heterogeneity of the reviewed studies regarding their methodological quality, programme components and strategies, outcome measures and level of detail poses challenges to comparative analysis. ⇒ Our review included only experimental study designs for the evidence they provide on programme effectiveness; studies with other designs and a more qualitative approach could offer insights into the mechanisms of norm-change programming specifically.
4 While initially much of this attention centred on the development of laws and policies that establish 18 as the minimum age at marriage, the limitations of this approach have increasingly come into focus. 5 Current research reflects an interest in identifying and addressing the root causes and drivers of CEFMU, including the norms that reinforce girls' and women's low value in society, regulate their sexuality and limit their autonomy. 6 Norms related to child marriage are fundamentally about gender and power. 7 A large body of research has now established the importance of social norms in shaping child marriage, 7 8 particularly those related to gender and power, sexuality and life aspirations. 9 Discriminatory norms perpetuate the view of marriage as the only viable alternative for girls, so working to transform inequitable gender norms and provide education and employment opportunities for girls can improve child marriage outcomes. 10 As a result, addressing norms is a logical and relevant approach to preventing and mitigating child marriage. Norms are relevant in every region, for example, in Latin America 11 and in South Asia and sub-Saharan Africa. 12 Preventing CEFMU and mitigating its effects requires interventions that address these root causes by challenging norms and unequal power relations within families, communities and institutions. Norms interventions in the child marriage space typically work to shift social expectations, transforming how girls think about themselves, how family members and communities think about girls and how institutions treat girls.
The norms that drive child marriage are not always explicitly about child marriage but include concerns about safety, worry about girls walking unaccompanied to school, doubts about whether school is a valuable investment for girls given that they will grow up to be wives and mothers, gendered expectations about domestic and public roles, and the 'ruin' that can come to girls as a result of any sexual activity before marriage. 13 Consequently, programmatic approaches to this work are very diverse. Some interventions target education, showing the role that extending access to schooling can play in shifting the norms that uphold and sustain child marriage in Nigeria and Uganda. 14 Norms regarding sexuality and access to contraception, rather than child marriage per se, seem especially important in Southern Malawi, which has the lowest median age of first marriage in the country, 15 and in Zambia, where the government, civil society partners and young people are working together to challenge norms related to sexuality, limited sexual and reproductive health information and services and limited future aspirations, and to create opportunities. 16 A study in LMICs of parenting programmes designed to prevent violence against adolescents found that those aiming to prevent sexual violence or child marriage generally focused on challenging prevailing norms. 17 This diversity of approaches and lack of consensus has made assessing the overall impact of intervention approaches focused on social norms on child marriage extremely challenging, particularly as these are often packaged together with activities aimed at shifting other behavioural or health outcomes. This systematic review seeks to address this gap by exploring the contribution that interventions working specifically to shift norms make to preventing child marriage. Our research assesses the scope, range and effectiveness of interventions that work to shift norms to prevent child marriage.
We analyse intervention characteristics of programmes and highlight key characteristics of success. Based on our findings, we present a range of recommendations for practice and future research. --- METHODS --- Search strategy and selection criteria The study design follows established policies and guidelines for conducting systematic reviews and used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for reporting. 18 This systematic review builds on an earlier, broader scoping review of child marriage evidence, 4 for which the detailed methodology is published elsewhere and was registered with the Open Science Framework. 19 The scoping review was designed to act as a precursor for a series of systematic reviews that address specific questions related to interventions focusing on child marriage, of which this study is one. Briefly, the broader scoping review involved searching 18 academic databases and included articles with a focus on child marriage from all geographical settings and in four languages for the period January 2000 to December 2019. Research produced prior to 2000 was not included for three main reasons. First, child marriage was not commonly regarded as a human rights issue before 2000, nor did many interventions focus on the practice. Second, the field's understanding of both child marriage and social norms has evolved significantly over the past two decades with regard to the conceptualisation, measurement and types of interventions that have been implemented to change either outcome. Finally, the number and rigour of evaluations of both child marriage and social norm change programming have dramatically improved over the past two decades, meaning that relatively few studies on these topics from the period before 2000 would meet the criteria established for quality in evidence generation today.
As a result, including research conducted before 2000 would reflect a different and outdated general approach to CEFMU, social norm change programming and evaluation approaches. The initial database searches for publications in English were conducted in January 2020. The English-language database searches included PubMed, PsycINFO, Embase, CINAHL Plus, Popline, Web of Science and the Cochrane Library. To identify the grey literature, we conducted targeted hand-searches of the websites of 15 organisations engaged in work to prevent child marriage. To expand our database to cover the literature published after January 2020, we replicated the searches of the academic databases in English through September 2021. We restricted this search to English because the intervention literature was not well represented in languages other than English. In September 2021, we also conducted a final targeted search using the terms 'child marriage', 'prevention', 'norms' and 'intervention' in Google Scholar to identify recent norms-related interventions designed to prevent child marriage. The targeted English-language update and Google Scholar searching initially yielded 10 studies. The use of Google Scholar was meant to address the reality that many norms-related interventions are relatively recent and have yet to be published in peer-reviewed journals. In a departure from other reviews, we included high-quality evaluations published in non-peer-reviewed reports. We also employed this strategy because the longer format of project reports often provides more detail on programme implementation. We supplemented the publications identified through our systematic review with additional documentation when it was available, for example, using information from a midline report to help flesh out information that may have been unclear in a final publication. The broader database of child marriage evidence published in English, Spanish, French and Portuguese included 1278 records.
After reducing this to publications in English only that described child marriage prevention programmes, we were left with 349 studies. We independently screened this pool of studies identified through the database and targeted searching, first by title and abstract and then by full text, to include only those publications that evaluated interventions. In cases where two authors could not reach consensus on a study, the third author contributed to making the final decision. This process of screening for studies with evaluated child marriage interventions yielded 46 records. Finally, we assessed the eligibility of these papers by including only those evaluated interventions that measured social norms and child marriage. We included papers that highlighted norms in describing their theories of change and indicated their intention to address norms in their programmatic activities. Following the approach of Watson, 20 and in keeping with the broader literature on what works to delay marriage, we elected to focus not only on interventions intended solely to delay marriage, but also included those that would drive change in how girls might be viewed or would transform the life choices open to them, and that measured the impact on the timing of marriage. We identified 19 studies that measured both norms and child marriage outcomes. This group of studies was then further restricted to those using randomised controlled trials or quasi-experimental study designs, resulting in a final group of 12 studies featuring high-quality study evaluation designs and measuring both norms-related and child marriage-related outcomes. Established guidelines for the inclusion of quasi-experimental studies were followed. Online supplemental table S1A-C present the databases and search terms used at the two stages of our search.
Online supplemental table S2 summarises the inclusion/exclusion criteria we applied, and online supplemental tables S3 and S4 provide further detail on how evaluation and programmatic quality were assessed. --- Assessing methodological and conceptual quality and rigour The methodological and conceptual quality and rigour of each study was assessed through a combination of a scoring system, where studies received 'scores' based on pre-established criteria, and the expert judgement of reviewers. This approach was necessitated by the diversity of evaluation designs and implementation approaches used across the included studies, meaning that the scoring criteria and approach at times did not fully match the independent assessment of the reviewers. In these cases, the inclusion of 'real-world' assessments on the part of reviewers allowed a more accurate and holistic assessment of quality while retaining the scoring system as the foundation for classifying studies. In sum, this mixed-method approach allowed for the use of a greater range of information in classifying studies while using standard approaches to attempt to minimise reviewer biases. A detailed description of this process for individual studies is provided in the accompanying documentation. The methodological quality and risk of bias for each study was assessed by building on the approach used by Kennedy et al 21 and Malhotra and Elnakib, 22 using seven specific criteria across five domains of methodological rigour, the first being study design, attrition and sample size. Scores for individual items were summed, resulting in a scale potentially ranging between 0 and 7, though in practice the scores assigned to the selected studies ranged between 3 and 6. Using these scores, studies were initially assigned to three categories: low, medium and high quality. Two evaluators independently assessed each study across each domain, with discrepancies or conflicts resolved through discussion.
As a result of these discussions, two additional categories were assigned, medium/high and low/medium, reflecting the nuances of interpretation between reviewers and inconsistencies in reporting. This resulted in two studies being reclassified. In addition to assessing the methodological rigour associated with the evaluation of programmes or policy in the included studies, we assessed the conceptual rigour of the implementation approaches each study employed in seeking to address child marriage through changing related social norms. This step is intended to help clarify how each programme tried to shift norms, to aid in explaining any effect it may have had on norms, and to provide an additional assessment of internal validity for each included study. Because these programmes typically viewed changing norms around child marriage as a crucial prerequisite for changes in child marriage behaviour, examining the rigour of the activities they employed to change norms is key to understanding the causal pathway between programme intervention and effect. Building on the definition of social norm change programming developed by the PASSAGES project, 23 we weighed each programmatic or policy approach according to the degree to which it met the following criteria: 1. Accurately identified, assessed and targeted specific norms driving child marriage. 2. Sought to achieve community-level change beyond the individuals directly targeted by programming or policy. 3. Engaged people at multiple social levels of the society. 4. Sought to confront power imbalances, particularly related to gender. 5. Actively created safe spaces for critical reflection by community members. 6. Aimed to create positive new norms or reinforce existing norms that protect against child marriage. Based on each of these attributes, we developed a simplified scoring system for ranking studies by the degree to which their underlying programme or policy reflected a broader norms-driven theory of change.
It should be noted that while we attempted to use a range of sources to identify and describe programmatic activities that could be seen as designed to shift norms, we had to rely on the programme descriptions provided by the included studies. This scoring system is based on three specific criteria: 1. Did programme norm-change activities include immediate influencers or peers? 1=yes, 0=no. 2. Did programme norm-change activities include a broad range of members of the wider community? 1=yes, 0=no. 3. Did programme norm-change activities substantially include community groups beyond youth themselves? 1=yes, 0=no. Studies were given a score ranging from 1 to 3 and subsequently classified into 'Little/None', 'Limited' and 'Comprehensive' quality. Two of the authors independently assessed the quality of each study, with discrepancies or conflicts resolved through discussion among the reviewers. While the primary focus of this study was the effectiveness of norms-related interventions, the included studies were analysed based on their evaluation and intervention characteristics, with the goal of better understanding the degree to which the activities included in the intervention deliberately and feasibly sought to change social norms around child marriage. The information extracted from each study included the evaluation methodology, the periods covered by the evaluation and by implementation, the intervention location, the implementing organisation, the purpose/goals of the programme, implementation activities, the target populations for the intervention/study, outcome measures capturing CEFMU and related social norms, results and the direction of the findings. As the focus of this review is on the effect of programmes on the social norms that underpin child marriage, we grouped studies by their focus on social norm change, based on their score on the conceptual quality/fidelity measure.
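Both scoring schemes reduce to simple tallies. The sketch below illustrates the logic with hypothetical ratings; the cut-points for the low/medium/high bins and the mapping from the 1-3 norm-change score to the three groups are our assumptions, since the review reports only the resulting classifications:

```python
# Hypothetical illustration of the two scoring schemes described above.

def methodological_quality(criteria_met):
    """Sum seven 0/1 methodological criteria into a 0-7 score and bin it.
    Bin thresholds are assumed for illustration; the review reports only
    that observed scores ranged from 3 to 6."""
    score = sum(criteria_met)
    if score >= 6:
        label = "high"
    elif score >= 4:
        label = "medium"
    else:
        label = "low"
    return score, label

def norm_change_group(influencers, wider_community, beyond_youth):
    """Map the three yes/no (1/0) norm-change criteria to the groups used
    in the analysis; the score-to-group mapping is assumed."""
    score = influencers + wider_community + beyond_youth
    return {3: "Comprehensive", 2: "Limited"}.get(score, "Little/None")

print(methodological_quality([1, 1, 1, 0, 1, 1, 0]))  # → (5, 'medium')
print(norm_change_group(1, 1, 0))  # → Limited
```

In the actual review, the numeric scores were only a starting point: reviewer judgement could reassign borderline studies, which is how the intermediate medium/high and low/medium categories arose.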
This resulted in our creation of three groups: Little/no intentional norm change programming; Limited intentional norm change programming; and Comprehensive intentional norm change programming. The variation in the outcome measures, particularly in social norm change, precluded an assessment using common measures. Consequently, the impact of each intervention on both sets of outcomes is assessed in broad terms, with studies determined to have positive, negative, mixed or null effects. Only results that were statistically significant are included. The grouping of studies by conceptual quality/fidelity allows us to examine whether programme impact varies across groups in ways that suggest particular approaches are more impactful than others. --- Patient and public involvement None. --- RESULTS We begin by describing the characteristics of the studies included in this review that intended to address child marriage norms and evaluated impact for both child marriage and social norms outcomes. These studies stated their intention to measure the following outcomes: preventing or delaying marriage before age 18, participation in the marriage decision, and norms/attitudes related to child marriage. Collectively, these provide a more complete picture of the types of programmes being conducted in this area and appear at the end of online supplemental table S5. We mention the 19 studies because the disconnect between study aims and measurement is an important finding of our analysis. The 12 included studies were concentrated in India (four studies), 24-27 Bangladesh (two) 28 29 and Malawi (two), 25 30 with the remaining studies distributed across other countries. The seven additional studies not included in this review 31-37 were similarly concentrated in three countries: Bangladesh (two), 35 37 Tanzania (two) 34 35 and Mozambique (two), 31 35 with the remaining studies focused on other settings.
The norms and attitudes the 19 studies aimed to address were disparate, as were the programme activities and normative outcomes measured, which may reflect a lack of consensus in the field about which norms matter for CEFMU and how best to measure outcomes. The intervention activities of these programmes are presented in online supplemental table S5. While every study included activities for girls, only two included activities that worked directly with boys 30 38 and only three worked directly with family members. 39-41 After girls, the second most common group with which activities worked was the community overall, with five programmes combining this work with their focus on girls. 24 25 30 38 39 Although all of the included interventions indicated their intention to address norms in their programmatic activities, they diverged across every dimension of our analysis. First, the norms and attitudes they aimed to address were quite disparate, describing efforts to shift norms and attitudes related to decision-making, gender roles, schooling, the right to refuse an arranged marriage, ideal age at marriage for girls and boys, appropriate age at first birth, aspirations for daughters to study beyond secondary school and levels of empowerment. Of the seven studies that were ultimately excluded, six included community members or caregivers and could therefore be considered to be aiming to change broader social norms but failed to measure both norms-related and child marriage-related outcomes. Similarly, the approaches and activities in which interventions were engaged were highly variable. Safe spaces where girls could meet and talk with peers were an approach used by several interventions, 38 41 42 though the link of these activities to broader community or social norms was often poorly described.
Other programmes were explicit in their engagement with family and community members, including health workers, seeking to change their attitudes toward adolescent pregnancy and early marriage and to build their understanding of girls' needs and desires. 24 28 30 38-40 For example, the programme described in Egypt by Sieverding and Elbadawy reinforced safe spaces for girls with a curriculum for boys on gender roles and rights. 38 The IFS study in India explored the benefits of including parents and other community members to influence their attitudes regarding CEFMU, sexual and reproductive health and education. 24 Just as the norms-related programmatic activities are diverse, so too are the normative outcomes on which the programmes focused. Table 2 presents the key outcomes related to social norms around CEFMU, and to CEFMU itself, that were measured and reported in each of the 12 studies included in the final group for this analysis. Only 5 of the 12 studies measured attitudes or norms held by individuals other than the participant girls themselves, including caregivers and other family members 24 26 38 40 or specific groups of women. 27 Five studies measured gender attitudes, 24-26 40 42 predominantly among girls who participated in the programme themselves, while a further two studies measured overall empowerment. 27 29 Four studies specifically measured attitudes or knowledge around age at marriage, either among participant girls, 25 among other family members 38 41 or among a specific group of women. 27 Two of the studies did not report any quantitative normative outcomes related to child marriage, although their programme descriptions emphasised norm change to a significant extent. 28 39 Variability in the normative outcomes, programmatic activities and measurement of impact makes it challenging to compare these norm-change programmes. We therefore created two classification systems that considered evaluation quality and normative engagement.
For the 12 studies for which evaluation quality assessment was possible, 7 programmes were categorised as 'High', 25-27 29 40-42 3 as 'Medium High', 24 28 39 1 as 'Medium' 30 and 1 as 'Medium Low'. 38 The most used evaluation design was a cluster RCT using multiple arms to explore the effect of different combinations of activities, which six studies used exclusively. 24 28 29 40-42 Quasi-experimental designs were exclusively used in three studies, 30 38 39 and two studies relied on natural experiments for their evaluation. 26 27 The one multi-country study included in our analysis used a mixture of different methods, varying by country. 25 Seven studies used a mixed-method approach, 25-27 30 38 39 42 usually supplementing the quantitative findings with additional qualitative analyses. For these same studies, for which the extent of norm change programming was possible to measure, we classified them into 'Comprehensive', 'Limited' and 'Little/None'. As described above, comprehensive programmes were characterised across six factors, including a commitment to working with individuals at multiple levels beyond the individual 'beneficiaries', finding ways to challenge power inequities and promoting critical reflection. Three studies were categorised as comprehensive, 24 30 38 four as limited 25 39-41 and five as having little or no norms-change programming. 26-29 42 The studies varied in their definitions of 'norms-related' outcomes and included a wide range of outcomes that could be related to CEFMU at multiple levels of the socioecological framework, including views of child marriage and girls' prospects not just among girls but among family members, community members or at the institutional level. Table 3 summarises the key findings for each of the groupings by extent of norm change programming.
As noted above, the included studies all demonstrated greater evidence for the effect of programming on the CEFMU outcomes than for those related to social norms. However, there is little evidence of a systematic relationship between the intensity of the norm-change approach used by the evaluated programmes and the associated effects on either social norms related to marriage or CEFMU behaviours. Regarding programmatic impact on norms related to child marriage, 2 out of the 12 programmes had positive and statistically significant effects; 2 had mixed effects; 6 had no statistically significant effects; and 2 did not report effects on norms. Regarding programmatic impact on child marriage or delaying marriage, 4 out of 12 programmes had positive and statistically significant effects; 3 had mixed effects; and 5 had no statistically significant effects.
Three of the programmes were classified as having taken a 'Comprehensive' intentional approach to norm-change programming. 24 30 38 Of these three, only one had any statistically significant effect on the measured norms, 30 though two had a positive and statistically significant effect on the CEFMU outcomes. 30 38 Four studies were categorised as having taken a 'Limited' approach to norm-change programming. 25 39-41 Of these programmes, only one had a positive and statistically significant effect on normative outcomes; 41 one had mixed effects; 25 one had no statistically significant effects; 40 and one did not report effects on norms. 39 With regard to measuring outcomes related to preventing or delaying child marriage, one of the four programmes had a mixed or positive effect; 25 one had mixed effects; 39 and two had no statistically significant effect. 40 41 Of the five programmes with 'Little/No' intentional norm-change programming, 26-29 42 one had a positive effect on measured norms, 27 three had no statistically significant effect 26 29 42 and one did not report effects on norms. 28 Among these five programmes, three had either a positive or mixed effect on CEFMU outcomes, 27-29 and two had no statistically significant effect. 26 42
Across all of the 12 studies in our analysis, several themes stood out. First, we observed considerable agreement on measures of CEFMU across the studies, with the majority focusing on the likelihood of participant girls having been married by the time they turned 18 26 28-30 41 or by the end of the study, which often roughly coincided with age 18 or earlier. 24-27 39 40 42 One study reported on whether participating girls agreed on age 18 or older being the appropriate age for marriage. 38 Second, there was generally stronger evidence of programme effect on the CEFMU outcomes than for social norms/attitudes. Of the 12 included studies, 4 found a positive and statistically significant effect on the CEFMU measure 25 28 30 38 and a further 3 found mixed effects, 27 29 39 compared with 2 positive 27 41 and 2 mixed 25 30 for the social norm outcomes. There was considerable variation across settings in which effects were found, even for programmes that shared common approaches. For example, the Marriage: No Child's Play programme showed positive effects in India and Mali but no statistically significant effect in Malawi and Niger. 25 Third, we identified common elements across the studies that showed mixed or positive impact in changing child marriage norms. The four studies that achieved this differed markedly in the scale and extent of their normative programming.
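The tallies reported above can be cross-checked mechanically. The sketch below (Python) codes each of the 12 studies by its reference number, programming intensity and reported effects, transcribed from the citations in the text; it is an illustrative reconstruction, not the authors' own data table.

```python
from collections import Counter

# Per-study coding transcribed from the review's text: reference number ->
# (norm-change programming intensity, effect on norms, effect on CEFMU).
# Illustrative reconstruction only, not the authors' published data table.
studies = {
    "24": ("Comprehensive", "none",         "none"),
    "30": ("Comprehensive", "mixed",        "positive"),
    "38": ("Comprehensive", "none",         "positive"),
    "25": ("Limited",       "mixed",        "positive"),
    "39": ("Limited",       "not reported", "mixed"),
    "40": ("Limited",       "none",         "none"),
    "41": ("Limited",       "positive",     "none"),
    "26": ("Little/None",   "none",         "none"),
    "27": ("Little/None",   "positive",     "mixed"),
    "28": ("Little/None",   "not reported", "positive"),
    "29": ("Little/None",   "none",         "mixed"),
    "42": ("Little/None",   "none",         "none"),
}

norm_tally = Counter(norm for _, norm, _ in studies.values())
cefmu_tally = Counter(cefmu for _, _, cefmu in studies.values())

# Matches the counts reported in the text.
assert norm_tally == {"positive": 2, "mixed": 2, "none": 6, "not reported": 2}
assert cefmu_tally == {"positive": 4, "mixed": 3, "none": 5}
```

Grouping `studies` by intensity also reproduces the absence of any gradient: the 'Comprehensive' group contributes no positive norm effects, while the 'Limited' and 'Little/None' groups each contribute one.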
Three of the four included an economic component as a key activity of their intervention. Melnikas et al aimed to offer a holistic community package of interventions implemented at multiple levels and across sectors. 25 This included a focus on enhancing access to economic and income-generating opportunities for girls and their families. Munthali et al provided girls with entrepreneurship training and also engaged girls in village saving and loans schemes. 30 And Sivasankaran focused solely on whether girls' and women's tenure in formal sector work influenced the timing of their marriages. 27 While there is insufficient evidence to conclude that economic components are critical to changing norms, their potential is worth further exploration. Two other studies described programmes that offered girls opportunities for savings and found either no statistically significant effects on norms or marriage 42 or effects on child marriage but not on norms-related outcomes. 38
Fourth, very few of these studies clearly attempt to identify an appropriate reference group among which to measure social norms. As noted above, the majority of the studies collect data only from programme participants themselves, with no effort made to collect data from their peers or other influential members in the community. The exceptions to this are those studies that included measures for other family members, 24 38 40 though some studies measured changes broadly among participant and non-participant adolescents and therefore may offer some insight into broader patterns. There appears to be insufficient acknowledgement that girls themselves are not always the primary decision-makers for their own marriages. Most of the attitude/norm outcomes are about the girls' attitudes, which unfortunately may not have a direct connection to their ages at marriage.
Finally, the results do not point to a definitive relationship between the success of programmes in changing social norms related to marriage and actually delaying CEFMU. Of the seven programmes that found either positive or mixed associations between programme activities and CEFMU outcomes, only three also found positive or mixed associations with change in social norms. 25 27 30 Four studies documented statistically significant change in measured CEFMU outcomes only 28 29 38 39 and one only for social norm outcomes. 41 This relationship might have been stronger if the Erulkar et al 39 and Amin et al 28 studies had measured and reported on social norm outcomes.
--- DISCUSSION
The results of our review suggest an inconsistent relationship between interventions that purport to shift norms and child marriage outcomes. Just over half of the studies showed any indication of having influenced child marriage outcomes and, among those, there was no clear relationship between the observed changes in child marriage and shifts in measured norms. Our findings echo prior research showing that norm-change programming has had more success in shifting individual attitudes than in shifting broader norms and related behaviours. 43 However, given the broad consensus in the field around the importance of social norms as drivers of CEFMU, it is surprising that these studies provide only weak evidence on the impact of these programmes on norms, and on the link between shifts in norms and marriage behaviour. In particular, several studies found significant shifts in marriage-related behaviours without the appreciable changes in norms that would be consistent with the broader argument that these norms drive marriage behaviour. 25 27 30 38 Our analysis suggests several potential explanations for these findings.
First, while all of the programmes included here invoked norms as important drivers of CEFMU and indicated their intention to address norms in their effort to delay marriage, they showed a surprising lack of consensus on which norms should be changed, which programmatic activities should be used, and which groups to focus on in programme activities. The field would do well to explore the impact of efforts to shift norms through structural interventions that go beyond social behavioural communications programming. Several of the studies that dropped out between the original 19 included and the final 12 studies of highest quality addressed labour markets, education, legal systems, and marriage and family systems, all of which reflect broader structures. Most of the programmes focused their efforts on adolescent girls themselves, devoting comparatively little effort to activities aimed at shifting beliefs and attitudes of other reference groups important in girls' lives, or to measuring normative change among these other groups. Even when programmes worked at multiple levels, such as through engaging influential gatekeepers in the community, parents or siblings, programme activities aimed at groups other than girls themselves were often superficial, did not focus on influential reference groups, and were not explicitly linked to norms that had clearly been identified as important for child marriage. In other words, norm change in some cases appeared to be almost an afterthought rather than a key focus of the programme.
Second, the intention to measure norm change did not always translate into measurement of norms-related outcomes and impacts. The overwhelming majority of these studies did not collect substantial data from anyone other than adolescent girls, typically those participating in the intervention. This is problematic given that an essential aspect of norms is that they are articulated and enforced by entire reference groups, not individuals.
Yet only five interventions measured norms among people other than girls themselves. One of the programmes with the most comprehensive norm-change approach, 24 for example, did not try to measure change in the broader community, but looked only at girls and caregivers. The authors referenced resource constraints that limited their measurement of impact to adolescent girls, the primary beneficiaries of the programme. They explicitly stated that they were not, therefore, 'able to measure direct impacts on men/boys or the wider set of community members reached by the programme'. 24
The varied ways these studies approach social norm change reflects a lack of clear consensus about what 'social norms' are, how they can be defined and measured, and what approaches to use in attempting to shift them. While this confusion is not unique to child marriage programming and research, 44 the presumed, and much referenced, centrality of social norms to child marriage makes this particularly problematic. From a measurement perspective, studies often relied primarily on attempting to measure individual attitudes rather than distinguishing more precisely between descriptive and injunctive norms, those that are perceived or held by the group, or those that have a direct or an indirect effect on child marriage. Furthermore, when the theories of change underpinning these programmes were described, they often approached norm change as a waystation that would lead to behavioural shifts but did not clearly articulate the necessary steps to complete this causal pathway. This 'black box' approach to norms makes it difficult to identify how specific programmatic approaches are expected to work together to change norms or the degree to which their causal influence on child marriage is mediated by norms.
This gap between the rhetoric of norm change, theories of change and programmatic activities and focus, along with the diversity of activities included, makes a full assessment of the impact of norm-change programmes on child marriage challenging. These studies illustrate the challenges of effectively developing programming aimed at shifting child marriage behaviour through shifting norms. CEFMU is likely influenced by a range of structural and social factors, including norms with direct influence, such as those related to the control of sexuality, and those with a broader and potentially less direct influence, such as those related to education, future employment or, indeed, gender norms writ large. The diversity of activities included by the programmes in our study exemplifies this, ranging from life skills building to vocational skill building to activities intended to improve educational attainment. As others have pointed out, research and programming on norm change would greatly benefit from shared definitions and consistent terminology for the different types of norms and theories of change that precisely link activities to the specific norms they seek to address. 44 This will require greater efforts to understand the varied normative environments within which programmes are implemented, the specific norms influencing CEFMU, the relevant reference groups and the types of interventions likely to be most effective at bringing about change in the targeted norms. The one programme that was high quality in its measurement and engaged in comprehensive norms programming reported no significant effects. 24 On the other hand, the three programmes that showed impact across both normative and child marriage-related outcomes addressed norms to very differing degrees. These two findings taken together suggest that we have much to learn about the relationship between norms-related activities and measurement.
Other factors beyond comprehensive norm-change programming led to programme success. We observe that the programmes that offered economic activity of some kind demonstrated impact across normative and child marriage-related outcomes. 25 27 30 Although the employment 'intervention' described by Sivasankaran was not comprehensive, the impact of formal employment on child marriage merits further exploration. 27 It is interesting, though perhaps not surprising, that these interventions with an economic component seem most effective in delaying marriage, even when they have no impact on norms. In light of the fact that child marriage is often seen as a 'logical' choice to relieve economic pressure on families, this result could be read as evidence that norms are not important and child marriage is just a practical choice. However, girls are much more likely to be married as children, and when under the same pressures, families do not marry off the boys. This suggests that even 'practical' choices are shaped by norms. These normative and practical considerations relate to one another in nuanced ways: for example, relieving economic pressure may reduce child marriage risk for girls if families are primed to make that decision and see the girl as worth investing in. Such instances may show that the norm has already been 'softened', and what people need is the practical opportunity to make the choices they prefer.
One limitation of this review is the substantial heterogeneity of the included studies in their methodological quality, programme components and strategies, outcome measures and level of detail; taken together, these factors make comparison more challenging. A second limitation is that our review was limited to experimental and quasi-experimental study designs because they provide stronger evidence on programme effectiveness.
However, studies with other methodological designs and a more qualitative approach could offer more comprehensive insights in this specific research area. For example, studies often presented qualitative data on community-level norm change, 26 31 but these qualitative findings could not be integrated into our assessment of programme impact. Third, our searches may not have identified all relevant literature on social norms programming and child marriage outcomes globally, especially given the exclusion of non-English-language publications and the growth of interest in this topic in the most recent period. And fourth, despite the fact that this review adheres to established policies and guidelines associated with systematic reviews, a separate prespecified protocol was not published for the study as it builds on a preceding scoping review. 45 In addition, the timeline was prolonged by this nested approach, since the scoping review was conducted first and followed in time by the systematic selection. In a fast-moving field, we may have missed some recent studies, but we hoped to address that by conducting internet searches on the most recent publications, some of which had not yet made it into the academic databases. We are aware of at least one relevant study published since we conducted our analysis. 46
--- Conclusion
Social and gender norms are central to child marriage, and awareness is growing of the potential impact on child marriage of programmes that attempt to shift norms. Yet research and programming on norm change needs shared definitions, terminology and theories of change linking activities to specific norms and reference groups. When norm change is part of a programme's theory of change, then programme activities and their impact on both norms and the outcomes those norms are meant to influence need to be measured.
As others have written, norms research is a field in its own adolescence, 44 making it all the more important to learn from successes in other areas including adolescent sexual and reproductive health and the prevention of female genital mutilation. While momentum has built around better conceptualisation and measurement of norms in preventing child marriage over the past 10 years, greater consensus is needed on prioritised norms and pathways of change. Ultimately, consensus on how to approach normative change in child marriage programming requires building agreement across a range of stakeholders. 47 Conducting a Delphi study to gain greater clarity could contribute to building agreement on a set of domains, questions and a shared framework for measurement. 48 Our findings underscore, for example, the need to test economic interventions as one element of social norms programming. To realise the potential of normative programming in ending child marriage, we call on the field to hold itself accountable to greater conceptual clarity, consistent implementation and more complete and rigorous measurement of norms-change work. When norm change is a programme goal and part of its theory of change, then programme activities need to reflect this and programme impact on both norms and the outcomes those norms are meant to influence needs to be measured. What we found, therefore, was not that norms programming will not work, but rather that almost no one is doing it well. In order to evaluate the range of interventions working to shift norms related to child marriage, the child marriage field needs validated instruments for quantitatively and qualitatively measuring change in social norms. The recent Tipping Point study in Bangladesh moves decisively in the right direction on this, 49 50 pairing a cluster RCT with qualitative data.
Identifying relevant reference groups for girls at risk of child marriage and naming the power-holders and decision-makers in their lives is also essential. Data must be collected from the correct people in girls' lives and should go beyond attitudes to collect data on norms and behaviours, for example, asking about the perceived benefits of delaying child marriage. The potential sustainability of norm change programming, as witnessed by work in related areas such as female genital mutilation and education, has also contributed to interest in harnessing this approach to address child marriage. 6 If norms are 'upstream' from outcomes, investing in norm change programming should theoretically be able to dislodge support for multiple practices that limit health and well-being. This is aligned with the call from the Sustainable Development Goals to invest in activities that promote synergies. Taken together, these factors make programmes to promote norm change very promising and potentially valuable areas in which to invest.
Twitter Margaret E. Greene @Greene_Works and Jeffrey Edmeades @JeffEdm
Table fragment (Melnikas et al, 2021, 25 India):
Evaluation social norm measures: agree boys have the right to refuse arranged marriage; agree girls have the right to refuse arranged marriage.
Result: Mixed. All states: -2 PP; Bihar: 7 PP; Jharkhand: 6 PP; Odisha: -12 PP; Rajasthan: -10 PP. All states: not presented; Bihar: -26 PP*; Jharkhand: 3 PP; Odisha: -8 PP; Rajasthan: 4 PP.
Evaluation CM outcome measure: currently married.
Result: Positive.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not applicable.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement All data relevant to the study are included in the article or uploaded as supplementary information.
Objectives Harmful gender and social norms prescribe divergent opportunities for girls and boys and drive child marriage. This systematic review examines the scope, range and effectiveness of interventions to change social norms and delay child marriage.
Design We systematically assess the contributions made by interventions that work to shift norms to prevent child marriage or to limit its harmful consequences. Our analysis classifies each study's quality in evaluation and implementation design regarding shifting norms.
Data sources We conducted a search of electronic databases (PubMed, PsycINFO, Embase, CINAHL Plus, Popline, Web of Science and Cochrane Library) and grey literature (targeted hand-searches of 15 key organisations and Google Scholar).
Eligibility criteria Included interventions sought to change norms related to child marriage, were evaluated in experimental or quasi-experimental evaluations, collected data on age at marriage and norms/attitudes, and were published in English from January 2000 to September 2021.
Data extraction and synthesis We used a standardised form to extract data from all eligible studies, and double-screened to validate coding and reporting. We classified the studies by low, medium and high quality for evaluation and risk of bias, and separately by the extent to which they addressed social norms.
Results Our assessment of the 12 eligible studies identified little evidence of a systematic relationship between social norms related to marriage and changes in child marriage behaviours. We found stronger evidence of programme effect on child marriage outcomes than on social norms, though only a minority of studies found an effect for either.
Studies that appeared effective in changing child marriage norms varied greatly in scale and extent of programming, and few attempted to identify the appropriate reference groups for measuring social norms.
Conclusions The studies evaluated by our review provide only weak evidence on the impact of interventions on norms, and on the link between shifts in norms and marriage behaviour.
Introduction
Health information-seeking behaviour has been identified as a key component of patient behaviour which assists in the psychosocial adjustment to illness [1]. Patients seek health information to manage their ongoing health, as well as to manage chronic disease [2]. Previous health information-seeking behaviour studies have focused on why patients engage in this behaviour, typically as a coping mechanism [1], from where they get their health information [3] and the kind of information they find [2,4]. These studies often presume a level of patient health literacy sufficient for the patient to successfully navigate a range of activities, from taking medications safely to discussing their health concerns with health professionals [5]. Further, an individual's personal social network has been recognized as an important source of health information [3,6], providing them with the support needed to manage a chronic illness, such as arthritis. Current chronic disease management models assume that the patient has a level of health literacy sufficient to understand the factors that may aggravate symptoms and the types of treatments that are effective in managing symptoms [7]. It is notable also that patients with low health literacy have been shown to have little understanding of the medications they are taking [8], which exacerbates the risk of poor health outcomes related to incorrect use of medications [9]. Further, an individual's health literacy may affect that individual's ability and desire to seek out health information [4,10], and individuals with low health literacy are less likely to engage with written medicine information [10]. There are very few studies which have examined the impact of health literacy on arthritis medication management [5], making this an appropriate area of research to better inform arthritis-focused patient management interventions.
Patients who engage in health information-seeking behaviour tend to use sources of information that require active engagement, such as exposure to mass media, the internet, and web-based social networks [3]. Most health information-seeking behaviour literature examines this behaviour generally and not specifically in arthritis patients. Informal social networks made up of family or friends [11] also provide a range of support, including informational support [12]. The sharing of health information through interpersonal communication has been found to be more prevalent among those with more complex informal social networks [13]. The aim of this paper is to undertake a pilot study using a constructivist paradigm [14] to investigate how arthritis patients' health literacy affected their engagement in arthritis-focused health information-seeking behaviour and how it directed their choice of health information source, importantly including their informal social network.
--- Method
The pilot study was exploratory, with a qualitative design using one-on-one semistructured interviews. A purposive selection strategy was utilized, targeting a convenience sample of community-dwelling adults taking medication prescribed by a health professional for the management of their arthritis and arthritic pain. Culturally and linguistically diverse (CALD) participants who could speak and understand English were also included in the study to maximize variance in health literacy and the range of experiences in managing chronic disease. Community groups in the Illawarra Region, Australia, which catered specifically to people with arthritis or to the elderly taking medication for arthritis, were targeted as part of the recruitment strategy.
The community groups involved were the Illawarra Branch of the NSW Arthritis Foundation, the University of Wollongong Graduate School of Medicine patient volunteers program, and the South-Eastern Sydney and Illawarra Area Health Service Multicultural Health Outpatient Services. Following ethics approval by the University of Wollongong's Human Research Ethics Committee, the primary author attended a scheduled meeting for each of the community groups described above, where she informed approximately 80 group members, in total, about the study at large and provided flyers with contact details for interested participants. Interested participants then contacted the researcher, who provided them with a participant information sheet and a participant consent form, which was signed prior to the interview taking place at their usual community group meeting premises.
Data Collection. The participants completed a brief questionnaire and an audit of their current medications to ensure that they were taking medications for arthritis pain. A one-on-one semistructured 30-60-minute interview then ensued, which was recorded on digital audio equipment to allow for verbatim analysis of the discussion. The semistructured interview questions were designed to elicit the participants' experience in managing their arthritis medications and to describe which of these medications they felt were most effective in controlling their pain. The interview questions also asked the participants to comment on whether or not they discussed their pain medications with others, what they understood about their arthritis, and where they found information about their arthritis and arthritis pain medication.
Data Analysis. Digital audio recordings were transcribed and thematic analysis was conducted by the primary author utilising qualitative analysis software. Major themes and subthemes were inductively drawn from participant responses.
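When transcripts are double-coded, as in this study, inter-coder agreement is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch (Python; the coder labels below are hypothetical, not the study's actual data, and the paper does not state that kappa was the statistic used):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items on which the two coders agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders assigning 10 hypothetical transcripts to health literacy levels.
a = ["L1", "L1", "L2", "L2", "L2", "L3", "L3", "L1", "L2", "L3"]
b = ["L1", "L1", "L2", "L2", "L3", "L3", "L3", "L1", "L2", "L3"]
print(round(cohens_kappa(a, b), 2))  # → 0.85
```

Here the coders agree on 9 of 10 transcripts (raw agreement 0.90), but kappa discounts the 0.33 agreement expected by chance, giving roughly 0.85.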
The coding of participant responses was checked by an independent researcher. For the purposes of this paper, participant responses were categorized into a de facto measure of health literacy and used to describe the extent and makeup of the participants' social networks, as described in the next two sections.
--- Measurement of Health Literacy. Health literacy was estimated and classified using a method adapted from the Field et al. [8] qualitative study into heart failure patients' understanding of medication. The health literacy analysis of the verbatim transcripts of the one-on-one semistructured interviews was independently conducted by two qualitative researchers who achieved a high level of inter-rater reliability. The classification of health literacy was based upon the participants' understanding of arthritis and its management, as well as emergent themes from the verbatim transcripts. Each respondent was assigned to one of these three levels by the primary author and another independent qualitative researcher. The level of engagement in arthritis-focused health information-seeking behaviour at each health literacy level was also examined from the verbatim interview transcripts.
--- Measurement of Social Network. Analysis of the transcribed interviews was used to determine the extent of participants' social networks, with participant responses categorized into three main themes: who was providing the support, the kind of support, and the perceived quality of the support. Sources of informational support were examined and linked to each health literacy category to provide a de facto measure of health information sources.
Table 1: Classification of health literacy.
Health literacy level: Classification criteria
Level 1 (low health literacy): Little or no understanding of health information; use of nontechnical language; corresponding to Field et al.'s "Doing what I'm told". For instance, participants who did not fully understand their arthritis or their arthritis pain medication and were not interested in further treatment details.
Level 2 (intermediate health literacy): Some understanding of health information; use of a mix of technical and nontechnical language; corresponding to Field et al.'s "Leaving it up to your GP". For instance, participants who described good relations with their GP and maintained that they received enough information about their arthritis and arthritis pain medication for their needs.
Level 3 (high health literacy): Good to excellent understanding of health information; use of appropriate technical language; corresponding to Field et al.'s "Candidates for concordance". For instance, participants who had a good to excellent level of understanding about their arthritis and arthritis pain medication and often sourced more information about their condition and its treatment.
--- Results and Discussion
3.1. Results. Twenty-one participants volunteered to take part in the study; however, one participant was excluded for not fitting the selection criteria as she did not have arthritis. Figure 1 indicates that more females than males participated in the study, and Table 2 shows that the majority of the participants were aged 75 years or less and from a CALD background, with equal numbers of participants being educated up to and below year 10, and above year 10. The CALD background participants were post-World War II emigrants from Central Europe, the Mediterranean, and a Balkan state. Participants educated to year 10 and below were more likely to be represented in Level 1 health literacy and included many of the CALD participants.
--- Estimated Health Literacy Classifications. As shown in
--- Estimated Health Literacy and Level of Engagement in Health Information
Participants estimated as Level 3 health literacy demonstrated more engagement with arthritis-focused health information-seeking behaviour. This was reflected in the higher level of technical detail used in these participants' conversations, such as this participant's response to a question about his understanding of his cervical spondylosis: "Oh, it's the narrowing of the channel in the top of my spine, which I understand to be caused by arthritis, osteoarthritis, the buildup of calcium in the bone, gradually narrowing the channel which affects my nervous system and is affecting my nervous system."
Participants who were estimated as Level 2 health literacy appeared to understand the role pain medications played in the broader management of their arthritic pain and demonstrated some engagement in arthritis-focused health information-seeking behaviour. This was demonstrated by participants' awareness of health information, such as consumer medicine information, as evidenced by a comment such as: "I'm sort of fairly strict on that, you know, ... and I'll sort of double check if it's not always on the packet when I come from the chemist ... because sometimes you have to take it with food, sometimes you don't, you have to take it half an hour before food."
--- Sources of Health Information through Social Networks. The quality and complexity of the information sources the participants engaged with appeared to be influenced by the participants' estimated level of health literacy. Furthermore, there appeared to be some differences in regard to the flow of health information within informal social networks.
While participants estimated as low health literacy did not report seeking health information outside their formal doctor/patient relationship, those estimated as intermediate health literacy were more likely to engage with general health information broadcast through television, radio, or newspapers. Some participants with estimated intermediate health literacy also reported that they looked for health information regarding their arthritis and its management on the internet, either through a general search or to follow up on programs they had heard on the radio or television. Those estimated as high health literacy were more likely to engage with specialized medical information, such as medical journals, health support organization reports, or their own medical records, as well as consumer medicine information, as evidenced by this comment: "I belong to the Arthritis Foundation and I get their quarterly magazines." When it came to exchanging arthritis-focused health information within their informal social networks, the flow of information discussed by study participants also appeared to be linked to their estimated level of health literacy. Some participants from CALD backgrounds reported instances of seeking information about pain medications and supplements from friends in their informal social networks, such as: "I take this one because some of my friends say they good." Those who discussed being recipients of advice from members of their informal social network tended to be in the estimated low and intermediate health literacy categories, whereas participants who stated that they gave advice to others in their informal social network tended to be in the estimated high health literacy category. --- Discussion.
The results of this study suggest that an individual's level of engagement with arthritis-focused health information-seeking behaviour is mediated by their level of health literacy and that they have access to a range of information sources through their informal social networks. Those respondents who had been estimated to have high health literacy demonstrated the most engagement in arthritis-focused health information-seeking behaviour and appeared to inquire more about their arthritis and their pain medications. They reported seeking information from a variety of sources, mainly specialist medical texts, health support organizations, and specialist sites on the internet. Patients such as these meet the criteria for the concordance effect that is the desired outcome of health literacy initiatives [8,[15][16][17], displaying characteristics of the informed, activated patient exemplar of the Chronic Care Model [7]. One study examining the health attitudes, cognitions, and behaviours of individuals seeking health information found that health-oriented individuals utilized active communication channels, which required participant involvement in the critical analysis of health information [3]. Certainly, participants in this study with estimated high health literacy reported levels of personal and/or professional interest in health. This would imply that their enhanced ability to acquire and comprehend sources of quality arthritis health information facilitated their management of medications for their arthritis. In contrast, participants demonstrating estimated low health literacy described little or no engagement in arthritis-focused health information-seeking behaviour, accepting only the information received directly from a health professional. These participants did not describe seeking information from any other source and appeared to have fewer sources of informational support through their social network.
A study of health information-seeking behaviour among social isolates found that those with limited social networks were less likely to seek information about their health [18]. However, it may be that those with low health literacy in this study lacked the capacity to seek information from other sources due to lower levels of education and, in the case of this study's sample, language differences, both of which are barriers to health information sharing that have been identified in CALD populations in the United States [13]. The complexity of the language used in written and spoken arthritis health information [5], over and above the barriers of everyday written and spoken language, may also explain some of these results. This study also suggests that the direction of the flow of health information within an individual's informal social network is affected by that individual's health literacy. Participants in this study who exhibited estimated high health literacy reported giving advice to others in their social network. Participants estimated as demonstrating intermediate and low health literacy reported receiving advice from those in their social network who had more understanding of health issues. This is consistent with another Australian study [19], which also found that those with high health literacy considered themselves sources of health information for those in their social network and that those with low health literacy looked to others in their informal social network for health information. More research into the flow of health information within social networks is necessary to confirm these exploratory findings. Given that the majority of participants in this study who demonstrated estimated low health literacy were from CALD backgrounds, these results provide some insight into how patients with CALD backgrounds engage with health information.
The value to patients with low health literacy of having access to those more confident in negotiating the health system is well established [13,18,20]. As there is limited formal evidence about this form of social support in CALD populations in Australia [21], more research into the characteristics of CALD populations' health information-seeking behaviour is required. The results of this study offer insight into how the level of health literacy of patients managing their arthritis pain medication can influence their choice of health information sources. The study is important because if patients with a chronic illness, such as arthritis, rely upon inadequate or inappropriate sources of health information, for example, from their informal social networks, they may inadvertently misunderstand their therapeutic regimen, which could then further exacerbate poor health outcomes and adverse drug events [9]. Health professionals need to be mindful that patients with a chronic illness, especially those with low health literacy, may not come to them to seek clarification about the health information given to them [19]. They also need to be especially mindful when treating patients from CALD backgrounds, as these patients may not ask important questions about their care and/or therapeutic regimen because of their language limitations. In addition, based on the findings from this study, health professionals need to be aware that patients with high levels of health literacy are likely to seek further health information from other sources, which they are likely to share. Health professionals should therefore direct these patients to suitable and reputable health information resources to ensure that they are accessing and sharing good-quality health information. Limitations.
The small number of participants in our exploratory qualitative study limits the generalisations we can make about the role of health literacy and social networks in arthritis patients' health information-seeking behaviour and prevents us from making any causal attributions. The very small proportion of males in the study precludes us from making any specific observations about similarities or differences between genders, and our study design assumed that participants had a reasonable level of competence in spoken English. Further, the limited geographic spread of participants meant that many participants used the same health professionals or belonged to the same community-based organizations. This may account for a social or cultural bias towards some behaviours or attitudes, which might have influenced participant responses. The results of this study suggest that the research is worthy of further inquiry and needs to be validated in a larger study. --- Conclusion Based on the results of this exploratory study, it appears that patients living with arthritis who have limited literacy skills and limited knowledge about their arthritis and arthritis pain medications perceive themselves to be managing their disease and medications according to what the doctor has prescribed. Importantly, however, it seems that they are more likely to ask questions about their chronic condition and its treatment within their informal social networks rather than of their doctors. Further, this study highlights that patients with high literacy skills are more inclined to engage in arthritis-focused health information-seeking behaviour beyond that provided by their doctors. These patients are also more likely to share this information and their knowledge among their informal social networks.
Overall, therefore, even though much larger studies are required to confirm these findings, all health professionals should endeavour to encourage their patients with limited literacy skills, in particular those from CALD backgrounds, to ask questions to ensure that they are managing their chronic disease and pain medication safely and effectively. Health professionals should also ensure that their well-educated patients with high health literacy are sourcing information about their chronic disease and its management from good-quality, reputable sources, especially since it appears that many of them are likely to share their knowledge and understanding among others in their informal social networks. The results of this study have been presented in a poster presentation at the Australian Disease Management Association's 7th Annual National Conference, Canberra, August 2011, and the 10th National Emerging Researchers in Ageing Conference, Sydney, November 2011.
Background. Patients engage in health information-seeking behaviour to maintain their wellbeing and to manage chronic diseases such as arthritis. Health literacy allows patients to understand available treatments and to critically appraise information they obtain from a wide range of sources. Aims. To explore how arthritis patients' health literacy affects engagement in arthritis-focused health information-seeking behaviour and the selection of sources of health information available through their informal social network. Methods. An exploratory, qualitative study consisting of one-on-one semi-structured interviews. Twenty participants with arthritis were recruited from community organizations. The interviews were designed to elicit participants' understanding of their arthritis and arthritis medication and to determine how the participants' health literacy informed their selection of sources of information about their arthritis and pain medication. Results. Participants with low health literacy were less likely to engage in health information-seeking behaviour. Participants with intermediate health literacy were more likely to source arthritis-focused health information from newspapers, television, and their informal social network. Those with high health literacy sourced information from the internet and specialist health sources and were providers of information within their informal social network. Conclusion. Health professionals need to be aware that levels of engagement in health information-seeking behaviour and sources of arthritis-focused health information may be related to their patients' health literacy.
Introduction 1. Background China has numerous mountainous regions, and many villages are scattered throughout hilly and mountainous areas. The geographical terrain hinders transportation connections and cultural diffusion, resulting in remote mountainous areas being less influenced by modernization and urbanization. This, in turn, has preserved the traditional customs, settlement patterns, and architectural styles of these villages. In recent years, there has been a reflection on modernization and a revival of traditional culture, leading to an increased appreciation of the value of traditional villages. As a result, traditional villages have become hotspots for tourism development. However, for remote mountainous areas, tourism development faces multiple challenges. Firstly, inconvenient transportation in mountainous regions limits tourist visits. Secondly, due to the generally low economic levels in mountain villages, the existing resources and facilities are often insufficient to meet the needs of external tourists. Additionally, mountainous villages tend to be small in scale, with scattered scenic spots, which makes management and promotion difficult and hinders the formation of clustered and large-scale tourism routes. These factors result in high initial investments and slow returns in mountainous tourism development, and sustained investment is required to truly promote local economic development. In contemporary China, most tourism village development projects are primarily led by the government and other external funds. The government-led governance model and the profit-driven mentality of capital have led to a lack of sustainability in tourism village development. The ultimate cause of this phenomenon lies in the neglect of endogenous strength within rural areas. In other words, the villagers, who are the main actors in rural areas, are excluded from development decision making, making it difficult to harness some localized resources.
Therefore, encouraging endogenous strength at the local level and achieving a transition from external intervention to internal spontaneous development are key to achieving sustainable development for mountainous tourism villages. --- Research Question Villagers are the primary source of local endogenous strength. The top-down development model in China inherently lacks attention to the opinions of grassroots villagers. Even when villagers are provided with the opportunity to express their views, most are constrained by their educational levels, resulting in fragmented and vague opinions. Relevant authorities find it challenging to incorporate these views, let alone influence the development process. Consequently, villagers perceive their opinions as unimportant, leading to a decrease in their attempts to express their views. Developers and the government, in turn, assume that villagers are disinterested in the village's development and are unlikely to provide constructive input. This detrimental cycle results in the gradual erosion of the village's intrinsic developmental drive. This issue becomes even more pronounced in the context of traditional tourism village development. Therefore, the research question of this study is how to establish an effective means of communication between villagers and professional tourism developers, specifically by developing a village-centric evaluation method for tourism village development. --- Literature Review 2.1. Tourism as a Catalyst for Sustainable Development in Traditional Villages Since 2012, China's Ministry of Housing and Urban-Rural Development, Ministry of Culture, State Administration of Cultural Heritage, and Ministry of Finance have jointly initiated investigations and established a protection list for China's traditional villages. 
Important criteria for determining whether a village is traditional include the integrity and antiquity of existing traditional architectural styles and layouts, as well as the preservation of traditional characteristics in the village's location and structure, which may also include the active inheritance of intangible cultural heritage. For those traditional villages in China, tourism development has become a pivotal method for augmenting residents' income and alleviating poverty. Cultural heritage, as a critical traditional resource, when examined and utilized across various dimensions, can render cultural heritage-oriented tourism development more sustainable and positively impact overall village development [1,2]. In recent years, the Chinese government has demonstrated a consistent dedication to rural areas, and tourism-based poverty alleviation has emerged as a significant measure to combat poverty in China [3]. Numerous Chinese scholars have engaged in discussions regarding various aspects of tourism development in specific traditional villages in China, encompassing the relationship between rural revitalization and rural tourism [4], public policies for traditional village tourism development, comprehensive development frameworks [5,6], the conservation and planning of traditional villages [7,8], and transformations in village spaces and residents' living environments [9,10]. Turning to rural areas in third-world countries, whose situation echoes that of numerous traditional villages in China, scholars have analyzed the challenges and obstacles these nations and regions face in rural tourism development. These challenges encompass institutional and policy irrationalities in rural tourism development, unfavorable operational management, deficiencies in professional knowledge and expertise, insufficient tourism development budgets, and the residents' limited understanding of tourism development.
Scholars have proposed constructive recommendations to overcome these challenges, ultimately driving tourism development and promoting sustainable development in rural areas [11][12][13][14][15]. The European Union has also shown significant concern for the preservation of impoverished rural regions, known as "lagging rural regions" (LRRs), and has extended policy support to bolster the development of these villages [16]. Some scholars have noted that EU LRRs generally possess rich socio-cultural resources, which can be effectively integrated into tourism development, thus invigorating the region's progress [17]. However, scholars have also emphasized that unregulated tourism development can lead to cultural degradation, ultimately undermining the potential for sustaining the tourism industry [18]. Simultaneously, tourism development may negatively impact local social cohesion [19]. Therefore, scholars underscore that, in traditional rural development, the endogenous strength of the local community plays a pivotal role in achieving regional sustainable development [20]. --- Rural Community and Tourism Development Tourism development can significantly contribute to the economic growth of rural areas. Economic-oriented tourism development has been positively correlated with residents' satisfaction. Nevertheless, some scholars have raised concerns, suggesting that a sole focus on economic capital may lead to unsustainable development and foster antagonistic sentiments among community residents. It is imperative to emphasize the utilization of social capital, fully exploring and harnessing the inherent potential within the community [21,22]. The concept of community-based tourism underscores the importance of involving and empowering community residents in the development process [23,24]. Numerous case studies from various countries underscore the pivotal role of community support in the successful execution of rural tourism development [25,26].
Community participation not only fosters tourism development but also enables a people-centered approach to diverse, sustainable development [27]. This has been well demonstrated in Japan and Taiwan, where the fundamental principle of "villager-led" movements has been introduced to transform traditional villages into modern communities, effectively highlighting the substantial role played by the community's endogenous strengths in advancing the sustainable development of these traditional villages [28,29]. To facilitate resident engagement in tourism development, a fundamental understanding of tourism's impact on the local residents is necessary. Discussions surrounding the effects of tourism on community residents date back to the 1960s. With evolving research, the assessment of its impact has transitioned from unidimensional evaluations to more multifaceted and individual-focused analyses [30,31]. For instance, scholars have delved into residents' attitudes towards tourism development and external tourists, providing insights from behavioral and emotional perspectives regarding the relationship between local residents and tourism development [31,32]. Additionally, some researchers have compiled comprehensive summaries of the challenges and restrictions faced by communities in developing countries who engage in the tourism industry, proposing constructive solutions and recommendations [33][34][35]. Furthermore, certain scholars have analyzed the involvement of community strengths, exploring how residents' participation in tourism development could be enhanced by comparing policies [23] and tourism management models [12]. Finally, a particular group of scholars has taken a rights-based approach to scrutinize the impact of tourism development on the local villagers through in-depth individual interviews, demonstrating that the benefits brought about by tourism development are not evenly distributed among all participants. 
This highlights the current issues within the community participation model of tourism development [36]. Decision-making processes and benefit allocation are the cornerstones of community-based development [37]. However, in the context of China, most development projects are primarily driven by government initiatives and external capital. Professional companies typically manage tourism projects. The imbalance between urban and rural development often results in villagers seeking employment opportunities in cities rather than remaining within their villages. Consequently, community residents exhibit lower levels of participation in the decision-making processes and benefit allocation associated with practical tourism development. --- Village Satisfaction and Tourism Village Development Evaluation Method Numerous studies have attempted to understand residents' opinions on tourism development from various angles, but the assessment of residents' satisfaction with tourism development is often approached from the perspective of the overall tourism industry development [38][39][40]. However, most research stops at understanding "public opinion" or merely provides assessments and improvement suggestions for selected subjects [41][42][43]. There is a lack of systematic and widely applicable methods to incorporate these opinions into a feedback system and make specific adjustments to development plans. In China, traditional villages have long been recognized by specialists in architecture, archeology, and the arts as cultural relics [7]. Therefore, physical environment construction is the main component of rural tourism development. Especially in recent years, with the policy of tourism-based poverty alleviation implemented in China, tourism development has come to resemble village built-environment development [6]. Thus, the existing literature on traditional Chinese rural tourism development has largely focused on the field of architecture.
For the evaluation of the built environment, the field of architecture utilizes well-established theoretical frameworks, like architectural programming and post-occupancy evaluation theory [44,45]. This theory offers a rational and scientific method for assessing buildings, optimizing decision-making processes in construction. Moreover, it has been further applied to the evaluation of rural development. For example, Dang proposed a framework for rural architectural programming and post-occupancy evaluation, delving into the social principles underpinning rural architectural programming and discussing specific operational methods [46,47]. After many years of development, the evaluation methods for rural construction in China have gradually become more in-depth and refined [48]. Feedback and influence on rural development via evaluation indicators play a critical role in minimizing decision-making errors during project execution. However, the existing evaluation indicators for rural development often overlook the importance of considering residents' opinions, neglecting the significant role of community participation in sustainable rural development. Therefore, there is an urgent need to establish an evaluation method that reflects residents' opinions to promote their involvement in the development of tourism-oriented rural areas. --- Materials and Methods --- Determination of the Experimental Subjects --- The Jiufeng Mountain Area in Northeastern Fujian, China This article selected the Jiufeng Mountain area as the research area, primarily for the following two reasons: --- • The region is a typical remote mountainous area with rolling terrain, dense forests, and an extensive network of waterways. Due to transportation difficulties, it has experienced the impact of rapid urbanization in China to a lesser degree and in a more delayed manner.
Like most mountainous areas, the traditional settlement patterns and architecture were preserved and have become valuable tourism resources in the present. The villages of the region have become tourist destinations for surrounding cities, thanks to their beautiful natural landscapes and well-preserved traditional features. Pingnan County, as an epitome of tourism village development in Fujian Province, was successfully transformed from an impoverished county into a trending tourist destination. By 2022, Pingnan County, a small county with a permanent population of only 139,000 inhabitants, had built 16 "Gold Medal Tourist Villages", welcomed 4.9 million tourists, and earned CNY 4.05 billion of tourism income in that year. --- Beiqian Village and Longtan Village This article selected Beiqian Village and Longtan Village in Pingnan County as the primary observation and sampling areas, primarily based on the following two reasons: --- • Both villages are the most well-known tourist destinations in the surrounding area, and they have similar transportation conditions, permanent populations, and levels of tourism development. Although there is a significant difference in their overall areas, the core tourism areas are roughly equivalent. Additionally, the original villagers still live in the villages and maintain traditional lifestyles in spite of the tourism development. --- • The two villages have distinct differences in their tourism development paths. By selecting these two villages as the research subjects, it is possible to first conduct a correlational analysis between the evaluation results and the actual situations in each village.
Secondly, a comparative analysis of the evaluation results from the two villages can be performed to effectively test the feasibility of the evaluation method. Beiqian Village was designated as a Chinese traditional village in 2014 by the Ministry of Housing and Urban-Rural Development of China and began its tourism development slightly earlier than Longtan Village. Since 2016, it has focused on the "Yellow Wine + Cultural Tourism" industry, investing CNY 12 million in infrastructure and supporting facilities with government support. By regularly hosting large-scale events, like the Yellow Wine Culture Festival, and collaborating with universities to develop cultural and creative products, it has continuously increased its tourist attraction. In 2022, it received nearly 40,000 visitors, generating a comprehensive tourism income exceeding CNY 6 million. Meanwhile, the annual production of yellow wine is over 1000 tons, with an annual output value of over CNY 30 million. However, the homestay industry is limited, with only eight homestays and 105 beds. The reconstruction of the built environment in Beiqian Village mainly focuses on the improvement of sanitation and landscape, and the reconstruction of the existing traditional buildings in the village is minor, striving to minimize interference with the traditional layout and style. Figure 5 compares the situation before and after the reconstruction of the local outdoor space in Beiqian Village. Longtan Village's tourism development began in 2017, focusing on the cultural and creative industry.
It introduced contemporary artistic creativity into the ancient village by means of "government guidance + artist leadership + villagers' participation", thereby enhancing the quality of the living environment in the ancient village. Simultaneously, an innovative old-house renting policy allows "new villagers" to lease entire old traditional houses for a very low rent. In return, the "new villagers" must invest in the renovation of these houses, and these renovations must focus on maintaining the traditional architectural esthetics. This approach allowed the transformation of these houses into various formats, such as homestays, exhibition halls, and bars. In 2022, Longtan Village received over 300,000 visitors, generating a tourism income of over CNY 13 million and attracting more than 400 "new villagers" and returning job-seekers. Over 20 homestays and more than 30 art-sharing spaces were established. It is commendable that the ecological environment of Longtan Village has been continuously improved rather than destroyed while accepting increasing tourist numbers, and traditional buildings have been better protected due to the increase in economic income.
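Using the 2022 visitor and income figures reported for the two villages, a back-of-envelope calculation makes the contrast between the development paths concrete: Beiqian's wine-and-events model earns far more per visitor, while Longtan's homestay-and-art model is volume-driven. The per-visitor values are derived here and are not figures reported in the source:

```python
# 2022 visitor and income figures as reported in the text; the
# per-visitor income is a derived rough estimate, not a reported figure.
villages = {
    "Beiqian": {"visitors": 40_000, "income_cny": 6_000_000},
    "Longtan": {"visitors": 300_000, "income_cny": 13_000_000},
}
for name, v in villages.items():
    per_visitor = v["income_cny"] / v["visitors"]
    print(f"{name}: ~{per_visitor:.0f} CNY of tourism income per visitor")
```

Roughly 150 CNY per visitor for Beiqian versus about 43 CNY for Longtan, which is consistent with the observation that Beiqian's income rests largely on its yellow wine industry rather than on visitor volume.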
Many renovation activities of the traditional buildings were conducted during the tourism development stage, and some new landscape buildings were built, but most of them follow the traditional style. Figure 6 compares the situation of the river landscape in Longtan Village before and after reconstruction. In comparison, the primary stakeholders in the tourism development of Beiqian Village are the government and the village collective, and their improvement and renovation projects are more holistic. On the other hand, the tourism development in Longtan Village is more decentralized, with multiple stakeholders involved, resulting in a unique and successful tourism village development case.

--- Existing Evaluation System of Villager Satisfaction for Rural Human Settlements

Based on Shi's review of the rural human settlement evaluation research conducted over the past two decades and the evaluation systems from 60 main studies [12], the common evaluation indicators were obtained, including a total of 8 primary indicators and 26 secondary indicators. Among the secondary indicators, 16 indicators were related to physical factors, including living conditions (e.g., per capita house area, daylight and house orientation, quality of construction), infrastructure, and ecological environment, while 10 indicators were related to social factors, encompassing public services, quality of life, social circumstance, social culture, and economic development (e.g., per capita disposable income).

Nevertheless, according to the existing rural architectural programming and post-occupancy evaluation theory, "society", as one of the six primary elements of its programming method system, contains four secondary elements: "public participation, public opinion and social satisfaction", "social demonstration", "community relations and social progress", and "social equity" [18]. If the assessment of the pertinent factors in the evaluation system is not detailed enough, the limited information cannot be effectively utilized for future programming or to establish a closed loop from architectural programming to post-occupancy evaluation. Therefore, it is necessary to further refine the indicators associated with social factors and advance the sophistication of the evaluation system from a theoretical perspective.

--- Exploration of Evaluation Indicators Based on Field Surveys

In order to better understand the situation of tourism village development and villagers' feedback, field surveys were conducted several times from 2022 to 2023 in the Jiufeng Mountain area of Fujian Province. A total of 12 tourist traditional villages were visited, and semi-structured interviews were conducted with 19 representatives, including indigenous villagers, new villagers, village officials, local experts, and government personnel, to understand their views on the development of rural tourism. The survey found that, in addition to the construction of the physical environment, non-physical factors, such as the improvement of income, interpersonal harmony, government and village committee diligence, and the congeniality of the business environment, play pivotal roles in determining villager satisfaction levels with tourism village development.

Based on the field surveys and literature review, the evaluation indicator system was formulated, as illustrated in Table 4, including 5 primary indicators and 19 secondary indicators. Among the secondary indicators, there are 1 measure for overall satisfaction with tourism development, 6 indicators for satisfaction with physical environment construction, 4 indicators for satisfaction with rural scene, 3 indicators for satisfaction with economic and social development, and 5 indicators for satisfaction with social relations. The following factors received special consideration in the formulation process:

--- • Increase indicators related to social relationship satisfaction. The secondary indicators are refined according to the five groups of indigenous villagers, new villagers, tourists, government personnel, and village committee and village officials. Such indicators can not only directly reflect the harmonious degree of interpersonal relationships, but also reflect the fairness of social distribution.

--- • Increase indicators related to rural esthetic satisfaction.
Addressing the prevalent problem of the reconstruction and utilization of traditional dwellings in tourism village development, these indicators aim to reflect the social satisfaction of tourism development through villager satisfaction with the natural landscape, the design of new/rebuilt buildings, and the protection and inheritance of traditional culture. Additionally, the tendency to imitate the design of newly constructed or renovated structures was incorporated to investigate the social demonstration effect of tourism development.

--- • Simplify the relevant indicators for satisfaction with the physical environment construction. Objective evaluations of the physical environment construction generally align with the subjective attitudes of the villagers: as long as the relevant construction meets objective functional standards, villagers are generally satisfied, so there is no need to collect additional subjective opinions. Thus, only essential factors for mountainous-area tourism development, such as road transportation, ecological environment, living conditions, water and electricity, lighting, and communication, were retained.

--- • Simplify the phraseology and the size of the questionnaire. Due to the relatively low education level of elderly villagers, the questionnaire should be easy to understand, and the total number of questions should be kept at 20-30 and completed in 3-5 min to ensure the willingness of villagers to participate and the quality of the answers. The questions applied in the questionnaire are shown in Table 4.

--- Data Collection and Result Analysis

--- Source of Samples and Reliability and Validity Analyses

In July 2023, the research team distributed satisfaction questionnaires to villagers in Beiqian Village and Longtan Village. A total of 58 questionnaires were collected and 53 were valid, including 29 in Beiqian Village and 24 in Longtan Village. All the satisfaction indicators are measured using a 5-point Likert scale in the questionnaire.
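As a minimal illustration of the questionnaire's structure, the secondary-indicator counts described in the text and the scoring of 5-point Likert responses can be sketched in Python (the response data below are hypothetical, not the study's):

```python
from statistics import mean

# Secondary-indicator counts per primary indicator, as described in the text
# (Table 4: 5 primary indicators and 19 secondary indicators in total).
INDICATORS = {
    "overall satisfaction with tourism development": 1,
    "physical environment construction": 6,
    "rural scene": 4,
    "economic and social development": 3,
    "social relations": 5,
}
assert sum(INDICATORS.values()) == 19

# Hypothetical 5-point Likert responses from a few villagers for one indicator.
responses = [5, 5, 4, 5, 4]
print(round(mean(responses), 2))  # prints 4.6
```

Per-indicator means computed this way are directly comparable across villages, which is how the satisfaction averages reported below are read.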
--- Reliability Analysis

Cronbach's reliability coefficient was used in this paper. The data show that the reliability coefficient of the questionnaire was 0.783, greater than 0.7 and close to 0.8, indicating that the consistency of the answers is relatively high and the reliability of the survey is acceptable.

--- Validity Analysis

The Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity were used for the validity analysis in this research. Ten samples were randomly selected for data calculation, and the KMO value of this study was 0.606, greater than 0.5, which indicates a high correlation between the score of each question and the total score. At the same time, the p-value of Bartlett's test of sphericity was close to 0.000, lower than 0.005, indicating desirable structural validity for the survey.

--- Results of the Villager Satisfaction Evaluation

This research analyzed the data through comparative and correlation studies. On the one hand, the satisfaction scores of the various indicators in each of the two villages were calculated; on the other hand, the two villages were compared regarding similarities and differences for the same indicator. First of all, the overall satisfaction with tourism development in both villages was high, with the same average of 4.79, and none of the respondents selected the "dissatisfied" or "very dissatisfied" options. These findings indicate a broad consensus among the villagers regarding their approval of the tourism development initiatives in both villages, signifying relatively favorable social outcomes. In terms of satisfaction with physical environment construction, villager satisfaction with the roads and transportation of Beiqian was significantly higher than that of Longtan, while satisfaction with the water and electricity supply and living conditions of Beiqian was significantly lower than that of Longtan.
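The Cronbach's alpha used in the reliability analysis above can be sketched as follows (the answer data here are hypothetical, not the study's 53 questionnaires):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one score list per question, respondents in the same order.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(q) for q in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point Likert answers: 4 questions x 5 respondents.
questions = [
    [5, 4, 4, 5, 3],
    [5, 4, 5, 5, 3],
    [4, 4, 4, 5, 2],
    [5, 3, 4, 4, 3],
]
alpha = cronbach_alpha(questions)
print(round(alpha, 2))  # prints 0.93
```

A value above 0.7, as with the study's 0.783, is conventionally read as acceptable internal consistency.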
The satisfaction with the ecological environment and network communication of both villages was similarly high. Regarding satisfaction with rural esthetics, the natural landscape, the design of new and reconstructed buildings, and the protection and inheritance of traditional culture were highly appraised by the villagers of the two villages, indicating that the architecture and landscape design in the tourism development process of both villages were appreciated. Meanwhile, the villagers of the two villages exhibited a pronounced inclination to imitate the new and reconstructed buildings. In terms of economic and social development, Longtan villagers' satisfaction with income growth was significantly higher than that of Beiqian Village, but their satisfaction with public entertainment activities was lower than that of Beiqian Village. The two villages' satisfaction with medical and health services was similar. Considering social relationships, Longtan villagers were more satisfied with the government and village committee, while Beiqian villagers were less satisfied. The villagers of both villages had a similar satisfaction level with the relationship with indigenous villagers and were welcoming towards tourists and new villagers.

--- Discussion

--- Validity Analysis of the Villager Satisfaction Evaluation Method

Based on the above experimental results and the actual situation of the two villages and their tourism development trajectories, it was demonstrated that the tourism village development evaluation method proposed in this paper has a certain degree of validity. This validity is primarily reflected in two aspects. On the one hand, the quantitative results of the questionnaire evaluation are consistent with what the research team learned from the interviews with indigenous villagers during the fieldwork. For example, the residents of both villages exhibit a high level of hospitality and a welcoming attitude towards tourists and new villagers. During the visit, as "tourists", the research team distinctly felt the warmth and enthusiasm of the local villagers. When discussing the "new villagers", indigenous villagers also expressed appreciation for the homestays and other investment projects initiated by the "new villagers". Encouraged by them, the indigenous villagers are eager to try similar ventures themselves.
In particular, some villagers who work in the city consider returning to the village for entrepreneurship a better choice if the income is comparable to, or even slightly lower than, what they earn working outside. In terms of villagers' income satisfaction in the questionnaire, satisfaction in Beiqian Village was significantly lower than that in Longtan Village. During the actual visit, Beiqian Village was obviously more deserted, and many tourism projects were in a half-closed state. This is because most of the projects in Beiqian Village are held by the government and relevant tourism companies with higher operating costs, while the indigenous villagers are unable to obtain direct income from tourism development, and the limited flow of visitors is unable to drive enough consumption to support the villagers' entrepreneurial projects. Longtan Village, on the other hand, was significantly more bustling, with a large number of indigenous villagers using the idle space of their homes to operate small-scale entrepreneurial projects, such as tearooms, bars, homestay accommodations, and kiosks, which have flexible opening hours and almost no fixed operating costs. These projects can therefore continue to operate, resulting in a significantly better visitor experience than that of Beiqian Village, as well as attracting a greater customer flow, which in turn boosts the tourism revenues of the entire region. On the other hand, the evaluation method reflects the differences in the tourism development paths of different villages through refined social-level indicators. In terms of villager satisfaction with social relationships, the indigenous villagers in Longtan Village were more satisfied with government departments and village committees than those in Beiqian Village.
The reason for this may lie in the fact that, although the government has played a leading role in the tourism development of both villages, the government in Longtan Village has been more restrained in its involvement, and the logic behind policy implementation has been more transparent. The village committee has also established a harmonious and trusting relationship with both indigenous and new villagers through systemic innovations, such as the "Gong Liao method". In contrast, despite the government's greater financial support for Beiqian Village, indigenous villagers are less involved in rural settlement construction and the planning of tourism activities, and even the interviewed village cadres did not understand the reasons for and the importance of the traditional buildings' protection policies. However, in terms of the degree of imitation of new or renovated buildings, Beiqian Village surpasses Longtan Village. This may be because Longtan Village adopted the "old house renting" model, in which the renovation of houses is led by the personal preferences of the tenants, resulting in a wide variety of architectural styles. The houses of Beiqian Village, on the other hand, follow the government-led top-down model, emphasizing a stronger overall and coordinated appearance, along with better functionality and quality. Therefore, they lend themselves better to imitation and are well received. In summary, in the selected villages, the objective quantitative results obtained in this study coincide with the subjective opinions collected during the field visits, reflecting the effectiveness of the evaluation method in collecting the real opinions of indigenous villagers. The "indigenous villagers' opinions" also correspond to the specific conditions of the tourism village development, which highlights the essential value of the "indigenous villagers' opinions" in tourism development.
--- The Process of Introducing Villager Satisfaction into the Evaluation Method

In addition to proposing the villager-satisfaction-based evaluation methods for tourism village development in mountainous areas, the generation process is also worthy of reference for the same type of research. The process of proposing and applying the evaluation method in this paper was as follows: (1) Formulate the question: How can the opinions of villagers be effectively and adequately expressed? (2) Focus on the research subject: Mountainous tourist villages face challenges during the development and post-operation stages. (3) Theoretical research: Summarize the classification logic and the pros and cons of the commonly used evaluation indicators in existing studies, serving as the theoretical foundation. (4) Experiential research: Gather insights from interviews to understand villagers' perspectives on local rural tourism development, providing an empirical basis. (5) Design the evaluation indicator system: Building upon the theoretical and empirical foundations, propose primary and secondary evaluation indicators and transform them into implementable questionnaires. (6) Correlation analysis: Compare questionnaire results with the actual village conditions to verify the feasibility of the evaluation method. (7) Address the issue: Provide research findings as feedback to the leaders of village collectives and the government departments responsible for rural tourism development, thereby enhancing the villagers' voice in the process. The above process was organized around the core concept of "villager satisfaction" and achieved a complete research loop of the exploration of its meaning, theoretical and empirical research, experimental application, and feedback from case studies. Regarding this practical issue, we achieved a dual exploration of both theoretical and applied methods.
--- Application Scenarios of the Villager Satisfaction Evaluation Method

In fact, in addition to serving as a means of "post-occupancy evaluation", the "villager satisfaction" method has various potential application scenarios. For example, conducting villager satisfaction surveys at different stages of tourism development can provide a better understanding of villagers' attitudes, opinions, and feedback regarding tourism development. During the demonstration and planning stages of tourism development, a survey about existing tourism development examples can be conducted among villagers. This helps to identify their main concerns about tourism development and allows the optimization of tourism project planning based on the feedback collected, ensuring that tourism projects receive community participation and support from the very beginning. Tourism village development is limited by the scale of investment and is generally progressive. Therefore, it is advisable to periodically organize community meetings or collective discussions during the tourism development process, conducting surveys on villager satisfaction. This helps to adjust and improve the project in a timely manner and to avoid the expansion of hidden dangers or the intensification of conflicts. After tourism development has become relatively mature, it is all the more necessary to conduct villager satisfaction evaluations to summarize the experience and assess the effectiveness, providing a reference for future development. Of course, it is important to note that different development stages, paths, and types of rural areas will require tailored evaluation criteria based on field investigations to more accurately and authentically reflect the villagers' opinions.

--- Conclusions

The main concern of this paper was the presentation of the theoretical research and questionnaire design of the evaluation methodology.
Based on the literature research and field surveys in the Jiufeng Mountain area, an evaluation method based on villager satisfaction, containing 5 primary indicators and 19 secondary indicators, was designed for two specific mountainous villages and transformed into a questionnaire provided to the villagers. The application of this questionnaire in Beiqian Village and Longtan Village in the Jiufeng Mountain area confirmed the distinct characteristics of tourism development and development models in the two villages. This, to some extent, validates the scientific and effective nature of the evaluation method. This paper presented a comprehensive research cycle that begins with practical issues and incorporates theoretical and empirical research. It also balances macro-scientific methods with specific analysis and application. Furthermore, the chosen research subjects had a certain level of representativeness within the context of China, making this research particularly relevant and practical for addressing current issues in rural development and tourism village development in China.

--- Data Availability Statement: No new data were created or analyzed in this study; data sharing is not applicable to this article.

--- Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. ---
Villager-Satisfaction-Based Evaluation Method of Tourism Village Development-A Case Study of Two Villages in China.
prestigious institutions and professional connections that could ultimately play an essential role in promoting dancers' success. However, there is a lack of systematic research to quantify the effects of social network connections and prestige on career success in ballet. Hence, by investigating the social drivers of success in ballet, we not only shed light on the social mechanisms of this performing art and our cultural heritage, but can also directly test the tools of the science of science in another creative domain. Our research delves deep into the complex world of the ballet academic system and its relationship with social prestige and career success. While awards and high achievement are undoubtedly crucial in attaining social recognition [40][41][42], we propose the use of network centrality as a more precise indicator of social prestige, as it underscores the critical role of social connections in enhancing prestige [43]. We hypothesize that the prestige of a school facilitates the professional development and job placement of its students, which ultimately elevates the school's external prestige, measured by the number of professional dancers it produces, something that has been observed in other creative fields [12,21,44,45]. Thus, dancers may leverage this principle by affiliating with prestigious ballet academies that provide access to a larger network of dance professionals promoting talented dancers. As a proxy for dance performance, we use the competition outcomes of over 6000 young dancers competing at the Youth America Grand Prix (YAGP) from 2000 to 2021. The YAGP competition system filters the participants to the most promising dancers, hence providing a unique opportunity to capture the desired technical and artistic attributes in the ballet market based on jury assessment. The YAGP awards competition medals based on technical and artistic proficiency, and the Grand Prix, an award based on the subjective appreciation of the jury.
Although multiple biases in performing arts competitions are possible [46], medals and awards have long been used as an objective metric of performance in different domains [47][48][49][50]. Thus, the YAGP competition outcomes represent an objective instrument derived from an efficient system of experts' opinions evaluating ballet performance. Using the YAGP data, we build the network of ballet academies from their students' participation in the competition and create a ranking of ballet academies by their betweenness centrality, which functions as a validated network-based indicator of prestige. Next, we align students' competition outcomes with the academic ranking of their affiliations to predict the job placement of ballet students. Overall, our analysis unveils the ballet preprofessional landscape by underscoring the critical role of school prestige in the selection of dancers, even at an equal level of performance proficiency. Ultimately, our research broadens the scope of science of science methodologies to the performing arts, empowering us to identify the impact of institutions on young dancers' careers. This research also contributes to understanding the multifaceted influences of social prestige on career success. Within ballet, the quantitative understanding of network influences on dancer success may also inform equitable policies for auditions and affirmative action that can support a fairer evaluation of candidates in the ballet industry and other professional areas where creative performance is essential. To the best of our knowledge, our study is the first attempt to systematically investigate the effect of social drivers on success in ballet, and it contributes to the general understanding of the social contexts driving human creativity, broadening the understanding of the evolution of the performing arts and our cultural heritage.
--- Results --- Network of ballet schools The Youth America Grand Prix competition plays a pivotal role in supporting young ballet students by fostering connections with a network of dance professionals and academies of international presence. Our hypothesis is that the systematic positioning of schools as top contenders in the competition establishes a hierarchical prestige within the network of ballet schools. This prestige, derived from competition outcomes, subsequently impacts the social system of the ballet industry, leading to a more systematic distribution of awards, job placements, and resources such as scholarships for attracting talented students. Theories of social stratification vary in their arguments about the formation of social prestige, yet one dominant theme is achievement, conceptualized as a source of social stratification and hierarchical order 41 . Metrics related to achievement have been used to understand the role of prestige for career success in academia 3 and faculty hiring 12 . On the other hand, the implied hierarchical differences among individuals in a social context can be captured by network metrics, like network position or connectedness, which are useful as indirect measures of social prestige 43 . For instance, research on the visual arts has demonstrated that network position objectively captures social prestige and is a good predictor of career success 21 . Grounded in this approach, we construct the network of ballet academies from the YAGP data and create a network-based ranking using each school's centrality, a key contribution of this paper. In the YAGP co-competition network, schools are represented as nodes, and a link is established between two schools if their students were ranked among the top 12 in the same competition venue. 
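The construction of the co-competition network described above can be sketched in a few lines (the venues and school names below are invented for illustration, not taken from the YAGP data):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical top-12 placements per competition venue.
top12_by_venue = {
    "semifinal_NY": ["School A", "School B", "School C"],
    "semifinal_LA": ["School B", "School D"],
    "final":        ["School A", "School D"],
}

# Two schools are linked whenever their students placed in the same venue's
# top 12; repeated co-placements increase the edge weight.
edges = defaultdict(int)
for schools in top12_by_venue.values():
    for a, b in combinations(sorted(set(schools)), 2):
        edges[(a, b)] += 1

print(len(edges))  # prints 5
```

Aggregating over all semi-finals and finals in this way yields the weighted co-competition network on which the centrality ranking is computed.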
This network comprises 1603 ballet schools and 55,778 links, providing a comprehensive representation of the ballet academy ecosystem. The co-competition network is constructed from both the multiple regional semi-finals and the yearly finals competition stages. Thus, the link between two schools captures that both schools were able to produce top dancers under the same competition setting and reflects a degree of similarity in training quality. Connectivity within the co-competition network thus forms an ordered hierarchy in which schools' high achievement contributes to social prestige, which is then directly perceived by others [40][41][42]. Specifically, highly connected schools in the co-competition network are more likely to repeatedly have top dancers in the competition relative to their less connected counterparts. At the same time, these dancers competed against many different schools, thereby increasing their schools' visibility within the community, as opposed to schools who only competed against the same subset of competitors. Finally, we capture network effects in the perception of prestige by noting that the visibility of ballet schools is further influenced by their potential to bridge between communities in the network. We quantify the bridging capacity, and thus the schools' social prestige, by the betweenness centrality in the co-competition network. Betweenness centrality is computed for each node, k, based on the sum of all-pairs shortest paths which pass through that node:

$B_k = \sum_{i \neq j \neq k} \frac{\sigma_{ij}(k)}{\sigma_{ij}},$

where $\sigma_{ij}$ denotes the number of shortest (i, j)-paths, and $\sigma_{ij}(k)$ is the number of those shortest paths passing through node k [51].

[Figure 1 caption: Each node is a ballet school, and two schools are connected if they obtained a top student in the same competition venue. Node size and color reflect schools' normalized betweenness centrality, B_k. Node position is determined by the force-directed graph layout with force estimation θ = 0.5, which emphasizes the separation of nodes into clusters. The weak structure shows dense connectivity within network clusters and sparser connections between clusters and to the periphery; the strong structure comprises 166 nodes and 384 edges. This network representation illustrates the schools' hierarchical structure, explaining the role of network position for social prestige.]

To visually capture the role of betweenness centrality in the network structure, we extract the multiscale network backbone [52]. This method uses a parameter α for the probability of the existence of an edge and reduces the network to the most fundamental structures and hierarchies based on multi-scale interactions and their relative relevance for the network topology. The resulting network is shown in Fig. 1, where we observe that most schools only attain regional success, captured by their low betweenness and weak connections. On the other hand, the network's strong edges connect 166 nodes, forming a core backbone of ballet schools in strategic positions to gain national attention and prestige. To validate the use of betweenness centrality within the YAGP co-competition network as a proxy for the social prestige of ballet schools, we compared the network ranking to a selection of top schools as identified by leading ballet experts. Ballet experts offer a comprehensive understanding of dance and the ballet ecosystem, and are widely recognized for their long-standing existence and influence within the dance community in the United States. Here, we aggregate a list of Top Ballet Schools selected by Dance Magazine and the highly regarded blog A Ballet Education, in total capturing the top 60 ballet schools in the United States.
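A brute-force sketch of the betweenness measure described above, run on a toy network with two clusters and one bridging school (all names hypothetical), illustrates why bridging nodes score highest:

```python
from itertools import combinations

# Toy co-competition network: a left cluster {A, B, G}, a right cluster
# {D, E, F}, and school C bridging the two.
graph = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "G": {"C"},
    "C": {"A", "B", "G", "D"},
    "D": {"C", "E", "F"},
    "E": {"D", "F"},
    "F": {"D", "E"},
}

def shortest_paths(g, s, t):
    """All shortest s-t paths, by enumerating simple paths (fine for toy graphs)."""
    paths, stack = [], [[s]]
    while stack:
        path = stack.pop()
        if path[-1] == t:
            paths.append(path)
            continue
        for nxt in g[path[-1]]:
            if nxt not in path:
                stack.append(path + [nxt])
    best = min(len(p) for p in paths)
    return [p for p in paths if len(p) == best]

def betweenness(g, k):
    """Unnormalized B_k: sum over pairs i != j != k of sigma_ij(k) / sigma_ij."""
    total = 0.0
    for i, j in combinations(g, 2):
        if k in (i, j):
            continue
        paths = shortest_paths(g, i, j)
        total += sum(1 for p in paths if k in p[1:-1]) / len(paths)
    return total

scores = {node: betweenness(graph, node) for node in graph}
# The bridging school "C" scores highest, mirroring the claim that bridging
# capacity between communities signals social prestige.
print(max(scores, key=scores.get))  # prints C
```

On real networks of this size one would use an efficient algorithm such as Brandes'; the exhaustive enumeration here is only to make the definition concrete.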
We then quantify the extent to which the most prestigious schools, as ranked by betweenness centrality, recover the experts' opinions using the AUC: the probability that our measure ranks a school listed in the Top Ballet Schools higher than a school not on that list. The AUC is a score between 0 and 1, where a value closer to 1 indicates a higher probability of correct classification, while a score close to 0.5 indicates that the measure performs no better than random guessing. We find an AUC of 0.75, indicating fair alignment between betweenness centrality and the experts' assessment of the social prestige of ballet schools. Further, betweenness centrality performs better than simpler measures of school prestige and achievement, including the ratio of awards won or the co-competition degree. While achievement is certainly a crucial factor in attaining social recognition, our findings suggest that betweenness centrality offers a more accurate measure of social prestige, as it captures the critical interplay between social connections and prestige. As a whole, our results provide evidence that key network patterns, such as bridging between communities, are closely related to schools' social prestige in the YAGP co-competition network. These findings highlight the utility of network analysis in understanding the relationship between achievement and social prestige.

--- Career success of ballet dancers

The hiring process for ballet dancers is limited in opportunities and influenced by a variety of factors, including training technique, technical mastery, artistic ability, and even demographics. However, we argue that social prestige plays a significant role beyond performance or ability in predicting the success of dancers' careers. To understand the influence of social prestige on successful job placement, we align the aggregated competition outcomes of the 6393 students within the professional age range with the highlighted jobs reported in the Success Stories by the YAGP.
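The expert-list validation above amounts to a standard ROC AUC computation; a minimal sketch with scikit-learn, using invented prestige scores and an invented expert list:

```python
# Sketch (assumed data): comparing a centrality-based ranking against an
# expert list with ROC AUC, as in the validation described in the text.
from sklearn.metrics import roc_auc_score

# Hypothetical normalized betweenness scores per school.
prestige = {"A": 0.9, "B": 0.7, "C": 0.4, "D": 0.2, "E": 0.1}
expert_top = {"A", "B", "D"}  # hypothetical expert-selected schools

y_true = [1 if s in expert_top else 0 for s in prestige]
y_score = list(prestige.values())
auc = roc_auc_score(y_true, y_score)
```

Here the AUC is exactly the probability that a randomly chosen expert-listed school outranks a randomly chosen unlisted one, matching the interpretation used in the text.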
In total, 385 young YAGP alumni received a job placement in a dance company. Surprisingly, 22% of YAGP alumni with a job placement did not receive any award in the competition, while 10% won in both the semi-finals and the finals, and 9% were finalists but were awarded only in the semi-finals. Moreover, the majority of dancers who received a job placement won at least one award in the semi-finals but did not advance to the finals. This breakdown suggests that there are different routes, and factors other than achievement, driving the selection of dancers towards a job placement in a ballet company. To investigate the intertwined effect of individual achievement and social prestige on job placements, we build a logistic regression model to predict which students are placed into a dance company job. Our dependent variable is success S, measured as a binary outcome, where S_i = 1 if student i obtained a job placement in a ballet company and S_i = 0 otherwise. The independent variables include the aggregated measures of students' achievement within the YAGP competition, such as total awards by type and the total number of competitions, as well as the normalized and re-scaled schools' betweenness centrality measure for social prestige. To control for potential confounding factors, we also include a control variable for the student's gender. Our primary model is specified below. We observe a strong positive effect of prestige on job placement. Moreover, our analysis reveals a significant increase in the probability of job placement as schools' prestige increases. For example, consider two comparable ballet dancers, Lauren and Juliet, who both won one gold medal after two competition appearances, but who attend schools of differing prestige: Lauren attends a school with prestige 0.87, while Juliet's school has a prestige of 0.09.
Our logit model predicts that, despite their identical competition performance, Lauren's probability of a job placement is 2.25 times higher than Juliet's. Next, we test for the potential effect of advancing to the competition finals on job placement by adding a dummy variable for being a finalist or not. In this second model, we observe a strong effect on the probability of a successful job placement, comparable to the effect of being affiliated with the most prestigious schools. This comparison suggests that being a finalist can greatly enhance the career prospects of talented students who attend less prestigious schools, and highlights the significant impact of high performance on job placement.

The primary model specification reads

Pr(S_i = 1) = Logit^{-1}(β_1 Gender_i + β_2 Bronze_i + β_3 Silver_i + β_4 Gold_i + β_5 GrandPrix_i + β_6 Competitions_i + β_7 Prestige_ki + ε_i).

Figure 2. Probability of job success in ballet. Success is defined in a binary fashion, where P = 1 if the student obtained a job placement in a ballet company and P = 0 otherwise. (A) Shows a significant positive effect of increasing school prestige on the predicted probability of a job placement. (B) Shows exponentiated odds ratios with corresponding 95% confidence intervals for the effect on job placement of each additional unit of competition outcomes and institutional prestige; a baseline of 1 indicates no effect. The Grand Prix has the largest effect among award types, long competition trajectories can be detrimental for job placement, and being a finalist is comparable to being affiliated with a highly prestigious school. Model coefficients are reported in Table 1.

Our logistic model can also reveal more detailed effects of medals and competitions on job placement.
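A logit specification of this form can be sketched with statsmodels; the data below are randomly generated placeholders, not the YAGP records, so the fitted coefficients are purely illustrative:

```python
# Sketch of a logit model with the covariates named in the specification
# above; all data here are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "Gender": rng.integers(0, 2, n),
    "Bronze": rng.poisson(0.5, n),
    "Silver": rng.poisson(0.4, n),
    "Gold": rng.poisson(0.3, n),
    "GrandPrix": rng.integers(0, 2, n),
    "Competitions": rng.integers(1, 6, n),
    "Prestige": rng.random(n),
})
# Placeholder outcome loosely tied to prestige and gold medals.
lin = -2.0 + 2.5 * df["Prestige"] + 0.5 * df["Gold"]
df["S"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = smf.logit(
    "S ~ Gender + Bronze + Silver + Gold + GrandPrix + Competitions + Prestige",
    data=df,
).fit(disp=0)
odds_ratios = np.exp(model.params)  # exponentiated coefficients
```

Exponentiating the fitted coefficients turns them into odds ratios, the effect sizes discussed in the text.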
Intuitively, examining medals by type, the odds ratio increases with medal importance: winning a bronze medal increases the odds of a job placement by 30% compared to a no-medal baseline, while one additional gold or silver medal increases those odds by about 50%. The greatest impact of awards on a student's odds of attaining a job placement comes from winning the Grand Prix, a special recognition based on the jury's subjective appreciation, which increases the odds of a successful job placement by 67%. This suggests that the jury's recognition of a dancer is highly aligned with the value system adopted by ballet companies, much more so than winning multiple competition medals awarded on a technical scoring system. Our analysis also highlights an unexpected finding: a long competition career may negatively impact job placement. On average, students participate in two semi-final competitions regardless of their job placement outcome. However, our analysis shows that each additional semi-final competition decreases the chances of a job placement by 18%, which indicates that students who participate in multiple competitions may not improve their chances of being recruited. Overall, we find that school prestige has the largest effect in determining job placement, with the odds increasing by over 200% for students who attend the most prestigious schools, and this effect is robust across all models. Yet, while our results also emphasize the importance of high performance as a key factor for career success, these models are unable to disentangle potential interactions between performance and prestige. To further elucidate the role of school prestige in individual job placement, we conduct an experiment in which we match students who have identical medal and competition counts but who differ in their school's prestige. Here, the YAGP medal counts function as a proxy for dancer ability, allowing us to measure the influence of prestige beyond performance.
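The conversion between logit coefficients and the percentage effects reported above is simply the exponential map, OR = exp(β); the coefficients below are back-solved from the percentages in the text, purely to illustrate the arithmetic:

```python
# Illustrative only: coefficients back-solved from the reported effects
# (+30%, +50%, +67%, -18%), converted to odds ratios via exp(beta).
import math

effects = {
    "Bronze": 0.2624,        # ln(1.30)
    "Silver/Gold": 0.4055,   # ln(1.50)
    "GrandPrix": 0.5128,     # ln(1.67)
    "Competitions": -0.1985, # ln(0.82)
}
odds = {k: round(math.exp(b), 2) for k, b in effects.items()}
print(odds)  # {'Bronze': 1.3, 'Silver/Gold': 1.5, 'GrandPrix': 1.67, 'Competitions': 0.82}
```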
We consider a binary treatment status denoted as Y_i = 1 for students affiliated with a prestigious school and Y_i = 0 for students who attended a less prestigious school. The subset of prestigious schools comprises the top 5% of the network-based ranking of prestige. Under this criterion, we assign 93 top schools as Y = 1, resulting in 2301 treated students and 3780 controls. We match the observations with the exact matching method using MatchIt 53. The exact matching is performed over the quantified variables of individual achievement: the total number of each competition medal and the total number of competitions, both counted only in the semi-finals. Finally, the matching estimator can be described as

ATE = (1/N) Σ_{i=1}^{N} [E(S_i | Y_i = 1, X_i) − E(S_i | Y_i = 0, X_i)],

where S is the job placement outcome, Y is the treatment indicator, X contains the vector of covariates used for exact matching, N is the number of subclasses formed in the matching process, and j is the number of controls used to match the treated observation. We compute the matching estimate specifying the Average Treatment Effect as the estimand and heteroscedasticity-consistent standard errors based on subclasses 54. We observe that, when comparing equally skilled dancers, i.e. students who have exactly the same competition outcomes, there is a significant increase of 65% in the odds of obtaining a job placement for those who attended a prestigious school. This smaller effect size compared with logistic regression model 2, where we observe an effect of 200% on the odds of obtaining a job placement, occurs because the matching experiment accounts for potential interactions between school prestige and medal counts by comparing equally skilled individuals within subclasses. Our results show that the effect of social prestige is reduced from a 200% increase to a 65% increase after matching based on performance, suggesting that we successfully decreased the bias in our estimation due to observable confounders.
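The paper performs exact matching with R's MatchIt; a minimal pandas analogue (with toy values, not the study data) groups students into subclasses with identical achievement covariates and averages the treated-control outcome difference:

```python
# Sketch of exact matching: treated and control students are compared only
# within subclasses sharing identical covariates. Toy data for illustration.
import pandas as pd

df = pd.DataFrame({
    "bronze":  [1, 1, 0, 0, 2, 2],
    "gold":    [0, 0, 1, 1, 0, 0],
    "treated": [1, 0, 1, 0, 1, 0],   # 1 = affiliated with a top-5% school
    "placed":  [1, 0, 1, 1, 0, 0],   # job placement outcome S
})

covs = ["bronze", "gold"]
# Subclasses: students with exactly matching achievement covariates.
diffs = df.groupby(covs)[["treated", "placed"]].apply(
    lambda g: g.loc[g["treated"] == 1, "placed"].mean()
            - g.loc[g["treated"] == 0, "placed"].mean()
)
ate = diffs.mean()  # average treated-control difference across subclasses
```

A real analysis would additionally weight subclasses and compute heteroscedasticity-consistent standard errors, as the authors do with MatchIt.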
These findings suggest that even though dancers can obtain a similar number and type of competition medals, an indicator of similar ability and performance, their affiliations play a crucial role in their careers, ultimately influencing their professional positioning in a ballet company. Given the positive impact of school prestige on job placement, we further investigate whether students who change schools move to a more prestigious school. While 85% of all participants reported only one school affiliation, the remaining 15% attended between two and five schools. Of the students who changed schools, 85 obtained a successful job placement. To capture the difference in schools' prestige, we first measure the change in prestige between students' first and last schools. Then, pairing each student by their first and last school, we find a difference of 0.086 in prestige between the last and first school, indicating that students tend to move to a more prestigious school. We further examine the school change by comparing the first and last schools by students' job placement outcome. A two-way ANOVA on the difference in schools' prestige reveals that changing schools is associated with an increase in prestige in general. Moreover, we observe a larger increase in school prestige for students who obtained a successful job placement. The interaction effect was also significant. The difference in the change of schools' prestige for each group can be seen in Fig. 3. Finally, we investigate the impact of awards on the careers of students who change schools. For each of the 795 students who changed schools only once, we control for the highest award received while at the pre-change school.
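The school-change comparison can be sketched as follows; the prestige values and award labels are invented for illustration, and the Kruskal-Wallis test mirrors the kind of nonparametric robustness check used in the analysis:

```python
# Sketch: per-student change in school prestige, plus a Kruskal-Wallis test
# of whether that change differs across prior-award levels. Toy values.
import pandas as pd
from scipy.stats import kruskal

moves = pd.DataFrame({
    "prestige_school1": [0.10, 0.20, 0.05, 0.30],
    "prestige_school2": [0.40, 0.25, 0.50, 0.28],
    "top_award_school1": ["none", "bronze", "gold", "bronze"],
})
# DeltaPrestige > 0 means the student moved to a more prestigious school.
moves["delta_prestige"] = moves["prestige_school2"] - moves["prestige_school1"]

groups = [g["delta_prestige"].values
          for _, g in moves.groupby("top_award_school1")]
stat, p = kruskal(*groups)
```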
We then measure the difference in school prestige for each student as ΔPrestige = Prestige_School2 − Prestige_School1; thus ΔPrestige > 0 reflects an increase in affiliation prestige, ΔPrestige < 0 a decrease, and ΔPrestige = 0 no change. Using a one-way ANOVA test, we observe no effect of the highest award obtained at School 1 on the prestige of School 2 (F = 0.565, p = 0.687). Because ΔPrestige is not normally distributed, we confirmed this analysis with a Kruskal-Wallis test and again found no statistical effect (test statistic = 4.5836, p = 0.3327). These results suggest no relationship between previous awards obtained at School 1 and the change in prestige at School 2, and may be an indicator that the movement of students to more prestigious schools is driven by other mechanisms, such as self-selection or peer effects. The distribution of ΔPrestige across award levels obtained at School 1 is shown in SI Fig. S7. Overall, our results emphasize the importance of social prestige for a successful job placement in ballet and show that students may gain access to more prestigious institutions over time.

--- Summary and discussions

In summary, our research highlights the usefulness of science of science methods as an efficient tool to quantify career patterns in creative professions that could not previously be elucidated. The joint use of science of science and network science allowed the identification of the leading ballet academies in the US. This contributes to expanding the general understanding of the arts academic system in the US and its relationship with reputation and prestige 55. Moreover, our work also demonstrates that features of artistic careers can be quantified, and reinforces previous efforts by researchers investigating the different factors driving the evolution of the arts in an objective fashion 49.
Our work unveils the importance of both individual competition performance and schools' prestige as predictors of successful job placements in ballet. By systematically measuring schools' prestige through network analysis, we demonstrate that social prestige is predictive of greater jury recognition of students, competition advancement, and better career prospects. As a whole, we show that the social network remains essential in shaping success in ballet's modern era, and illustrate the potential of data-driven methods to objectively analyze these effects in the performing arts. The pursuit of a successful career in ballet often requires young dancers to give up their childhood, as demanding training regimens are essential to attain the level of athleticism and motor control necessary to execute complex, yet artistic, movements and sequences. Despite the rigorous physical preparation, the history of ballet suggests that the selection and advancement of dancers is influenced by more than just performance ability, and is strongly shaped by the prestige of social and professional connections. In the modern era, dancers can leverage this principle by affiliating with prestigious academies that provide access to the network of experts who play a critical role in identifying and promoting rising stars. Through our examination of the network of ballet academies in the United States, we provide a network-based ranking of these academies and reveal the hierarchical social stratification of prestige within the ballet academic environment. This validated network-based measure of prestige in ballet complements similar measures of prestige in academic careers 12,44,45, the visual arts 21, and the movie and music industries 15. For instance, being central in a collaboration network of performing artists correlates with better allocation of resources and greater impact of creative performance 20,22,23.
In the field of visual arts, affiliation with the top 20% most prestigious art galleries and museums predicts a 58.6% higher individual reputation, which also relates to higher sales rates and a longer career 21; our results show a 65% increase in the odds of securing a job placement in a ballet company for dancers affiliated with the top 5% most prestigious schools. A similar effect of early career recognition and institutional prestige has been reported in academia for career development and scientific impact 56,57.

Figure 3. Change in school's prestige for transfer students. The change in school's prestige for students who attended more than one ballet school during their participation in the competition. We observe that students move to more prestigious schools, and that students with a successful job placement move to even more prestigious schools compared to those without a job placement.

Interestingly, our study emphasizes the short-term effect of pre-professional competition awards and a negative effect of multiple competitions on the successful job placement of dancers in their early careers. In contrast, similar analyses of academic careers suggest a cumulative advantage of early achievement for future rewards and recognition 58,59. However, our analysis only captures the short-term effect of awards and social prestige on job placement, and may be subject to a selection bias in which successful dancers are not incentivized to compete in additional competitions. Future research can help reveal dancers' career dynamics and the cumulative advantages of early social recognition for promotion and role allocation within the context of company turnover rates and market demands.
Overall, the nuances of network effects across creative domains raise the question of the role that social complexity plays in career success, also considering other factors such as the nature of connections formed over time 60 or embedded formal and informal norms 61. Thus, additional comprehensive longitudinal data on dancers' careers and company structures could help further investigate the long-term effects of dancers' achievements and institutional prestige on career development and longevity. The ballet industry is renowned for its limited job opportunities and high competitiveness. Our research shows that ballet companies often exhibit selection biases based on dancers' affiliations. This is a common issue in competitive settings, where evaluators find it challenging to differentiate between similarly talented candidates 38,62. In such cases, evaluators tend to make their selections based on social cues, such as the prestige of affiliation, and on personal biases. Thus, the relationship between affiliation prestige and dance ability is complex and may involve reverse causality. On one hand, a prestigious institution may attract high-quality dancers by means of specific recruitment criteria, which can in turn reinforce dancers' prestige and provide access to better training opportunities. On the other hand, a high-quality dancer may also enhance the prestige of their affiliation as a result of their talent. To counteract selection biases, an adequate implementation of blind auditions could increase fairness in the selection of talented candidates from less prestigious institutions 63. Several limitations of our research should be taken into consideration. First of all, our data are limited to the YAGP competition outcomes, and we use a school-level metric of social prestige to measure the impact on successful hiring at the individual level.
In addition, metrics of individual performance may also be subject to endogeneity between high achievement and job placement. While our matching experiment controls for equal individual performance, the experiment assumes that the treatment of being in a prestigious school does not depend on the treatment assignment of other individuals. However, ballet schools may be able to accommodate only a fixed number of students, and elite schools may restrict enrollment to maintain exclusivity or prestige, which may limit the treatment assignment of being in a prestigious school. Thus, the assumption of no interference in treatment assignment may not hold in the context of ballet schools, and specialized statistical techniques may be required to address this issue in further research. Moreover, our analysis does not account for omitted variables and unobserved factors such as certain standards of beauty, behavior, technique style and repertoire, years of experience in ballet competitions, personal choices, market demands, and other attributes that ballet companies may consider in their hiring process. Similarly, the data analyzed here hold no information about the judge pool of the competition or the hiring teams of ballet companies, which may influence the allocation of awards and job placements. The nature of the data also limits our ability to measure peer effects on competition or career achievement 64,65 derived from the competition setting. Yet, given that our measure of prestige is aggregated over the competition setting, it may partially capture these potential peer effects. However, a deeper investigation of this matter would help us understand how dancers' decisions are shaped by school changes, persistence in the competition, or the pursuit of a professional career in relation to their local network 66 and the behaviors of their peers.
Similarly, we were unable to capture the influence of other individual dancers' rewards, including scholarships for summer or yearly training at prestigious academies. Although our findings provide insights into the potential benefits of affiliation with a prestigious school for career success in the ballet industry, they only reflect the hiring process for YAGP participants and may not be representative of the entire population of young ballet dancers. Also, there are several other U.S.-based and international competitions that substitute for the YAGP, like the World Ballet Competition or the Prix de Lausanne, which could similarly influence student career outcomes. While our measure of success currently focuses on job placements as company dancers, it is important to recognize that a successful career in ballet can encompass a variety of roles, including teaching, choreography, and administrative duties. Therefore, there is a need for more comprehensive data to investigate the career paths of ballet dancers, from pre-professional to professional levels, allowing our definition of success to include diverse career paths. With richer data, we can further investigate the complexity of human capital and the job market 67 within creative professions. This exploration would allow us to map the socioeconomic variables shaping the structure and evolution of dancers' careers, while also exploring the interplay between factors like school choice, institutional prestige, skill hierarchy, and career success. Lastly, we hope that this work draws attention to how important school choice is for dancers' futures, and to how interdisciplinary research contributes to the understanding of human creativity at a social level, which can ultimately inform our understanding of the underlying mechanisms driving the evolution of the arts and our cultural heritage.

--- Methods

--- Dataset

We use publicly available data from the YAGP online platform 68.
The data used in this study were collected from the YAGP Winners' report and the alumni success stories. We employed the BeautifulSoup Python library for web scraping 69, adhering to ethical guidelines and the terms of service of the platform. To protect the privacy interests of the dancers, their names were anonymized by converting them into sequential numbers, which facilitates their handling, and the identity key was stored separately. Our data collection and research methods were approved on January 18th, 2023, by the Institutional Research Ethics Committee of Universidad del Desarrollo, in Chile. In addition, the use of the public data resources was authorized by Larissa Savaliev, director of the YAGP. The data contain the competition results of 10,686 students and 2402 schools participating from 2000 to 2021. We subset the data to include only the 6363 students listed at competition venues within the United States for a robust representation of the competition system. This selection comprises students in the professional age range, which filters out students from the 'Pre-Competitive Age Division' after 2014 and the 'Junior Age Division' after 2019. All students from the 'Senior Age Division' are considered in this analysis. In total, our student population is 7% Pre-Competitive Age Division, 28% Junior Age Division, and 65% Senior Age Division. To disambiguate students and schools, we first checked for misspellings and punctuation. We then performed an exact name matching that leverages middle names and/or initials to distinguish identity, for both students' and schools' names. The final data contain 6475 participants, of which 6393 students are affiliated with any of the 1603 ballet schools found in the data. We infer the gender of students using the gender package for R, a method of binary gender inference that matches names with their gender as found in the package's standardized databases 70,71.
This method's estimation uses the probability of finding a gender assigned to a given name; when the probability is larger than or equal to 0.7, the gender is assigned to the name tested. Gender could not be identified for only 0.008% of students, and these were removed from the dataset. It is important to emphasize that the inferred gender does not refer to the sex or self-assigned gender of dancers, but serves as an estimate of the social construction of gender. Also, students' reported gender can be confirmed in the YAGP website records if necessary. Overall, women represent 83% of the total population, reflecting a self-selection gender bias embedded in the competition system.

--- Measure of social prestige

The network of ballet schools is represented as G = (K, V), where K is the set of schools and V the set of connections between schools. Hence, there is an edge v ∈ V between two schools k_1, k_2 ∈ K if their affiliated students are listed as top students in the same competition venue and year. From this network, we compute the unweighted betweenness centrality following the definition given above 51. Betweenness centrality is then normalized with the min-max scaling method to lie in the range [0, 1], where B_k = 1 corresponds to the most central school. We then order the schools by their normalized centrality and create a ranking list using a dense rank function, which generates rank ties for observations with the same centrality values. The rank r_k of a school k is assigned in descending order of centrality, so that r_k = 1 corresponds to the largest centrality value and r_k = 945 to the lowest in the set of schools K. We provide a detailed explanation of the validation methods regarding the ranking of social prestige in SI, Section S2.3, where we compare our ranking with third-party lists of prestigious schools selected by dance experts.
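The normalization and dense-ranking step can be sketched with pandas; the centrality values below are placeholders:

```python
# Sketch of min-max scaling followed by a dense rank, as described above.
# Toy centrality values; ties ("A" and "B") share a rank.
import pandas as pd

bc = pd.Series({"A": 8.0, "B": 8.0, "C": 2.0, "D": 0.0})
# Min-max scaling to [0, 1]; B_k = 1 is the most central school.
B = (bc - bc.min()) / (bc.max() - bc.min())
# Dense rank, descending: r = 1 for the largest centrality value.
rank = B.rank(method="dense", ascending=False).astype(int)
```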
--- Data availability

The datasets generated and/or analyzed during the current study are available in the Zenodo repository, link here.

Received: 15 June 2023; Accepted: 10 October 2023

--- Competing interests

Y.H.-G., A.J.G., and C.C. declare no competing interests. A.-L.B. is co-scientific founder of and is supported by Scipher Medicine, Inc., which applies network medicine strategies to biomarker development and personalized drug selection, and is the founder of Naring Inc., which applies data science to health and nutrition.
In the past decade, we have seen major progress in quantifying the behaviors and the impact of scientists, resulting in a quantitative toolset capable of monitoring and predicting the career patterns of the profession. It is unclear, however, whether this toolset applies to other creative domains beyond the sciences. In particular, while performance in the arts has long been difficult to quantify objectively, research suggests that professional networks and the prestige of affiliations play a role similar to that observed in science, and hence can reveal patterns underlying successful careers. To test this hypothesis, here we focus on ballet, as it allows us to investigate in a quantitative fashion the interplay of individual performance, institutional prestige, and network effects. We analyze data on competition outcomes from 6363 ballet students affiliated with 1603 schools in the United States who participated in the Youth America Grand Prix (YAGP) between 2000 and 2021. Through multiple logit models and matching experiments, we provide evidence that schools' strategic network position, bridging between communities, captures social prestige and predicts the placement of students into jobs in ballet companies. This work reveals the importance of institutional prestige for career success in ballet and showcases the potential of network science approaches to provide quantitative viewpoints on the professional development of careers beyond science. Quantifying the processes and behaviors through which some individuals attain success in creative careers is challenging due to multiple factors, including the subjective valuation of creative performance, the multifaceted ways in which success can become manifest through recognition 1, and data scarcity 2.
However, the recent proliferation of large digital databases capturing many aspects of scientific careers has fueled advances in data-driven methodological tools to capture career and collaboration patterns, productivity, and impact in science. For instance, the field of science of science 3 has unveiled the random impact rule governing the timing of a researcher's most consequential publication 4, how authorship team composition influences productivity patterns and impact [5][6][7], the enduring influence of scientific advancements, technological innovations, and cultural products 8-10, and the tracking of scientific careers and hierarchy in the faculty job market 11,12, to name a few. The extension of these methods from science to other creative domains has advanced the understanding of the dynamics of artists' careers. For instance, recent works elucidate individual transition patterns towards high-impact work 13, the collective impact of substantial works on long-term success 14, and the roles of luck and individual ability as drivers of career success 15. In both scientific and artistic careers, where performance is subjectively appreciated, career success is strongly influenced by social prestige and visibility 1,16,17. This suggests that artists' career success is highly dependent on their social networks and prestige. Previous research implementing quantitative tools from science of science and network science demonstrates the usefulness of these tools to map how social networks shape cultural endeavors 18. For example, structural properties of teams and collaboration networks in the performing arts are strong predictors of artists' productivity 19. In addition, network analysis suggests that the position of artists or teams in social networks plays an essential role in allocating resources and rewards and predicts future impact 20,21 , brokerage
I. INTRODUCTION

--- A. Emerging Concepts and Change

In a globalized world, "skills" is the keyword for survival. With the emergence of new needs and expectations, customers' awareness has increased, and competition has become severe, with technology emerging at an astonishing rate [1]. In fact, the ideal employee is desired by all organizations; however, one may ask: "Which dimensions should the profile of a valuable employee encompass?". Due to differing cultural and organizational strategies and objectives, this seems to be an ever-changing variable. Nevertheless, some abilities are common and can differentiate a valuable professional from an average one. Frequently, managers think of a valuable professional as one who promotes organizational growth, financial return, and the reduction of the company's costs [2]. These correspond to the fundamental skills that any company will expect from a new employee, and now seems the proper time to think about what makes us different and what companies are looking for. Thus, conducting studies that point out the crucial skills employees must hold in order to succeed is as fundamental as understanding the new emerging terms in industry [3]. This study was carried out with the aim of understanding and establishing the skills that a valuable quality professional must hold in the 21st century. Usually, a gap emerges when comparing what quality professionals put into practice and what companies actually expect from them [4]. Nowadays, several new topics are emerging across industry, such as Industry 4.0, learning factories, the Internet of Things, and digitization [5]. Given the novelty of these trends, quality professionals must adapt to a changing world and should develop the proper skills to deal with such challenges. Quality professionals will, inevitably, follow the trend through digitized technologies, and new tasks will emerge, as displayed in Table I.
Setting up a digital protocol for testing the quality of the manufactured product and of the production process being carried out; creation of digitally controlled process improvement systems. With the emergence of these tasks, it seems that this research topic, the profile of quality professionals in the 21st century, is an important and needed contribution to the field of quality, aiming at understanding whether quality professionals are ready or not to face the challenges posed by this new industrial paradigm. --- B. Professional skills in a turbulent and ever-changing context In the 21st century's competitive environment, it is important that companies understand these new trends and take advantage of them [7]. Technology is changing at a fast pace, and Industry 4.0 emerged shedding light on a new industrial revolution [8]. Within the topic of Industry 4.0 one should highlight what is defined as Quality 4.0, i.e., the successful integration and synchronization of procedures, production, and processes [9]. Quality 4.0 encompasses the digital transformation of management systems and product/service compliance by incorporating digitalization and related technology, thus changing working roles, as shown in Figure 1 [10]. It is of the utmost importance that quality professionals are accurately informed about the developments in quality management, the implications of those developments, and their role in this new age [11]. Each quality employee should learn about Industry 4.0 and Quality 4.0, contributing to the improvement of their profile as quality professionals [12]. In addition to these technological advances, in recent years social responsibility has emerged as a commonly addressed topic due to global changes. Social responsibility is no longer seen as a tool to attain credibility but as an effective target, and "doing well by doing good" is being adopted by an increasing number of organizations [13].
On this note, one should stress that in 2009 the American Society for Quality pointed out some new career paths for quality professionals, such as: systematic measurement for sustainable results; operational efficiency and cost savings; consumer preference for green products and services; and regulatory standards focused on ethical behaviour [14]. --- C. Skills and Skills Gaps In an industrial context, there are several skills that are required regardless of the position one applies for, commonly classified as soft skills [15]. On the other hand, there are many specific skills needed to be effective in one's work, classified as hard skills [16]. These two types of skills complement each other, and it is of the utmost interest to find a perfect match of both in order to succeed, in the quality field in what matters to this study. The scientific community is already putting effort into this topic, as some studies are being conducted in an attempt to introduce these new skills into students' courses, directing their behavior towards a global approach [17]. Toddi Gutner and Mike Adams identified some skills in The Conference Board's Quality Council, and Jiju Antony presented the various existing perspectives concerning the future of quality professionals based on the opinions of a panel of academics and practitioners selected from different countries [18]. A major insight was that, rather than controlling and improving, quality professionals will focus on value creation and on innovative activities. In addition, these professionals will work more closely with customers and suppliers, aiming at the establishment of a more effective supply chain. The authors also sustained that forthcoming quality professionals must be aware of the maintainability, reliability, and serviceability of products, lean six sigma, systems thinking, and management for change.
In fact, personal skills are crucial factors in business life for creating value. In addition, these skills are of the utmost importance to successfully translate knowledge into results and to increase quality of life. Non-technical skills can be developed with experience and long-term practice, not in the short run [3]. Conversely, technical skills are associated with employees' domain of work, technical competence, or expertise. Technical skills can be developed through techniques such as courses, seminars, technical certification, and the internet [19]. Non-technical skills are essential for skilled employees to balance their technical skills. Today, organizations are looking for employees who hold both technical and non-technical skills [20,21]. However, there still exists a gap between perceived skills and skills put into practice. This may be explained by the fact that one bases convictions on information that does not match reality. Thus, new skills must be considered and evaluated in terms of their importance to quality professionals. Based on this need, the research team presents a new set of skills supported by the results collected in a worldwide survey aiming to assess their importance and relevance to current industry. --- II. METHODOLOGY A worldwide online survey, previously validated, was held between April and September of 2018 to assess the importance of 27 skills potentially relevant to the quality manager/leader role, previously identified through an extensive and comprehensive literature review. Exploratory factor analysis was adopted to identify and extract the latent factors/components. Exploratory factor analysis establishes the correlations of observable variables and organizes them into factors, which are themselves unobservable variables. Thus, it is able to make a complex study simpler, reducing a large number of correlated variables to a few factors.
In more detail: -In the first step of the research, a comprehensive literature review was carried out addressing skills, quality professionals' profile, and new trends in industry; -In addition, the "Development of questionnaire" stage encompassed the design, conception, and development of the online survey. The literature review pointed out several skills and, for the purpose of this research, the platform provided by the University of Minho was adopted to develop the online questionnaire. The survey comprised two sections. Section 1 encompassed 11 questions intended to ascertain the profile of the respondents. Section 2 was divided into two parts. The first part included specific questions encompassing 27 expressions addressing the most relevant skills of quality professionals, and the second part encompassed three open-ended questions, each asking the respondents' opinions regarding current skills of quality professionals and quality tools. Concerning questionnaire validation, one should stress that, prior to putting the survey online, a pilot test was carried out among three experts in the quality area and five practitioners. The survey was analyzed by group members and, supported by their insights, some changes were introduced to improve the survey. -Finally, the data collected through the worldwide dissemination of the survey were analyzed using the IBM SPSS statistical software. --- III. RESULTS --- A. Variables Codification In order to analyze the 27 variables presented in the survey, the research team adopted a code for each variable, presented in Table II. --- B. Reliability Analysis A reliability analysis was carried out on the perceived-importance Likert-type scale comprising the 27 items assessed by the respondents. Cronbach's alpha showed the questionnaire to reach acceptable reliability, suggesting high internal consistency and a reliable questionnaire.
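The reliability statistic used here rests on a simple formula; a minimal sketch in Python (with hypothetical Likert responses, not the survey's actual data) shows how Cronbach's alpha is computed from a respondents-by-items score matrix:

```python
# Minimal Cronbach's alpha for a respondents-by-items Likert score matrix:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # one column per item
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5-point ratings: 4 respondents x 3 items (not survey data).
data = [[5, 4, 5], [4, 4, 3], [2, 3, 2], [1, 2, 2]]
print(round(cronbach_alpha(data), 3))  # → 0.923
```

With the survey's 27 items, k = 27 and each row would hold one respondent's 27 ratings; values of roughly 0.7 or above are conventionally read as acceptable reliability.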
Additionally, none of the items would result in an increase in the alpha if deleted; rather, deleting any item would decrease the alpha. --- C. Exploratory Factor Analysis A factor analysis of the results was carried out. Bartlett's test of sphericity was significant (χ² = 2882.76, p < 0.001), suggesting the appropriateness of the factor analytic model. The Kaiser-Meyer-Olkin (KMO) measure pointed out solid relationships among the variables, suggesting it was acceptable to proceed with the analysis. As previously stressed, the 27 personal and professional skills that may impact the profile of the 21st century quality leader were analysed using principal component analysis. The communalities of each variable are acceptable, with solely one variable having less than 50% of its variance in common with the other variables. This suggests that the variables are strongly related among themselves, which is somewhat expected since they should ultimately reflect one construct. As previously stressed, the KMO and Bartlett's test of sphericity both suggest that the set of variables is at least adequately related for factor analysis, and seven clear independent patterns were identified. The analysis yielded seven components explaining a total of 59.41% of the variance for the entire set of variables. These preliminary results were sent to a group of 11 experts, each of whom was asked to label each component. The insights from this group of experts defined the labelling of each component. The first component was labelled "Leadership Skills" due to the high loadings of the following items: motivating workers; being coordinative with all departments; being able to delegate; being fair and objective; being able to moderate difficulties; being able to congratulate; having good management skills. The second component was labelled "Personality Traits".
This factor was labelled as such due to the high loadings of the following items: being innovative; being altruistic; being ambitious; being able to create a social network. The remaining five factors were labelled "Communicational Skills", "Quality Oriented Skills", "Adaptability Skills", "Analytical Skills" and "Technological Skills". In order to corroborate the results, the scree plot was used, showing that from the seventh component onwards the line is almost flat, suggesting that each successive component accounts for a negligible amount of the total variance. So, the scree plot depicted in Figure 2 backs up the data from Table V, i.e., it is possible to extract seven components from the available data. The Cronbach alpha for each component is presented in Table VI. It is possible to highlight that solely the "Quality Oriented Skills" component presents a poor Cronbach alpha score. At this stage this poor result does not preclude the validity of the component; later on, throughout the development of the measurement model and the structural equation model, it will be assessed whether the items comprising this component will be retained, taking into account the final fit indexes. The rotated component matrix clarifies the number of components and the variables clustered within them. It is possible to distinguish seven components, and nearly all the variables load highly on solely one factor. However, it is possible to observe some slight cross-loadings in the following variables: "Coordinative", "Ambitious", "ChangRoles", "AnalyReas", "StatTools" and "Ind4.0".
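A standard companion to the scree inspection described above is the Kaiser eigenvalue-greater-than-one rule (not explicitly named in the paper). The toy spectrum below uses a hypothetical equicorrelation model, whose eigenvalues are known in closed form, purely to illustrate how a retention count falls out of an eigenvalue spectrum:

```python
# Toy eigenvalue spectrum: for a k x k equicorrelation matrix (every
# off-diagonal entry equal to r), the eigenvalues are known analytically:
# one "general" eigenvalue 1 + (k-1)r and (k-1) eigenvalues equal to 1 - r.
def equicorr_eigenvalues(k, r):
    return [1 + (k - 1) * r] + [1 - r] * (k - 1)

def kaiser_retained(eigenvalues):
    # Kaiser rule: retain components whose eigenvalue exceeds 1; on a
    # scree plot these are the points above the flat tail.
    return sum(1 for e in eigenvalues if e > 1.0)

eig = equicorr_eigenvalues(27, 0.3)  # 27 items, modest common correlation
print(kaiser_retained(eig))          # → 1 (a single dominant component)
```

Real item sets, like this survey's, have a blockwise correlation structure, so several eigenvalues exceed 1 and several components are retained (seven here); the toy spectrum only shows the mechanics of the retention count.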
--- Rotated component matrix (loadings on components C1-C7):

Component / Item       C1      C2      C3      C4      C5      C6      C7
Leadership Skills
  MotWorkers        0.542   0.353   0.053   0.433   0.051  -0.097   0.027
  Coordinative      0.426   0.056   0.400   0.324   0.157   0.036   0.054
  AbleDelegate      0.698   0.150   0.179   0.135  -0.001   0.178   0.113
  FairObjective     0.644  -0.064   0.239   0.225   0.155   0.103   0.138
  ModDiff           0.628   0.003   0.179   0.078   0.337  -0.068   0.192
  Congratulate      0.658   0.289   0.197  -0.045  -0.016   0.215  -0.074
  GoodManSkills     0.505   0.283  -0.280   0.293   0.239   0.013   0.083
Personality Traits
  Innovative        0.218   0.594   0.181   0.093   0.236   0.207  -0.048
  Altruistic        0.090   0.754   0.162  -0.028  -0.015   0.106   0.170
  Ambitious         0.137   0.470   0.356   0.270   0.097  -0.150   0.202
  SocNetwork        0.096   0.667   0.161   0.145   0.198  -0.025   0.284
Communicational
  EmoInt            0.208   0.209   0.666  -0.035   0.104   0.223  -0.024
  Persuasive        0.117   0.350   0.589   0.126   0.198   0.180  -0.111
  WorkTeams         0.223   0.143   0.642   0.195   0.100  -0.039   0.203
  GoodComm          0.119   0.119   0.537   0.500   0.166  -0.008   0.039
Quality Oriented
  QualTools         0.097  -0.096   0.155   0.631  -0.007   0.236   0.267
  Instructive       0.217   0.174   0.192   0.592   0.148   0.110   0.083
  CustFocused       0.222   0.158  -0.007   0.583   0.093   0.288  -0.303
Adaptability
  ComProblSol       0.077   0.181   0.126   0.095   0.781   0.045   0.144
  ChangRoles        0.401   0.227   0.286   0.048   0.560  -0.092   0.039
  CogFlex           0.344   0.001   0.343   0.034   0.578   0.205  -0.085
Analytical
  AbsThought        0.193   0.032   0.226   0.118  -0.044   0.715   0.041
  AnalyReas        -0.023   0.101  -0.056   0.277   0.526   0.556   0.192
  StatTools         0.007   0.090   0.018   0.333   0.429   0.562   0.249
Technological
  Troublesh         0.068   0.142   0.060   0.280   0.061  -0.016   0.630
  ITTools           0.158   0.158   0.014  -0.147   0.118   0.189   0.720
  Ind4.0            0.122   0.350  -0.013   0.104   0.069   0.428   0.477

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. a. Rotation converged in 28 iterations. --- IV. CONCLUSION This paper identifies seven significant dimensions that comprise the profile of the quality leader of the 21st century. It provides a better understanding of the relationships established between these dimensions by identifying the most significant influencing factors.
The research presents the profile organized into seven groups, each containing specific skills from the 27 identified. This paper pointed out the most appreciated sets of skills for forthcoming quality leaders. This entails that professionals aiming at the quality leader role in the future may now tailor and develop their skills based on the information provided by this study. Moreover, companies may use the results to specify and optimize which dimensions should be developed in their human resources in order to achieve the highest benefits and outputs.
Currently, due to the globalization phenomenon and technological evolution, we are facing a new and challenging set of paradigms encompassing social, industrial, financial, and cultural issues. Hence, it is a difficult task to anticipate the new demands of the market concerning the most appreciated skills in the forthcoming workforce. This paper reports the sets of skills that comprise the desirable profile of the quality professional in the 21st century. To meet this purpose, a worldwide online survey was held throughout the first quarter of 2018, assessing (on a 5-point Likert scale) the importance of 27 skills identified in the relevant literature. A total of 319 valid answers, originating from 61 different countries, were collected and summarized through descriptive statistics. The results suggest that seven sets of skills (the skills appreciated in forthcoming quality professionals) comprise the profile of the quality professional in the 21st century. Thus, professionals aiming at the quality leader role may now tailor their skills based on the information provided in this paper. In addition, companies can use these results to specify the dimensions that their human resources should develop.
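As a small illustration of the kind of descriptive summary the abstract mentions, the sketch below computes common statistics for one hypothetical 5-point Likert item (the ratings are invented, not survey data):

```python
from collections import Counter
from statistics import mean, median

# Hypothetical 5-point Likert ratings for one skill item (invented data),
# summarized with typical descriptive statistics.
ratings = [5, 4, 4, 5, 3, 4, 5, 2, 4, 4]

summary = {
    "n": len(ratings),
    "mean": mean(ratings),
    "median": median(ratings),
    "mode": Counter(ratings).most_common(1)[0][0],
    # share of "important"/"very important" answers (top-2-box)
    "top2box_pct": 100 * sum(r >= 4 for r in ratings) / len(ratings),
}
print(summary)
```

Per-item summaries of this shape, computed over all 319 responses and 27 items, are what feed a ranking of the most appreciated skills.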
Introduction --- Background The use of technology to support older adults against feelings of loneliness and social isolation provides novel opportunities that have grown in the field of aging, as information and communications technology (ICT) use and training [1] and robotics converge in the provision of programs and activities that facilitate social connectedness. Social isolation and loneliness in older adults have been extensively researched. Many studies have shown that the prevalence of these problems increases with age. For example, the prevalence of loneliness among young adults, early to middle-aged adults, and late middle-aged to older adults is 39.7%, 43.3%, and 48.2%, respectively [2]. The current global population of people aged ≥60 years is expected to triple to 2 billion by 2050 [3]. The number of people aged >50 years experiencing loneliness is expected to reach 2 million by 2025-2026, a 49% increase in 10 years [1]. Loneliness and social isolation are different but interlinked concepts and can be considered constructs of social disconnectedness [4]. Social isolation is objectively defined as the deprivation of relationships and social interactions, whereas loneliness is a subjective sense of not meeting one's social needs [5]. Socially disconnected individuals are vulnerable to social isolation and loneliness because they have small social networks and low participation rates in social activities [6]. Fafchamps and Shilpi [7] defined social isolation as "deprivation of social connectedness and an inadequate quality and quantity of social relations at different levels of interactions" [6].
Socially disconnected older adults are also vulnerable to a range of health disorders, including infection [8], high blood pressure [9], impaired cognitive function [10], depression [11], stress-associated elevation of hypothalamic-pituitary-adrenocortical activity [12], cardiovascular disease [13], diminished immunity [14], and mortality [15]. In addition, loneliness elevates the risk of dementia [16] and accelerates the progression of Alzheimer disease [10]. As the population proportion of older adults increases, negative health outcomes are expected to rise along with social isolation, and loneliness is likely to increase along with negative health outcomes [17]. Rapidly deployable technologies, along with socioeconomic changes that have reduced the cost of technology, have increased the accessibility of technological devices, creating new opportunities for older adults [18]. Internet-based technology interventions for social disconnectedness have grown over the past decade [19]. Digital communication technologies can improve the lives of older adults by facilitating their social relationships. Technologies such as email, social networking sites (SNS), videoconferencing, and mobile instant messaging apps have been shown to improve self-rated health and lower the incidence of loneliness, chronic illnesses, and depressive symptoms in older adults [20]. They also supplement the social benefits of physical interactions by reinforcing existing connections or providing routes to new connections, further reducing loneliness levels. Frequent users of technology and the internet can also access health information and social support for psychosocial problems. However, many studies on technology interventions ignore confounding factors, such as age, gender, living arrangements, economic status, education level, cognitive status, and activities of daily living [21,22], which may influence the effectiveness of the intervention and the robustness of the findings.
The small number of high-quality studies in this arena limits the generalizability of the results. Several reviews have summarized work on technology interventions for older adults experiencing loneliness [23,24], but their value is diminished by a plethora of unclear evidence; heterogeneity of populations, measures, and methodologies; diverse outcomes; scattered focus; and broad topics. As the existing reviews are heterogeneous in content, lacking investigation of the outcome measures used and discussion of causation, they cannot reach generalizable conclusions. For a standardized systematic report on these reviews, we must assess the quality of the reviews and find common observations and derivable themes. An umbrella review method can provide a focus for areas where there are competing interventions and amalgamate evidence from multiple quantitative and qualitative reviews [25]. To our knowledge, an umbrella review exploring the types and effectiveness of intervention technologies for social connectedness has not been published. --- Aims To bridge this gap in the literature, we aimed to explore the findings and limits of current knowledge on the impact of technology interventions on social disconnectedness in older adults. We also emphasize areas requiring further research. In a comprehensive umbrella review, we synthesized the various categories and types of technology interventions used, discussed their effectiveness and limitations, and explored their potential and the need for further research. Finally, we amalgamated all the evidence from the umbrella review and used the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) approach to make recommendations for interventions targeting social connectedness. This review attempts to answer the following questions: --- Methods This umbrella review followed the standardized procedures [12,26,27] of systematic reviews.
The protocol followed the PRISMA systematic review protocol guidelines [28] and the Joanna Briggs Institute methodology for umbrella reviews [12]. --- Search Strategy The search strategy involved controlled vocabulary searching; phrase searching; and applying Boolean logic, limits, and filters. A comprehensive systematic search of 4 databases was conducted between February 2020 and March 2022. The reference lists were also examined for additional reviews. The following search terms were used: "ageing," "aging," "older adults," "reviews," "2000-22," and synonyms for "social isolation and loneliness," "social connectedness," and "technology interventions." As an example, Textbox 1 shows the search terms and search strategy applied to the PubMed database. Search terms can be found in Multimedia Appendix 1. --- Inclusion and Exclusion Criteria The inclusion criteria were formulated using the population, intervention, comparison or context, outcomes, and study (PICOS) schema [29,30]. Table 1 describes the inclusion criteria under which the studies were selected for this review. --- Selection Process The abstracts and titles of all potentially relevant articles were screened. Full texts were then evaluated, and duplicates were removed. Uncertainties were discussed among the research team members to reach a consensus. Relevant data from the included articles were summarized in tables and checked for accuracy by a second investigator. --- Analysis The data analysis was based on a thematic synthesis with an inductive, iterative process consisting of 3 main stages: free line-by-line review of the results, synthesis tables, and discussion sections of the included papers; organization of themes into related areas; and the identification, development, and refinement of detailed descriptions of factors that impacted the effectiveness of technology interventions [31]. All measures used were specified, and the statistical results were summarized.
The technology types were listed along with their effectiveness, and the authors' conclusions were also summarized. --- Quality Assessment The methodological qualities of the reviews were assessed using the Revised Assessment of Multiple Systematic Reviews (R-AMSTAR) [32] quality rating tool for reviews. The R-AMSTAR comprises 11 items whose scores are summed to give the overall quality score of a systematic review, providing a quantifiable assessment of systematic reviews and a measurement of their methodological quality. The maximum possible score is 44. Any review scoring <22 was excluded, as it lacked 1 or more critical R-AMSTAR definitions. For example, the review might not assess the scientific quality of the studies or might apply a poor method for combining study findings [53]. --- Grading of Evidence The overall certainty of the evidence was evaluated using the GRADE method, which analyzes the risk of bias and assesses the quality of the included evidence, which we used to make recommendations [54]. Initially, we categorized the evidence based on the inclusion or exclusion of randomized controlled trials (RCTs), followed by the inclusion or exclusion of observational studies. We then considered whether the studies had serious limitations or important inconsistencies in the results, or whether uncertainty about the validity of the evidence was warranted. Limitations in study quality found in the R-AMSTAR appraisal, important inconsistency of results, or uncertainty about the directness of the evidence lowered the grade of evidence. For instance, if all available studies have serious limitations, the grade drops by a level, and if all studies have very serious limitations, the grade drops by 2 levels. The quality of evidence is also reduced by imprecise or sparse data and an imprecise understanding of social concepts. --- Results --- Overview The article elimination process is summarized as a flowchart in Figure 1.
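The R-AMSTAR screening rule described above can be sketched in a few lines; the item ratings below are hypothetical, and the 1-4 scoring per item is an assumption inferred from the 44-point maximum:

```python
# Sketch of the screening rule described above: 11 R-AMSTAR items, each
# assumed to be scored 1-4, are summed (maximum 44); reviews scoring
# below 22 are excluded.
def r_amstar_total(item_scores):
    assert len(item_scores) == 11 and all(1 <= s <= 4 for s in item_scores)
    return sum(item_scores)

def passes_screen(item_scores, threshold=22):
    return r_amstar_total(item_scores) >= threshold

strong = [4, 3, 3, 4, 2, 3, 3, 4, 3, 2, 3]  # hypothetical item ratings
weak = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 1]
print(r_amstar_total(strong), passes_screen(strong))  # → 34 True
print(r_amstar_total(weak), passes_screen(weak))      # → 17 False
```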
The initial search extracted 972 publications. A further 91 articles were identified by checking the reference lists. After excluding duplicates and irrelevant publications, articles were screened using the population, intervention, comparison or context, outcomes, and study schema inclusion criteria. The most common reasons for exclusion were interventions targeted at specific mental and physical illnesses and interventions not matching the prespecified definition. A total of 90 full-text reviews were passed through a 3-step screening process for eligibility and inclusion in the qualitative synthesis of this review. Finally, 24 reviews based on technology interventions were eligible for the synthesis. --- Quality Assessment Among the 24 selected articles, 3 articles with R-AMSTAR scores <22 were excluded because they failed a priori systematic review processes. The 21 remaining reviews were of moderate quality, with none meeting all of the R-AMSTAR criteria. --- Data Extraction Data from the 21 reviews were extracted using a piloted, standardized data extraction form that captures and summarizes findings. As both the technology interventions and the extracted outcome data were heterogeneous, they were deemed inappropriate for a quantitative synthesis using meta-analytic techniques. Instead, a narrative synthesis summarizing the effectiveness of interventions was implemented. Under the methodological considerations of umbrella reviews, the results were reported descriptively in tabular form along with their associated characteristics. Multimedia Appendix 3 provides details of the 21 reviews in this study. --- Study Characteristics The 21 selected reviews included 16 systematic reviews, 2 integrative reviews, 2 scoping reviews, and 2 meta-analyses. Most of the reviews covered the beneficial impact of technologies on loneliness, whereas others focused on social isolation, connectedness, and quality of life.
General ICT was the most commonly applied intervention technology. The publication period was from 2005 to 2022, but 19 of the selected reviews were published within the last 7 years. The reviews reported mixed results. Positive effects of ICT on loneliness were the most commonly reported, followed by the positive impacts of ICT on social isolation or connectedness. Reviewing data from the underlying primary studies in the reviews, the most effective intervention mode for social connectedness was identified as general ICT, followed by videoconferencing and robotics. --- Results From Systematic Reviews With Meta-analyses Among the 21 selected reviews, only Choi et al [20] and Bornemann [33] performed meta-analyses of homogenous data. Choi et al [20] reported a significant pooled decrease in loneliness after implementing technology interventions. However, Bornemann [33] concluded a nonsignificant decrease in loneliness after reviewing 5 out of 7 studies included in the review by Choi et al [20]; that is, the same 5 studies yielded different pooled meta-analysis results in the 2 reviews. This divergence indicates potential biases in the analytic approaches; for instance, Bornemann [33] excluded some studies included in Choi et al [20], and some of their findings were inconsistent with the narrative conclusions of their included studies. Bornemann [33] questioned the validity of some of the data acquired by Choi et al [20]. Although this review does not cross-examine these findings, we clarified that a study included in Choi et al [20] should have been excluded, as it was not an ICT intervention study. We decided that although the statistical conclusions of Bornemann [33] were correct, Choi et al [20] raised some valid points. Multimedia Appendix 4 gives the levels of certainty in the quality assessment of outcomes developed within the GRADE framework.
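To make the divergence between the two meta-analyses concrete, the sketch below pools hypothetical standardized mean differences (negative values meaning reduced loneliness) with fixed-effect inverse-variance weighting, and shows how excluding studies shifts the pooled estimate; the numbers are invented, not data from either review:

```python
import math

# Fixed-effect inverse-variance pooling of per-study effect sizes
# (hypothetical standardized mean differences; negative = less loneliness).
def pool(effects, ses):
    w = [1 / se ** 2 for se in ses]                 # precision weights
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)  # 95% CI

effects = [-0.40, -0.25, -0.10, 0.05, -0.30]
ses = [0.15, 0.20, 0.10, 0.25, 0.12]

full, ci_full = pool(effects, ses)
sub, ci_sub = pool(effects[:3], ses[:3])  # dropping studies shifts the estimate
print(round(full, 3), round(sub, 3))
```

Whether the pooled estimate's confidence interval crosses zero is what separates a "significant" from a "nonsignificant" pooled decrease, which is why study selection alone can flip the headline conclusion.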
Low-quality assessments in different categories are mainly attributable to elements of study design, poor study quality, inconsistency, and indirectness. --- Categories of Technology Interventions Of the 21 studied reviews, 14 dealt with general ICT, 4 with videoconferencing, 3 with computer and internet training, 2 with telecare, 2 with robotics, 2 with SNS, 3 with gaming, and 1 with 3D augmented reality. Among the primary studies, general ICTs were the most commonly adopted interventions, followed by computer training, SNS, telecare, and robotics. Although some of these categories overlapped, we differentiated them as they were distinguished in the original reviews. --- Outcome Measures Used All the reviews reported large numbers of diverse outcome measures from the primary studies. Besides constructs of social disconnectedness, many studies assessed factors such as quality of life, self-esteem, stress, and depression. Although not directly related to social disconnectedness, these factors may affect or be affected by social disconnectedness, and it may be useful to include them as outcome measures alongside social connectedness. A minority of the reviews also reported outcome measures of empowerment. When analyzing these quantitative primary studies, the reviews commonly applied validated tools, such as the University of California Los Angeles (UCLA) Loneliness Scale and the De Jong Gierveld Scale [4]. The UCLA Loneliness Scale was the most tested dependent variable. Among various other measures were the Social Support Scale by Schuster and Hunter [34], the Social and Emotional Loneliness Scale [55], and the Multidimensional Scale of Perceived Social Support by Zimet et al [56]. Social connectedness was sometimes measured using the holistic Social Connectedness Scale by Lee and Robbins [57], which is regarded as a comparatively reliable measure. The definitions and uses of outcome measures differed across the reviews.
A total of 62 outcome indicators of social connectedness were used in the primary studies. Most reviews did not report on the lack of intervention effects ; moreover, the primary studies adopted a mixture of validated and nonvalidated outcome measures, making such reporting difficult. Consequently, they could not conclude whether the primary studies had validatable statistically significant outcomes. --- Social Concepts Used The social concepts used for determining outcomes varied in range and diversity. In many reviews, the source papers did not define social participation or social isolation but instead evaluated these factors as general or neighboring concepts [19,[35][36][37]. Loneliness was evaluated more consistently than social participation and social isolation but was sometimes incorrectly interchanged with social isolation. Most studies assessed loneliness on standardized scales, notably the UCLA Loneliness Scale [35,36,38,39]. A few of the reviews highlighted that inconsistency and lack of specific definitions hindered the grouping and evaluation of their chosen papers [19,37,39]. Morris et al [39] described social connectivity as a multidimensional concept that is difficult to define, conceptualize, and measure. They elaborated that outcome measures, such as the UCLA Loneliness Scale and Perceived Social Support Scale, identify only single aspects of social connectedness. Cattan et al [37] also noted a complex association among social isolation, loneliness, and living alone, which was difficult to describe in their reviewed studies. Rarely among the review studies, Cattan et al [37] attempted to distinguish living alone from social disconnectedness and suggested that living alone be measured independently as a concept of physical isolation. Ibarra et al [40] correctly defined loneliness as "a subjective measure referring to the 'unpleasant' lack of and quality of social relationships." 
By contrast, isolation is an objective measure referring to few or no social relationships; their study thus clarified the difference between social isolation and loneliness. Gardiner et al [36] and Williams et al [38] adopted the less frequently used concept of social facilitation for creating mechanisms through which older adults can interact with peers. From an alternative perspective, they measured the facilitation of social connections. The article by Williams et al [38] was especially relevant, as it examined interventions during the COVID-19 pandemic. Facilitation may lead to effective interventions that reduce social isolation and loneliness without violating COVID-19 shielding and social distancing measures. In conclusion, different definitions and measurements of loneliness, social isolation, and social connectedness have led to diverse findings and wide variations across and within disciplines, preventing a coherent picture of the research. Although some of the more recent studies and reviews have addressed this heterogeneity, reliable and succinct findings will remain elusive without further investigation. --- Group Interventions Versus One-to-One Many interventions implemented in the individual papers of the reviews were broadly divisible into group and one-to-one interventions. In general, group interventions were more frequently implemented than one-to-one interventions, although both types were effective [24,37,40,41]. Cattan et al [37], who reviewed 3 computer group interventions, reported that group interventions with educational and social activities are particularly effective. The imbalance between group and one-to-one interventions impairs comparisons between the 2 types and conclusions regarding their comparative successes. Nevertheless, some of the reviews pointed out the possible advantages and limitations of these intervention approaches.
Poscia et al [41] noted that group interventions might beneficially create a sense of security and belonging, although the real effect of the intervention might be obscured by interactions among the group members. Individual interventions might create deeper, more personal bonds and boost confidence in social engagements. Ibarra et al [40] further observed that one-to-one interactions limited participants' contact with family, friends, and acquaintances, whereas group interventions encouraged them to interact with new people and potentially expand their networks, thereby increasing their number of new social connections. Overall, group interventions appear to reduce social disconnectedness, but the insufficient number of one-to-one interventions prevents an objective comparison and firm conclusions about the best interaction type. However, the GRADE assessment indicated that the quality of evidence for this advantage of group over one-to-one interventions was very low. --- Effectiveness of Technology Interventions as an Overarching Category Technology interventions that enhance social connectedness include general ICT, video games, robotics, and the Personal Reminder Information Social Management system. Less conclusive evidence exists for the beneficial effects of SNS [20,24,25,37,41,42]. Overall, technologies appear to positively affect loneliness, social isolation, and other psychosocial aspects of older adults' lives. Khosravi et al [42] examined 8 technology types and found that most technologies, in some formats, can increase social connectedness in older adults. When technologies were intended to strengthen existing connections, their positive impacts on loneliness and social isolation were more consistent [24,40,41]. Ibarra et al [40] found that technologies are fundamental to long-distance interaction and are thereby necessary for expanding social networks, improving existing ties, and increasing social connectedness.
However, they noted that how technology is made available, the limitations and opportunities of the technology, and their effects on the success of an intervention all remain unclear. Some reviews [20,35,43,44] included a psychosocial outcome of interest, such as social isolation, life satisfaction, loneliness, or depression. It was found that interventions significantly reduce loneliness but are ineffective against depression [35,43,45]. Damant et al [45] found a significant correlation between internet use and depression, suggesting that although the literature reports a significant correlation between loneliness and depression, technology can exert divergent impacts on these 2 psychosocial variables. However, Khosravi and Ghapanchi [43] reported that technology interventions can potentially reduce depression through engagement in social interaction, hinting that social isolation affects depression more strongly than technology does. Choi and Lee [58] presented a detailed statistical evaluation of 8 RCT studies investigating the impacts of various technology interventions on loneliness. They found a statistically significant decrease in loneliness in the intervention group compared with the control and usual care groups. However, there were no statistically significant differences in loneliness among the members of the intervention groups before and after the intervention. Individual reviews reported less conclusive outcomes of overall technology use. The results of Morris et al [39] ranged from positive to no impact on loneliness, and Damant et al [45] noted a negative association between "social involvement and participation" and older adults' use of technology, indicating that the more socially involved people were, the less they tended to use technology. They also found that high internet use was associated with high levels of loneliness.
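As context for meta-analytic summaries like the one by Choi and Lee [58], the following is a minimal sketch of inverse-variance (fixed-effect) pooling of standardized mean differences, the standard way loneliness effects from several RCTs are combined into one estimate. The study effect sizes and standard errors below are invented for illustration and are not taken from the reviewed RCTs.

```python
import math

# Sketch: fixed-effect (inverse-variance) pooling of standardized mean
# differences (SMDs). Negative SMDs mean lower loneliness in the
# intervention group. Values are hypothetical.
studies = [(-0.40, 0.15), (-0.25, 0.20), (-0.10, 0.25)]  # (SMD, SE)

# Each study is weighted by the inverse of its variance.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled SMD = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A random-effects model would widen the interval when the studies are heterogeneous, which, given the inconsistency the reviews describe, is often the more defensible choice.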
Interestingly, although Chen and Schulz [35] found a positive effect of technology on social connectedness, this impact usually diminished in studies spanning >6 months. The time frame of studies investigating the effectiveness of technology was also a recurrent theme in other studies. The diminished effect is potentially linked to fatigue from using the intervention or inconsistency in the study approach over time. Specifically, the following technology interventions appear to reduce social isolation but lack rigorous statistical support for a positive effect: robotics, telecare, and SNS [34,36,42,45]. Overall, 86% of the reviews examined the impact of technology interventions on loneliness. The reviews covered 324 primary studies involving 66,565 participants. Of the 18 reviews, 15 reported a positive effect of technology on loneliness; the remaining 3 found no effect or a negative effect. From the reviews, it can be concluded that technology interventions exert an overall positive influence on social isolation and loneliness, but their effectiveness depends on the design of the study. Longer training times, shorter study durations, and facilitation of existing relationships tended to increase the effectiveness of the intervention. The quality of evidence supporting the effectiveness of technology interventions on social connectedness was moderate to low. --- General ICT This section explores the findings of general ICT interventions reported in the reviews. General ICT is an umbrella term for generic technology devices, services, applications, and internet platforms [59]. ICT includes internet-based networks, mobile phones, computers, tablets, and any software requiring an internet connection. Interventions in this category include interactions via internet use, emails, video chats and conferencing, SNS, virtual spaces, classrooms, and messaging services.
Some reviews mentioned systems tailored for older adults, such as the customized touch screen video-chat system described by Ibarra et al [40]. Computers with a mouse and keyboard as input devices were preferred, closely followed by tablets and mobile phones. Other interventions used customized television sets and touch screen computers. Khosravi et al [42] and Khosravi and Ghapanchi [43] reported studies on Personal Reminder Information Social Management. In most of the reviews, general ICTs were regarded as a single category, although videoconferencing and SNS were often placed in separate subcategories. Many of the reviewed studies found that ICT interventions not only significantly reduce loneliness but also exert a positive impact on other aspects of social isolation, providing social support and connectedness, communication with family and friends, and ICT-accessible information sources [19,20,35,42,43]. Some reviews hinted that ICT facilitates the acquisition of information through the internet, either through interactions with other people or through finding relevant information on the web, which helps reduce loneliness [35,38,60]. Indeed, Morris et al [39] found that social connectedness especially benefits from technologies with web-based programs incorporating items such as health information, support groups, chat rooms, or discussion boards. Damant et al [45] alone reported on studies with less promising results. In one study, only a small number of older adults maintained contact with their families via the internet; these participants were reluctant users whose sole purpose was keeping in touch with their grandchildren. In another study, they found no significant correlation between internet or email use and contact with family and other people. Both studies revealed no significant correlation between computer use or training and loneliness. Some of the studies reviewed by Damant et al [45] reported exacerbated loneliness through ICT use.
It appears that ICT can positively reinforce existing social networks but has a limited impact on building new ones. Only 2 reviews provided a homogenous meta-analysis; both reported positive impacts of general ICTs on social disconnectedness. In total, these reviews included 119 primary studies: 86 reporting a positive impact on social isolation or loneliness and 33 reporting unclear results or no impact. The studies agreed that increasing the frequency of general ICT use enhances social connectedness, improving the ease with which older adults can interact and maintain contact with others, thus reinforcing social connections with friends and family. The evidence that frequent ICT use facilitates the creation of new relationships or contacts is much weaker, further supporting, in part, the conclusions of Damant et al [45]. Together, these results suggest that general ICT can facilitate established connections and might supplement or replace older communication methods; its role in establishing new connections is uncertain. Our results suggest that when considering ICT interventions, it is important to distinguish between their ability to maintain relationships, potential ability to deepen relationships, and inability to help create new relationships. The GRADE strength of the ICT category, although only moderate, was the highest among the categories because a large number of primary studies, including RCTs, were reviewed in this category, and there was consensus and clarity on the outcome measures. --- Social Networking Sites Although SNS is a subcategory of ICT, it warrants its own heading because 33% of the reviews discussed separate findings on SNS. The reviews gave mixed results: whereas some studies supported the use of SNS in reducing loneliness, a sizable number showed no impact or even an increase in loneliness after SNS use [19,42,46].
Both Chen and Schulz [35] and Wiwatkunupakarn et al [46], who reviewed high-quality RCT studies on the use of SNS, reported inconclusive impacts of SNS on loneliness. They found some support for sites such as Facebook, which provides games that can be played with others over a network, thus fostering social interaction and alleviating loneliness. The mixed findings in these reviews might be explained as follows: although older adults embraced the use of SNS to support their social relationships and help them overcome loneliness, they did not regard these sites as a replacement for face-to-face contact. Participants preferred to use SNS for searching for and disseminating information rather than for socializing. Morris et al [39] reported positive effects of smart technologies similar to SNS, especially when they incorporated health information, support groups, chat rooms, or discussion boards. Their findings support a role for SNS in knowledge-seeking and support-acquisition scenarios, with a consequent impact on loneliness. These findings may partly depend on the type of SNS, as different types of SNS support different features. For example, Facebook may promote socialization more effectively than YouTube, whereas YouTube may better facilitate explicit knowledge acquisition and information transfer than Facebook. Ibarra et al [40] discovered that participants favored off-the-shelf solutions, such as Facebook and About-My-Age. Users of these sites commented on their decreased loneliness and easy control of the sites. The sheer volume of users on these platforms might assist older adults in finding relevant information, including information on how to use the platforms, thus creating a positive feedback loop. On the downside, SNS use raises several concerns: privacy, lack of perceived usefulness, and possibly demographic factors [19,47].
Newman et al [47] noted an interesting connection between educational attainment and SNS use: SNS users tended to be White, employed, educated, and married. They also found attitude differences toward technology use among sociodemographic groups based on gender and age. Overall, 61 primary studies examining SNS were found in the reviews: 31 reporting positive impacts of SNS on social isolation and loneliness and 30 reporting unclear or no impacts. Therefore, the effectiveness of SNS is inconclusive. The results suggest that older users can obtain support, acquire knowledge, and maintain their existing relationships through SNS. In terms of combating social disconnectedness and establishing new relationships, SNSs are less effective and can be detrimental at times. However, the effectiveness of SNS in developing new relationships, fostering and maintaining existing ones, and acquiring knowledge and support has not been explored in depth, and the idiosyncrasies of SNSs must be unraveled in further research. The strength of evidence of the reviews in this category is low because of indirectness, missing information, and publication bias. --- Videoconferencing Overall, videoconferencing appeared to exert a positive impact on loneliness and social connectedness. The visual aspect of this intervention seemed especially appealing to older adults [24,[34][35][36]38,40,44,49]. In total, 3 reviews reported on videoconferencing between family members and their established contacts. All 3 described a statistically significant reduction in loneliness [39,41,45]; however, videoconferencing was more effective in facilitating established connections than in building new ones. Moreover, videoconferencing showed a weak impact on information gathering. For instance, Chen and Schulz [35] reported that videoconferencing did not significantly provide informational support or instrumental support, which may improve social connectedness [35].
Ibarra et al [40] mentioned 1 study in which Skype, used for educational purposes, did not change participants' loneliness levels and another study in which Skype combined with computer training reduced loneliness levels more than Skype alone. These reviews suggest that videoconferencing is effective for maintaining established connections, such as those with family members, but is less effective for other purposes, such as education and information seeking, which may indirectly impact social connectedness. Gardiner et al [36] and Ibarra et al [40] mentioned the importance of appropriate hardware and design in videoconferencing. They reported that technical, financial, and design issues are potential barriers to the wider uptake of this technology. When used in health support, videoconferencing yields mixed results. The intervention often decreases the loneliness and social isolation of residents in care and nursing homes, but a few studies have found no difference from baseline [34,35,43]. More consistently, participants in these settings benefit from videoconferencing contact with family and friends, with beneficial effects on loneliness. Interestingly, Husebø and Storm [48] found that virtual visits by clinicians reduced the social isolation of residents in care homes, suggesting that videoconferencing can enhance the perception of independence by providing easy access to services. In general, videoconferencing appears to reduce loneliness in residential, nursing, and clinical care settings, although the specific aspects of the intervention that ensure its success have not been elucidated. Overall, 14 primary studies in this subcategory were found in the reviews; of these, 11 reported a positive impact on social isolation or loneliness. Owing to reviews such as that by Schuster and Hunter [34], with clear outcomes and the inclusion of RCTs, the GRADE strength of evidence in this subcategory was moderate to low.
The use of standardized outcome measures would have strengthened the GRADE rating. --- Mobile and Instant Messaging Among the studied reviews, only Ibarra et al [40] described studies on MIMs such as WhatsApp and Line. In 1 study, WhatsApp was used more extensively than email by relatives; however, a lack of responses can increase the perception of loneliness. Ibarra et al [40] hinted that as WhatsApp and similar applications are easy to use and allow the sharing of pictures, they exert a positive impact on social disconnectedness. However, the evidence was insufficient to draw conclusions about the impact of MIMs on social connectedness and loneliness; moreover, the few primary studies suggest that MIM explorations are only emerging at this stage. Given the lack of information found in the reviews, the GRADE strength of the evidence in this category was very low. --- Computer and Internet Training In total, 13 reviews evaluated the impact of computer and internet training in its various guises. All found a positive impact of these interventions on social connectedness and loneliness [20,24,36,39,41,43,45]. In 4 of these reviews, the authors found the reduction in loneliness to be statistically significant [39,41,45]. However, all these studies investigated group training, suggesting that the positive impacts were partly contributed by interaction with others in the group. Indeed, Damant et al [45] found a study in which group training increased the perceived support of friends and another study in which loneliness levels were reduced when email and web-based forums formed part of the training regime. Mixed results were also obtained for this category. Baker et al [19] reviewed 2 studies on ICT training, 1 finding no correlation between the training and social connection and the other concluding that ICT training can enhance social networks.
Although the authors did not elaborate on this discrepancy, the very different time frames of the 2 studies may have affected the results. Indeed, whenever mixed results were found, the training time appeared to be a contributing factor, with shorter training times more likely to yield inconclusive results [24,36,42]. Furthermore, Choi and Lee [58] reported that in most studies, older adults enjoyed using technology and significantly increased their frequency of use, suggesting that minimal training was required. Unusually among the reviews, Williams et al [38] found that computer training produced no overall effect on social isolation. Overall, ICT training showed a higher ability to reduce loneliness in longer-duration studies than in shorter-duration studies. As some reviews did not differentiate between the impacts of training and subsequent use, any assumptions would be dubious. Morris et al [39] noted a combined result, in which interactive web-based programs, discussion forums, and training mainly enhanced social connectedness; only 1 study reported inconclusive results. The effect of training was often confounded with the effect of the mechanism, making it hard to differentiate and properly evaluate whether computer training on its own had an effect. The GRADE strength of the evidence was low, emphasizing the need to assess the full potential of computer training in social connectedness. --- Telecare Telecare was among the less frequent interventions in the review studies, but when included, it appeared to reduce social isolation and loneliness [42,48,49]. Husebø and Storm [48] comprehensively investigated telecare services for older adults. After reviewing 12 primary studies covering this area, they found that virtual visits by clinicians can reduce social isolation and loneliness in older adults compared with no contact.
Other benefits included self-management of medication and self-care, which can postpone admission to long-term care or substantial in-home care. In all areas, telecare both directly and indirectly affected participants' perceived social isolation and loneliness. In 4 of the studies, older adults interacted with others experiencing similar issues; these interactions were highly valued and enabled the development of deeply empathetic connections [59]. By contrast, Damant et al [45] found no conclusive evidence of enhanced social connectedness among older adults using videoconferencing. Although none of the authors described the key features of successful telecare interventions, an emergent theme from successful primary studies was a high frequency of contact. Interventions designed for regular and frequent contact were apparently more successful than interventions delivered on demand. Overall, 34 primary studies in the analyzed reviews covered this category. The impact of telecare on social connectedness was inconclusive, and uncertainty was further increased by poor reporting of the results. Consequently, the GRADE strength of evidence in this area was very low. --- Robotics Robotics is a cutting-edge field and was mentioned in only 6 reviews. Some studies found that a pet robot provides the same level of benefit as animal-assisted therapy, which is known to reduce loneliness and social isolation [35,36,42]. Ibarra et al [40] mentioned that older adults feel embarrassed when conversing with a virtual pet, although this discomfort might have been exacerbated by audio problems and latency in messages. Choi and Lee [58] provided an excellent systematic review covering animal robots, humanoid robots, and mobile robots.
They identified a notable development trend in robotic interventions from simpler animal robots to complex, multifaceted web-based social platforms that offer emotional support and promote social participation, cognition, physical activity, nutrition, and sleep. In most of their examined studies, robotic interventions decreased loneliness and social isolation. Although no other study has looked at the impact of virtual pets on loneliness, this seems to be a promising area that needs further research, with virtual or robotic pets potentially offering a distinct advantage in social affordance over animal-assisted interventions. Khosravi et al [42] and Antunes et al [44] examined conversational agents designed for companionship and video communication, enabling older adults to connect with family members and friends and offering "talk therapy." Overall, these agents improved social interaction and reduced the loneliness of participants. With the ongoing development of pseudo-artificial intelligence technology and the advent of voice-assisted agents, such as Alexa and Siri, conversational agents are promising solutions and need to be explored further. Khosravi and Ghapanchi [43] concluded that robotic technologies increase the perception of being socially connected and hence exert a positive impact on social and emotional well-being. However, the perception of not being socially isolated differs from an actual reduction in social isolation, which depends on real personal connections. On the adapted effectiveness scale, robotic technologies scored 1.8 out of 3.0. Although these reviews indicate that social connectedness can be increased through robotics, this category is still new, and further studies on AI conversational agents and other robotic interventions are required. Therefore, the GRADE strength of evidence in this category is moderate to low.
--- Gaming According to Khosravi et al [42], video gaming devices such as the Wii, which capture natural physical activities, achieve a greater reduction in loneliness and better social interaction than typical video games. Chen and Schulz [35] and Williams et al [38] found that the Wii strengthens social interaction and reduces loneliness; however, web-based gaming was outside the scope of these studies. Choi and Lee [58] reported 3 studies in which video games and exercises were combined into an exercise game, enabling communication with others. This game reportedly reduces loneliness during exercise. However, the GRADE confidence in the effect of gaming is very low because solid evidence is lacking. --- 3D and AR Similar to robotics, 3D environments have been newly introduced as a loneliness-reduction intervention technique and are rarely reported. Khosravi et al [42] reported that most studies on 3D environments included a small number of participants, suggesting a need for further research. Although the underlying studies reported a positive impact of 3D environments on loneliness, the weak methodology and reporting of findings cast doubt on their validity. This category has been underexplored and requires further research. Current developments in 3D worlds, Facebook's foray into the metaverse, and AR developments by prominent companies such as Google and Microsoft should accelerate the design of 3D interventions for older adults. Owing to a lack of evidence, the GRADE confidence in the effects of 3D environments and AR is very low. --- Usability Impact on Effectiveness of Technology Few reviews examined the usability of technology and its impact on the effectiveness of interventions. Some reviews identified a link between usability and acceptance of technology; more accessible devices were distinctly more likely to be embraced by users than less accessible devices [19,40,48,58].
Even when usability was not a formal outcome, the studies observed participants' initial feelings of uncertainty and fear of using technology. These trepidations were overcome with time, familiarity, and sufficient training [19,40]. Ibarra et al [40] reported that touch screen computers were especially effective in reducing loneliness and social isolation, highlighting the importance of an easily accessible system or interface. Husebø and Storm [48] noted that when introducing technology to older adults, a usable and simple design that considers the likely interactions of older adults with technology is essential. Choi and Lee [58] identified 6 studies in which the use of and attachment to ICT interventions increased over time along with the average density of social networks. However, systematic reviews typically neglect the human-computer interaction components of intervention technology. Moreover, standardized measures of usability for intervention studies have not been defined [19,40]. The use and adoption of technology by older adults largely depend on the learning ability of the individual and the perceived difficulty of use. To ensure that technology can effectively reduce loneliness in older adults, these potential barriers should be examined appropriately. Overall, the reported studies showed that whether technology can reduce loneliness depends on its usability: an intervention perceived as difficult to use by older adults cannot be effective. This aspect must be investigated further to improve the success of technology interventions. Owing to a lack of evidence, the GRADE confidence in the effect of usability on the success of intervention technologies is very low. --- Summary Recommendations On the basis of the results, Table 4 summarizes the key recommendations extracted for technology interventions targeting social isolation, connectedness, and loneliness.
We have also summarized the key recommendations for study design targeting social isolation, connectedness, and loneliness in Table 5. In the extracted table, each category's GRADE strength of evidence is listed alongside its recommendations:

ICT a (GRADE strength: moderate)
• Simple technology interventions can be more successful than complex ones. Usability is a potentially important outcome.
• ICT is not recommended for increasing either the quantity or quality of communications or helping to establish new relationships. It is recommended for maintaining and enhancing existing relationships and access to services.

SNS b (GRADE strength: low)
• SNS is not recommended as an intervention for loneliness and isolation, as SNS use has often been shown to worsen loneliness.
• SNS is useful in knowledge and support acquisition scenarios, which can themselves reduce loneliness. Research shows that SNSs are generally more successful in these scenarios than in making new connections.
• Privacy is an important concern among older adults and needs to be considered when designing an intervention.
• Usability is potentially a very important theme and needs to be factored into the study design.

Videoconferencing (GRADE strength: moderate to low)
• Videoconferencing reduces loneliness by providing social support and improving existing conditions in health care-type situations.
• Financial investment needs to be considered when planning a videoconferencing intervention.

MIM c (GRADE strength: very low)
• MIM is recommended for rapid deployment as it is easy to use, and applications such as WhatsApp additionally allow the sharing of pictures, which can improve social connectedness.
• MIM can replace email, but designers must be wary because any lack of responses can increase the perception of loneliness.

Computer and internet training (GRADE strength: low)
• Longer training periods combined with shorter-duration studies are recommended, as they have been the most effective.
• For reducing loneliness, group-based training is more effective than one-to-one training.
• The study design should reflect whether the training or the use of the intervention causes the reduction in loneliness.
• RCTs d are particularly important in the study design as they determine precise effect sizes.

Telecare (GRADE strength: very low)
• The frequency of contact combined with the telecare solution influences the success of an intervention. Interventions designed for regular, frequent contact are more successful than interventions delivered on demand, for example, when a resident needs clinical attention.
• Videoconferencing groups such as group counseling can help to reduce feelings of anxiety, isolation, and loneliness and provide emotional and social support; however, designers must understand that some participants do not immediately feel at ease with others, especially in a group setting.

Robotics (GRADE strength: moderate to low)
• Pet robots can provide the same advantages as animal-assisted therapy in reducing loneliness and social isolation; study designs can mimic previous studies in this area.
• Conversational agents provide companionship through social interaction, enabling older adults to connect with family members and friends. These agents can be effective and are recommended for intervention studies.
• RCTs are recommended in the study design of robotic interactions, especially as this area is understudied.

Gaming (GRADE strength: very low)
• Video gaming devices such as the Wii, which capture natural physical activities, are recommended as they reduce loneliness and provide better social interactions than typical video games.

3D and augmented reality (GRADE strength: very low)
• Too few of the existing studies provide robust recommendations, and further longitudinal and cross-sectional RCT studies are needed in this area.

a ICT: information and communications technology. b SNS: social networking site. c MIM: mobile instant messaging. d RCT: randomized controlled trial.
--- Discussion --- Principal Findings This umbrella review found that different studies adopted a vast diversity of outcome measures and nonstandard definitions of loneliness and isolation [20,33,35,39,42]; this heterogeneity, lack of clarity, and lack of consistency across reviews influenced the interpretation of their findings. The strength of the evidence for effectiveness ranged from very low to moderate. These low ratings were attributed to the poor overall quality of evidence, study design, and outcomes. However, our umbrella review showed that despite the heterogeneous quality and diverse scope of the existing reviews, which prohibit generalizable conclusions, technology can effectively target social disconnectedness in older adults [61,62]. An umbrella review following the JBI methodology [12,26] was warranted because the types of reviews, levels of evidence, and outcomes of different reviews range widely in quality, from meta-analyses to qualitative syntheses, and the availability of a wide range of reviews allows our umbrella review to comprehensively consolidate the current state of evidence on interventions for social connectedness. Many of the review authors treated social isolation and loneliness interchangeably when selecting their intervention studies, failing to recognize that each condition is a distinct component of social disconnectedness. This confusion weakens the recognition of differing results, as loneliness is generally more resistant to interventions than social isolation.
Although some loneliness measures have been regularly adopted, the Lubben Social Connectedness Scale was applied in only 9 of the primary studies. This scale, which assesses an individual's psychological sense of belonging, might better reflect the interaction among different dimensions of social connectedness than the commonly adopted measures [39,60]. Most of the primary studies developed their own measures or used less common ones, such as the Self Anchoring Scale, Social Network Structure, Social Supportive Behavioral Scale, and Social Connectedness Index. The designs and qualities of the reviewed primary studies varied widely. Several reviews included RCTs and pilot, qualitative, and quantitative studies. In addition, the studies reviewed by Khosravi et al [42] were conducted across the health domain. The primary studies in each review typically did not overlap, indicating that the reviewers' searches did not capture all relevant studies and sometimes omitted important studies and assessments of bias risk. The findings of many underlying primary studies in the reviews were compromised by poor study designs, leading to conflicting information. For example, when reviewing the effects of computer and internet training on loneliness, Chen and Schulz [35] reached an inconclusive verdict, Choi et al [20] reported a significant impact of the intervention, and Bornemann [33] demonstrated no significant effect. Moreover, the effect size calculated by Bornemann [33] differed from the more accurate calculation by Choi et al [20], although both reviews shared 5 primary studies in their meta-analyses. The reviewers generally agreed on the effectiveness of group-based interventions: reviews examining the designs of the reviewed studies noted that group-based interventions yielded positive effects on social disconnectedness [24,36,37,40,41].
The different effects of group interventions can be attributed to the social interaction value of being in a group rather than to the actual intervention [36,37]. When the intervention was delivered over a longer duration, the effect of the group activity diminished over time, and the intervention became less effective. Interventions with a participatory, productive, and collaborative focus [36], especially educational ones [37], appeared to realize an effective group-based intervention. The reviews varied in scope, from assessments of the effectiveness of a targeted intervention, such as videoconferencing, to overviews of the studies published in the field. The inclusion criteria for the primary studies and their quality assessment depended on the tools used for rating rigor and bias; such variations cast doubt on the conclusions of these reviews and diminish confidence in their findings. Our study confirmed the lack of high-quality evidence in this field, whereas improved technology interventions for older adults are increasingly demanded by both policymakers and health professionals. Although the existing guidelines can encourage standardization of systematic reviews, these guidelines were largely ignored by researchers; accordingly, the strength of the reviews is diminished, which in turn led to generally low GRADE quality-of-evidence scores. Standardization of systematic review reporting is expected to strengthen confidence in review conclusions.
Unlike their younger counterparts, older adults often lack the skills, functional capacity, and accessibility to adopt digital technology [63], which has led to the so-called "digital divide" among populations. However, this divide is not determined by age alone: in resource-restricted settings, gender differences, economic status, cultural practices, and educational qualifications also contribute [63], and attention to these factors can play an important role in reducing the existing digital divide between younger and older adults. Most of the reviews did not adequately consider these differences, presuming a general dearth of resources for older adults. Also important are the usability and design of the intervention, which were notably absent in the primary studies. The individual circumstances of older adults may influence the success of interventions. When usability was examined, it was done without standardized usability measures; nonetheless, usability did influence the effectiveness of the intervention, so further exploration of this area is vitally important. To improve the quality of results, interventions should be tailored to match the specific needs of older adults, and sufficient training should be provided for using the interventions. This tailoring requires the involvement or participation of participants in training in a variety of formats [24,41]. As usability issues can reduce the effectiveness and uptake of an intervention, neglecting usability as an outcome measure reduces confidence in a holistic discussion of an intervention's effectiveness. Thus, the potential impact of technology on social connectedness in older adults requires further investigation. --- Comparison With Prior Work Our umbrella review is one of the few works that have examined technology interventions for social connectedness and loneliness following a well-established systematic approach such as the JBI umbrella review method.
In examining other works, we came across reviews that focused on interventions generally [64-67], corresponding to a more expansive inclusion criterion, which limits substantive findings about specific interventions and instead presents a broader scope of general findings. Our umbrella review, on the other hand, by focusing specifically on technology interventions, extends the understanding of specific technology interventions by reporting and rating the evidence. By doing so, we were able to identify current evidence gaps related to the understanding of the types of technologies, study design, and their impact on social connectedness, loneliness, and isolation. We were also able to identify weaknesses in the reviews and areas for future research. Weak methodologies have limited the ability of reviews to establish conclusive remarks on their effectiveness [35,42]. The many different outcome measures have greatly limited comparisons, which affected the interpretation of the results. --- Strengths and Limitations The present review may have been biased by accepting only English-language publications. However, many of the shortcomings and limitations of this umbrella review stem from the underlying problems of the primary papers included in the reviews. Among the common shortcomings were small-scale implementations with small sample sizes, low levels of evidence, and short periods of assessment. Another recurring limitation was the inconsistent definition of social concepts. Social concepts such as loneliness, social isolation, and social connectedness were formally defined, but the authors did not use these definitions consistently; instead, they were often used interchangeably, inherently confounding measurements of these outcomes. The reviews were generally heterogeneous in focus and discussed various interventions and syntheses of outcomes.
Accordingly, the present review interchanges the terms social connectedness and social disconnectedness to describe combinations of singular aspects such as social isolation and loneliness. Nevertheless, methodological quality was the greatest limitation. Finally, the absence of gray literature in the reviews may have increased publication bias and led to the exclusion of evidence for interventions that are not typically indexed in bibliographic databases. Future systematic reviews should consider including gray literature. The methodological limitations of the reviewed studies impaired the internal validity and usefulness of the reviews for technical and policy decision-making, as highlighted by the reviewers [20,24]. The reviews reported on diverse methodologies, including the use of nonstandardized outcome measures, which broaden the perspective but risk biasing the conclusions. Furthermore, as interventions vary widely in nature, direct comparisons are difficult, and the definitions of technology interventions are rather narrow in some studies [39]. The reviewed quantitative studies collected their data with questionnaires using scales developed for the study purpose; the reliability and validity of these nonstandardized scales are difficult to evaluate. Most reviews pointed out the suboptimal methodological quality of studies in this field, particularly the scarcity of RCTs and the dominance of quasi-experimental studies, which challenge the delivery of robust conclusions. Therefore, the results of this review should be interpreted with caution. --- Suggestions for Future Research and Policy Implications Various technology interventions in different formats offer many ways to engage older adults. However, usability was rarely discussed in the reviews and was not assessed as an outcome measure. Although the existing guidelines encourage the standardization of systematic reviews, they have not been followed with the required rigor.
Equally, the underlying primary studies of the reviews failed to address causation in a rigorous study design, and their heterogeneity limited their generalizability. There appears to be a need for more studies on the multidimensional impact of technology on social connectedness, along with the assessment of other measures that may interact with technology use. Robotics is a relatively new technology that has emerged as promising, but there are very few studies in this domain. Research on mobile technology interventions for social isolation is also encouraged, as mobile phone technology provides opportunities for increasing the uptake of technology interventions targeting loneliness in older adults. Our grading of the evidence revealed that its strength was generally low to very low, indicating that the efficacy of the interventions is unclear and that more rigorous research is needed. Our review provides insights into strategies to reduce loneliness and isolation in older adults using technology interventions, with implications for future research, policy, and practice. Attention to social connections needs to be incorporated into existing preventive efforts for chronic diseases in older adults. Chronic illnesses develop slowly over decades. Since social connectedness is known to affect multiple mechanistic pathways in both the development and progression of disease, it warrants attention in primary, secondary, and tertiary prevention efforts. Given the lower economic costs of technology interventions for individuals, families, employers, and the broader health care system, we urge health care and health policy professionals to prioritize the investigation of technology interventions for social connections in prevention efforts. --- Conclusions This umbrella review consolidates the state-of-the-art knowledge on the types of technology interventions that influence social connectedness in older adults and their effectiveness.
The data were collected from the last 2 decades. Technology purportedly enables long-distance interactions, allowing older adults to become socially connected, obtain support, expand their social networks, and strengthen their existing ties. Some important themes that would improve the effectiveness of technology interventions for older adults emerged from the literature, namely group interventions, longer training within shorter-duration study programs, the use of general ICT, and videoconferencing. These implementations are more effective for maintaining existing connections than for building new ones. Certain technologies, such as robotics, AI-based conversational agents, and MIMs, show promising potential but have been underexplored. All of these mechanisms must be studied hand in hand to gain a complete understanding of these processes. Finally, in our GRADE evaluation, most of the evidence was rated as moderate low to very low, reflecting methodological issues, the small number of RCTs, diverse outcome measures and definitions, and mixed results. Such low scores highlight the need for high-quality research in this area. --- PROSPERO CRD42022363475; https://tinyurl.com/mdd6zds --- Conflicts of Interest None declared. --- Multimedia Appendix 1 Search terms used in databases. --- This article is distributed under a license that permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Aging, is properly cited. The complete bibliographic information, a link to the original publication on https://aging.jmir.org, as well as this copyright and license information must be included.
The global population of older adults (aged >60 years) is expected to triple to 2 billion by 2050, with proportionate rises in the number of older adults affected by loneliness and social isolation (ie, reduced social connectedness). Rapid technological deployment and social changes have increased the availability of technological devices, creating new opportunities for older adults. Objective: This study aimed to identify, synthesize, and critically appraise the effectiveness of technology interventions for improving social connectedness in older adults by assessing the quality of reviews, common observations, and derivable themes. Methods: Following the guidelines of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), 4 databases (PsycINFO, PubMed, Embase, and MEDLINE) were searched between February 2020 and March 2022. We identified reviews involving adults aged ≥50 years in community and residential settings that reported outcomes related to the impact of technologies on social disconnectedness, with inclusion criteria based on the population, intervention, context, outcomes, and study schema; review-type articles (systematic reviews, meta-analyses, and integrative and scoping reviews) covering digital interventions were included. Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) was used to measure the strength of outcome recommendations, including the risk of bias. The reviews covered 326 primary studies with 79,538 participants. Findings were extracted, synthesized, and organized according to emerging themes. Results: Overall, 972 publications met the initial search criteria, and 24 met our inclusion criteria. The Revised Assessment of Multiple Systematic Reviews was used to assess the quality of the analysis; otherwise eligible reviews (n=3) were excluded because of their low Revised Assessment of Multiple Systematic Reviews scores (<22).
The included reviews were dedicated to information and communications technology (ICT; 11/24, 46%), videoconferencing (4/24, 17%), computer or internet training (3/24, 12%), telecare (2/24, 8%), social networking sites (2/24, 8%), and robotics (2/24, 8%). Although technology was found to improve social connectedness, its effectiveness depended on study design and was improved by shorter study durations, longer training times, and the facilitation of existing relationships. ICT and videoconferencing showed the best results, followed by computer training. Social networking sites achieved mixed results. Robotics and augmented reality showed promising results but lacked sufficient data for informed conclusions. The overall quality of the studies based on GRADE was moderate low to very low. Conclusions: Technology interventions can improve social connectedness in older adults. The specific effectiveness rates favor ICT and videoconferencing, but with limited evidence, as indicated by low GRADE ratings. Future intervention and study design guidelines should carefully assess the methodological quality of studies and the overall certainty of specific outcome measures. The lack of randomized controlled trials in underlying primary studies (<28%) and suboptimal methodologies limited our findings. Robotics and augmented or virtual reality warrant further research. Low GRADE scores highlight the need for high-quality research in these areas.
Introduction The United States is in the midst of a housing affordability crisis. Because income growth has not kept pace with rising housing costs, fewer and fewer families are able to afford housing [1]. Owing to structural racism in labor and housing markets, Black and Latinx households are disproportionately affected, with single mothers with children facing the highest risk [2-6]. In 2017, a majority of low-income families with children were forced to spend over half of their household income on housing [7]. This affordable housing shortage coincides with increases in evictions and homelessness, two severe forms of housing insecurity [8,9]. Housing insecurity among pregnant women takes a toll on maternal mental health and the health of the fetus. Depression, anxiety, and stress increase a woman's likelihood of delivering preterm or giving birth to a low birth weight infant [10-12]. In a study of low-income, urban mothers, moving two or more times in the past two years was associated with 1.7 times the odds of depression and 2.5 times the odds of generalized anxiety disorder relative to mothers who moved less [13]. In the same population of mothers, eviction was associated with a 21% increase in the probability of depression and a 19% increase in self-reported parenting stress [14]. Subsequently, a number of studies have found housing insecurity and homelessness during pregnancy to be associated with pregnancy complications [15,16], preterm birth, and low birth weight [17-19]. Recent studies show that pregnant women living in neighborhoods rendered unstable by evictions and tax foreclosures may be more likely to deliver very low birth weight or preterm infants, particularly if the women have low educational attainment [20,21]. Preterm and low birth weight infants are more likely to require costly intensive care following delivery and may face challenges with physical, cognitive, and social/emotional development later in life [22].
Despite this growing body of evidence, there are important gaps in our knowledge about how housing insecurity in the U.S. may influence the health of infants at birth and into childhood. Past studies have not examined the association between housing insecurity and costly birth outcomes such as stays in neonatal intensive care units (NICUs) and extended hospital stays after delivery. Moreover, studies to date have not examined associations between prenatal housing insecurity and children's health beyond birth. Uncovering the relationship of prenatal housing insecurity to birth-related healthcare utilization and later infant health and development outcomes will add to our understanding of the costs associated with the housing affordability crisis, both human and economic. To fill these gaps, we conduct analyses testing a hypothesized link between severe housing insecurity during pregnancy and adverse health outcomes measured at birth and during infancy in a cohort of low-income, urban mothers and infants. Specific outcomes include low birth weight or preterm birth, NICU or stepdown facility stays, and extended hospital stays after delivery, as well as parent-reported health and temperament at age one. Our findings allow stakeholders to gauge the population health implications of reducing severe housing insecurity among low-income, pregnant women in U.S. cities. Because evictions and adverse birth outcomes are both concentrated in communities of color, these results have important implications for health equity and social justice. --- Methods --- Study Population We utilized data from the Fragile Families and Child Wellbeing Study (FFCWS). The study is a birth cohort of nearly 5000 children born to "fragile families" in 20 U.S. cities with populations greater than 200,000 between 1998 and 2000 and followed for 15+ years. The study methods are described in detail in previous publications [23].
Within study cities, live births were randomly selected for participation within strata of marital vs. non-marital births. To be eligible, infants needed two living, English- or Spanish-speaking biological parents. At birth, parents were surveyed for demographic and socioeconomic information, and the study team systematically abstracted information from medical records from the pregnancy and birth, provided that mothers gave consent and hospitals authorized access. The Princeton University Institutional Review Board approved FFCWS study protocols, and all participants provided informed consent. Our study population comprised infants for whom maternal medical records were available, who had available information on length of hospital stay and method of delivery, and for whom infant outcomes were available at the age one study visit. We excluded infants from multiple gestation pregnancies and those with congenital chromosomal and central nervous system abnormalities because of systematically different birth and infant outcomes in these groups. The Johns Hopkins Bloomberg School of Public Health Institutional Review Board reviewed protocols for this secondary data analysis and determined the project to be non-Human Subjects Research. --- Measures Severe housing insecurity during pregnancy was the main exposure of interest. The binary indicator was abstracted retrospectively from the mother's medical records and indicates whether there was any mention of "homelessness or threatened eviction" during pregnancy. Because clinicians may have limited knowledge of patients' housing situations outside of crisis situations, this definition likely fails to capture less severe forms of housing insecurity. We measured three adverse birth outcomes abstracted from medical records. The first outcome was a composite of low birth weight and/or preterm birth [24].
The second outcome was an indicator of whether the infant stayed in a NICU or intermediate/stepdown facility for any length of time after birth. The third outcome identified infants with extended hospitalization after delivery, defined as greater than two days for vaginally delivered infants or greater than four days for infants delivered by cesarean section [25]. At age one, we measured two infant health outcomes. The first was a measure of infant health, as reported by the infant's primary caregiver, defined by FFCWS as the biologic parent or adult who lives with the index child at least half of the time, defaulting to the mother if the infant lives with both biologic parents. The caregiver was asked to rate their infant's general health as excellent, very good, good, fair, or poor, which we dichotomized to fair or poor health versus excellent, very good, or good. We also constructed a measure of temperament based on three items from the emotionality subscale of the Emotionality, Activity, and Sociability Temperament Survey for Children: Parental Ratings [26]. Items included in the scale are "he/she often fusses and cries," "he/she gets upset easily," and "he/she reacts strongly when upset," each of which parents rated from one to five. Temperament score at age one is significantly correlated with externalizing behavior at age five among FFCWS participants [27]. Consistent with previous FFCWS analyses using this temperament scale [27-29], factor analysis indicated that the three items represented a single factor with moderate internal consistency. We summed standardized factor loadings for each of the items to generate a weighted score for temperament with a mean of zero and a standard deviation of approximately one. Infants with temperament scores in the topmost quintile were classified as having a poor temperament.
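The weighted-score construction described above can be sketched in a few lines: standardize each of the three items, weight each by its factor loading, sum, and flag the topmost quintile. This is a simplified sketch, not the study's code; the loadings below are hypothetical placeholders (the paper does not report them here), and the summed score is not rescaled to exactly unit variance.

```python
import statistics

def temperament_scores(items, loadings):
    """Weighted temperament score: z-score each of the three item ratings
    (1-5 scales), weight each z-score by a hypothetical factor loading,
    and sum. `items` is a list of (fusses, upset_easily, reacts_strongly)."""
    cols = list(zip(*items))
    means = [statistics.mean(c) for c in cols]
    sds = [statistics.pstdev(c) for c in cols]
    scores = []
    for row in items:
        z = [(x - m) / s for x, m, s in zip(row, means, sds)]
        scores.append(sum(w * zi for w, zi in zip(loadings, z)))
    return scores

def poor_temperament_flags(scores):
    """Flag infants whose score falls in the topmost quintile."""
    cutoff = sorted(scores)[int(0.8 * len(scores))]  # 80th percentile rank
    return [s >= cutoff for s in scores]
```

Because each item is z-scored before weighting, the summed score has a mean of zero by construction, matching the centering described in the text.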
We included several variables as controls in our analytic models out of concern that they may confound the relationship between severe housing insecurity during pregnancy and birth and infant health outcomes. These variables were largely maternal factors: age group, race/ethnicity, poverty level, educational attainment, marital status, pre-pregnancy mental health status, substance use during pregnancy, and a composite indicator of preexisting conditions (hypertension, renal disease, diabetes, lung disease, heart disease, and/or anemia). We also included an indicator for infant sex. --- Statistical Analysis We first compared the distributions of maternal and infant factors as well as birth and infant outcomes by severe housing insecurity during pregnancy, testing for differences with chi-square statistics. For each birth and infant outcome, we constructed a separate regression model, with housing insecurity as the main exposure. We used Poisson regression with robust variance to approximate log binomial regression models and estimate crude and adjusted risk ratios and 95% confidence intervals for each outcome. We used generalized estimating equations to estimate population-averaged associations, accounting for clustering by the infant's city of birth. For all models of outcomes other than low birth weight and/or preterm birth, we included an indicator for low birth weight and/or prematurity to gauge the degree to which severe housing insecurity acts on outcomes directly vs. indirectly through low birth weight or preterm birth.
We then estimated the population attributable fraction (PAF) using the formula derived by Miettinen [30] and recommended by Rockhill to produce internally valid estimates of PAF when confounding exists [31]. The formula is as follows: PAF = pd × (aRR − 1) / aRR, where "pd" indicates the proportion of cases exposed to severe housing insecurity and "aRR" indicates the adjusted risk ratio measuring the association between severe housing insecurity and a given outcome. If we assume that the associations are causal, the PAF can be interpreted as the proportion of outcomes that could be avoided by eliminating severe housing insecurity during pregnancy in the study population. We conducted all analyses using Stata version 15.1, and a p-value <0.05 was used to indicate statistical significance. Data were analyzed from 2019-2020.
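Miettinen's case-based PAF formula, PAF = pd × (aRR − 1) / aRR, can be expressed as a small helper. This is an illustrative sketch (the function name and example inputs are ours, not the study's code):

```python
def population_attributable_fraction(pd_exposed_cases, adjusted_rr):
    """Miettinen's case-based PAF: pd * (aRR - 1) / aRR, where
    pd is the proportion of cases exposed and aRR the adjusted
    risk ratio for the exposure-outcome association."""
    if adjusted_rr <= 0:
        raise ValueError("risk ratio must be positive")
    return pd_exposed_cases * (adjusted_rr - 1.0) / adjusted_rr
```

For example, with an adjusted risk ratio of 1.73 (as reported for low birth weight and/or preterm birth) and a hypothetical exposed-case proportion of 0.04, the PAF would be about 1.7% of cases; a risk ratio of exactly 1 yields a PAF of 0, as expected for a null association.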
--- Results Out of the 4898 mother-infant dyads enrolled in FFCWS, 3428 were included in analyses of birth outcomes, while 3035 were included in analyses of infant outcomes. Figure 2 is a flow diagram detailing the specification of the two study populations. Medical records were unavailable for 25% of the source population, chiefly for hospital-level reasons. We compared characteristics of the source population, the birth outcomes study population, and the infant outcomes study population and found no meaningful differences. Among the 3428 mother-infant dyads included in the birth outcome analyses, 1.6% had a record of severe housing insecurity during pregnancy. Mothers with severe housing insecurity had similar age distributions to other mothers in the sample, and their infants were equally likely to be female.
On all other factors, the two groups diverged significantly. The crude risk of each outcome was high in the comparison group and considerably higher among infants born to severely housing insecure mothers. Fourteen percent of infants born to mothers with no reported housing insecurity were born preterm or with low birth weight, 16% stayed in NICUs or stepdown facilities, and 17% had extended hospital stays after delivery. Among infants born to severely housing insecure mothers, these proportions were much higher, at 38, 40, and 42%, respectively. Whereas 3% of infants in the comparison group had fair or poor health at age 1 and 21% had a poor temperament, 10 and 44% of infants in the housing insecure group experienced these adverse outcomes, respectively. Crude risk estimates are depicted in Figure 3A. In adjusted models, women experiencing severe housing insecurity during pregnancy had 1.73 times the risk of low birth weight and/or preterm birth compared to women who did not experience severe housing insecurity. Severe housing insecurity during pregnancy was also associated with 1.64 times the risk of an infant staying in the NICU or a stepdown facility and 1.66 times the risk of an extended hospitalization following delivery. The associations with NICU/stepdown stay and extended hospitalization were attenuated when we added low birth weight or preterm birth to the model, although the risk ratio for extended hospitalization remained statistically significant. Infants born to mothers who experienced severe housing insecurity during pregnancy had 2.62 times the risk of fair or poor health at age one compared to infants born to women with more housing security during pregnancy, although the results were not statistically significant. These infants were also 1.52 times as likely as others to have a poor temperament score.
When we included an indicator for low birth weight and/or preterm birth in models, the risk ratio for fair or poor health was attenuated slightly, whereas the risk ratio for temperament remained largely unchanged.
If these associations are causal, eliminating severe housing insecurity during pregnancy may result in the following reductions in negative birth and infant outcomes at the population level: PAF = 1.8% of low birth weight or preterm birth; PAF = 1.6% of NICU or stepdown facility stays; PAF = 1.6% of extended hospital stays after delivery; PAF = 2.7% of fair or poor infant health; and PAF = 0.9% of low infant temperament scores in the study population. Ninety-five percent confidence intervals for PAF estimates related to infant outcomes overlapped zero, indicating that the results were not statistically significant. --- Discussion We tested whether severe housing insecurity during pregnancy is linked to adverse child health outcomes measured at birth and age one. In this sample of disproportionately unmarried, low-income mother-child dyads from 20 U.S. cities, there was a 73% higher risk of low birth weight or preterm birth among infants born to mothers who experienced severe housing insecurity during pregnancy. We also found statistically significant increases in the risk of NICU or stepdown stays and extended hospital stays after delivery among these dyads. At one year of age, infants of women who experienced severe housing insecurity while pregnant were 2.6 times more likely than others to have fair or poor health and 1.5 times more likely than others to have a poor temperament score, although these differences were not statistically significant. Compared to estimates from primary models, estimates from models including birth outcomes were, in general, attenuated toward the null. This result is consistent with our hypothesis that some proportion of the associations we see between severe housing insecurity and healthcare and infant outcomes was related to the infant's low birth weight or preterm status.
Population attributable fraction estimates suggest that the United States could avoid approximately 1.8% of low birth weight or preterm birth, 1.6% of NICU or stepdown facility stays, 1.6% of extended hospital stays after delivery, 2.7% of fair or poor infant health, and 0.9% of poor infant temperament by eliminating severe housing insecurity among low-income, pregnant women in its large cities. While these percentages may seem modest, they represent significant reductions in outcomes that are both common and costly in the U.S., particularly among infants born to disadvantaged women. Nearly 700,000 infants are born to low socioeconomic status, urban mothers each year in the U.S., and around fourteen percent of them have low birth weight and/or are preterm [32]. Per our results, if we eliminated severe housing insecurity in this disadvantaged group of mothers, we would expect to see approximately 2000 more infants born full term and with normal birth weight and, consequently, better health prospects and lower healthcare costs. A 2007 study found that the average preterm infant stays in the NICU for 17.6 days, incurring nearly USD 31,000 in costs [33]. A back-of-the-envelope calculation indicates that these seemingly modest reductions in adverse birth outcomes could translate to USD 84 million in annual savings to the healthcare system and to lower-income families. These savings pertain to birth costs alone, not accounting for savings related to the mother's health or the infant's improved health as they grow into childhood, and so are an underestimate of savings across the life course. In fact, research from our group suggests a bi-directional relationship between adverse birth outcomes and housing insecurity, with adverse birth outcomes predicting future eviction risk [34]. As such, promoting housing security during pregnancy may have a multiplicative effect, protecting against future housing insecurity.
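The back-of-the-envelope calculation can be sketched as follows, plugging in the figures quoted above (700,000 annual births, a 14% adverse birth outcome rate, this study's 1.8% PAF, and the 2007 estimate of USD 31,000 per NICU stay). The authors' exact rounding steps are not given, so the totals below reproduce the order of magnitude rather than the precise USD 84 million figure.

```python
# Back-of-the-envelope sketch using the figures quoted in the text; the
# authors' intermediate rounding is unknown, so totals are illustrative.
annual_births = 700_000   # infants born to low-SES urban mothers per year
adverse_rate = 0.14       # share born low birth weight and/or preterm
paf_lbw_ptb = 0.018       # PAF for severe housing insecurity (this study)
cost_per_stay = 31_000    # average NICU cost per preterm infant, USD

averted = annual_births * adverse_rate * paf_lbw_ptb   # ~1,800 births/year
savings = averted * cost_per_stay                      # rough USD savings

print(f"averted adverse births per year: ~{averted:,.0f}")
print(f"rough annual savings: ~USD {savings / 1e6:.0f} million")
```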
Future research could use a more rigorous approach to derive a comprehensive estimate of costs stemming from housing insecurity-associated adverse birth outcomes. Such an analysis was beyond the scope of this article. It should be noted that the women experiencing severe housing insecurity in our sample were likely exposed to multiple social determinants of health concurrently. For example, evidence suggests that people experiencing housing insecurity also struggle to access healthcare [35] and adequate nutrition [36], while also being exposed to higher levels of community violence [37] and environmental toxins [38]. For the purposes of this study, we controlled for factors that we believe to be "upstream" of housing insecurity, including multiple indicators of socioeconomic status and race/ethnicity as a proxy for experienced racism, but did not delve into co-occurring social determinants of health or specific biologic or social mechanisms through which maternal housing insecurity may lead to adverse birth outcomes. We believe our estimates to be conservative due to two main data limitations. First, two covariates included in multivariable models could be mediators rather than confounders of the associations due to the time at which they were measured. Maternal poverty and substance use were first measured at birth and during pregnancy, respectively, and thus could represent downstream effects of housing insecurity rather than causes. By including these potential mediators in statistical models, risk ratios estimating associations between severe housing insecurity and outcomes may be biased toward the null, resulting in conservative estimates. Second, measurement error in the exposure could also result in conservative estimates. Clinician notes in the mother's medical record regarding homelessness or threatened eviction were likely to capture only the most extreme cases of housing insecurity (street homelessness, for example).
We expect that a considerable number of women currently considered housing secure in our analysis in fact experienced some degree of housing insecurity during pregnancy. We hypothesize that this misclassification would lead to an excess risk of negative birth and infant outcomes in the comparison group, attenuating estimates. This issue may be compounded in our calculation of population attributable fraction, given that PAFs are a function not only of effect size, but also the prevalence of the exposure, which, again, is likely underestimated in our study population. Housing insecurity has grown dramatically since these data were collected as a consequence of the housing affordability crisis [1]. Currently, in the context of the COVID-19 pandemic and financial crisis, 40% of U.S. renters are struggling to pay rent [39] and 30-40 million people are at risk of eviction [40]. With so many renters facing displacement, the population-level impact of severe housing insecurity on birth and infant outcomes may be larger and more relevant today than ever before. --- Conclusions Even our conservative estimates suggest that severe housing insecurity during pregnancy contributes to adverse birth and infant outcomes. Because evictions and adverse birth outcomes disproportionately affect Black and Latinx mothers and infants, these results also suggest an opportunity to narrow disparities in birth outcomes. Given the current housing affordability crisis in the U.S., these results warrant attention from clinicians and policymakers. Clinically, our results suggest that prenatal screening and referrals to prevent maternal evictions and homelessness could improve birth and infant outcomes. Thinking more upstream, the results underscore a need for policies to lessen the burden of housing insecurity among pregnant women. Across the country, city and state governments are considering initiatives to increase the stock of affordable housing and prevent evictions.
Our results suggest that pregnant women and their infants stand to benefit greatly from these policy interventions. Author Contributions: K.M.L.: conceptualization, methodology, validation, formal analysis, data curation, writing-original draft, writing-review and editing, funding acquisition. G.L.S.: methodology, validation, writing-review and editing. C.E.P.: writing-review and editing, conceptualization. M.M.B.: writing-review and editing, conceptualization. K.J.E.: writing-review and editing, investigation. J.M.J.: conceptualization, methodology, writing-review and editing, supervision, funding acquisition. K.N.A.: conceptualization, methodology, writing-review and editing, supervision. All authors have read and agreed to the published version of the manuscript. --- Funding: The Fragile Families and Child Wellbeing Study was supported by the National Institute of Child Health and Human Development under award numbers R01HD36916, R01HD39135, and R01HD40421, as well as a consortium of private foundations. K.M. Leifheit and G.L. Schwartz attended the 2018 Fragile Families Summer Data Workshop at Columbia University, supported by NICHD training workshop grant . K.M. Leifheit was supported by an NICHD Pre-Doctoral Fellowship and an Agency for Healthcare Research and Quality Post-Doctoral Fellowship . C.E. Pollack received grants from NICHD and from the National Institute for Environmental Health Sciences . The funders had no role in study design, collection, analysis, and interpretation of data, writing this report, or the decision to submit the report for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies. --- Conflicts of Interest: Schwartz, Edin, Black, Jennings, and Althoff declare no conflicts of interest. Pollack owns stock in Gilead Pharmaceuticals. The work detailed here does not evaluate any specific drug or intervention produced by Gilead. 
Pollack is an unpaid member of Enterprise Community Partners' Health Advisory Council and was a paid consultant to the Open Communities Alliance. Pollack works part time on a temporary assignment with the Department of Housing and Urban Development , assisting the department on housing and health issues. The findings and conclusions in this article are those of the authors and do not necessarily represent those of HUD or other government agencies. Leifheit has provided expert testimony to legislative bodies regarding potential public health effects of eviction. Leifheit, Pollack, and Schwartz are listed as amici curiae in a public health amici curiae brief in support of the Centers for Disease Control and Prevention's eviction moratorium. --- Appendix A
Introduction: Housing insecurity is increasingly commonplace among disadvantaged women and children. We measured the individual- and population-level impact of severe housing insecurity during pregnancy on adverse birth and infant outcomes. Methods: We analyzed data from 3428 mother-infant dyads enrolled in the Fragile Families and Child Wellbeing Study, a prospective cohort study representing births in 20 large U.S. cities from 1998 to 2000. Severe housing insecurity was defined as threatened eviction or homelessness during pregnancy. Outcomes included low birth weight and/or preterm birth, admission to a neonatal intensive care unit (NICU) or stepdown facility, extended hospitalization after delivery, and infant health and temperament. We estimated exposure-outcome associations with risk ratios adjusted for pre-pregnancy maternal sociodemographic and health factors and calculated a population attributable fraction (PAF) of outcomes attributable to severe housing insecurity. Results: We found statistically significant associations between severe housing insecurity during pregnancy and low birth weight and/or preterm birth (risk ratio [RR] 1.73, 95% confidence interval [CI] 1.28, 2.32), NICU or stepdown stay (RR 1.64, CI 1.17, 2.31), and extended hospitalization (RR 1.66, CI 1.28, 2.16). Associations between housing insecurity and infant fair or poor health (RR 2.62, CI 0.91, 7.48) and poor temperament (RR 1.52, CI 0.98, 2.34) were not statistically significant. PAF estimates ranged from 0.9-2.7%, suggesting that up to three percent of adverse birth and infant outcomes could be avoided by eliminating severe housing insecurity among low-income, pregnant women in US cities. Conclusions: Results suggest that housing insecurity during pregnancy shapes neonatal and infant health in disadvantaged urban families.
Introduction Previous studies have demonstrated the presence of geographic health inequalities between regions, between countries, and within countries [1,2]. The bulk of studies on social and geographic inequalities in health have derived primarily from the United States and western European countries [3][4][5][6][7][8]. Meanwhile, although Japan has the lowest mortality in the developed world, the magnitude and patterning of health inequalities within the nation remain less understood. Recently, Suzuki et al [9] examined the time-trends in social and geographic inequalities in all-cause premature adult mortality in Japan, which suggested that spatial health disparities have widened in both sexes during the decades following the collapse of the asset bubble in the early 1990s. According to this study, geographic inequalities across 47 prefectures have increased since 1995 even after adjusting for individual age and occupation in each prefecture, providing suggestive evidence of common ecologic effects of the place where people live [10]. In the present study, we further examine the emerging geographic inequalities in all-cause adult mortality across prefectures in both sexes, in terms of compositional effects and contextual effects [11]. In so doing, we sought to establish whether or not the pattern of geographic inequalities in the nation is largely reflective of the variation in the composition of the areas. We hypothesized that the relative contribution of composition and context in each prefecture could substantially vary across areas, and thus the findings of the present study are expected to be very useful in providing clearer implications to mitigate the emerging geographic inequalities across prefectures. In line with most literature on area effects on health [12], we used sex, age, and occupation as measures of composition, whereas we used prefecture-level socioeconomic status as a measure of context.
To provide a comprehensive perspective, the data of this study are census based and cover the whole of Japan. --- Methods --- Vital statistics and census data Data on deaths were obtained from the Report of Vital Statistics: Occupational and Industrial Aspects [13], which has been compiled by the Ministry of Health, Labour and Welfare every five fiscal years since 1970, coinciding with the Population Census. The latest year for which data are available is 2005. In the death notifications, respondents are asked to fill in the decedent's occupation at the time of death, and one of the following persons is obliged to submit the notification: relatives who lived with the decedent, other housemates, landlord, estate owner, land/house agent, or relatives who do not live with the decedent [14]. In fiscal year 2005, occupation at the time of death was recorded for each decedent following the fourth revision of the Japan Standard Occupational Classification [15], which includes the following 11 groups: specialist and technical workers, administrative and managerial workers, clerical workers, sales workers, service workers, security workers, agriculture, forestry and fishery workers, transport and communication workers, production process and related workers, workers not classifiable by occupation, and non-employed. Note that the group ''non-employed'' includes the unemployed as well as the non-labor force. Although the Census distinguishes the unemployed from home-makers, the vital records combine these categories as ''non-employed.'' Denominator data for the calculation of mortality rates were obtained from the Population Census, which has been conducted by the Ministry of Internal Affairs and Communications every five years since 1920 [16]. The 2005 Population Census was taken as of October 1, 2005.
In the Census questionnaire, occupation was assessed by the following question [16]: ''Description of work: Describe in detail the duties you are assigned to perform.'' The questionnaires are delivered to every household, and one person in each household completes it on behalf of the household members. We used ''production process and related workers'' as the referent category because they were the largest and the second largest occupational category in men and women, respectively, excluding the non-employed. We restricted the analysis to those aged 25 or older to exclude students. Further, death records missing information on age or residence were excluded from the analysis, along with records with populations of 0 as well as cells with proportions exceeding 1. As a result, the total number of decedents was 524,785 men and 455,863 women, in 47 prefectures. --- Measures of prefecture-level socioeconomic status We derived prefecture-level socioeconomic status variables from the National Survey of Family Income and Expenditure [17], which has been conducted by the Ministry of Internal Affairs and Communications every five years since 1959. We obtained the following three variables for each prefecture from the 2004 Survey and divided them into tertiles: Gini coefficient for yearly income, average yearly income, and average savings [17]. These variables were calculated among two-or-more-person households. Although household income and savings may follow skewed distributions, median income or savings were not available. --- Statistical analysis The data had a two-level structure of 5,687 cells for men and 5,617 cells for women at level 1, nested within 47 prefectures at level 2. Each prefecture had a maximum of 121 cells, and the maximum number of cells in the present data set was 5,687 for each sex.
We thus conducted gender-specific two-level logistic regression analyses to model mortality risk as a function of age, occupation, and residence in 47 prefectures. We used multilevel statistical procedures because of their ability to model complex variance structures at multiple levels [18]. The lowest unit of analysis was ''cells,'' and our models are structurally identical to models with individuals at level 1 [19]. The response variable, the proportion of deaths in each cell, was modeled with allowances made for the varying denominator in each cell. We estimated a multilevel binomial logit link model, which consisted of a fixed part and a random part. Based on the results of the fixed part, we can estimate the relations between occupation and mortality, conditional on individual age variation, while the results of the random part allow estimation of prefecture-level variations in the risk of mortality. The prefecture-level variance was used as an estimate of geographic inequalities in mortality. The importance of measures of between-area variation has been emphasized for a better understanding of the sociospatial patterning of health [12,[20][21][22]. To fit the models, we used Bayesian estimation procedures as implemented via Markov chain Monte Carlo methods using MLwiN 2.25 [23,24]. We used default diffuse priors for all the parameters, meaning that we did not favor a priori any particular values of the estimates [24]. We obtained maximum-likelihood estimates for starting values of the distribution, then ran 500 simulations as discarded burn-in, then 50,000 further simulations to obtain the distribution of interest. Based on the mean as well as the 2.5th and 97.5th percentiles of the posterior distributions, odds ratios and 95% credible intervals for all-cause mortality were obtained for each variable. We used the Deviance Information Criterion to compare the goodness-of-fit of each model [24].
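To make the random part described above concrete, the sketch below simulates prefecture-level residuals under a null-model-style random intercept and ranks prefectures by odds ratios against the grand mean. The intercept and variance values are invented for illustration, not estimates from this analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Null-model structure: logit(p_ij) = b0 + u_0j, with u_0j ~ N(0, sigma_u0^2).
# b0 and sigma_u0 below are illustrative assumptions, not this study's estimates.
n_pref = 47
b0 = -5.0                                # grand-mean log-odds of death
sigma_u0 = 0.15                          # prefecture-level SD on the logit scale
u0 = rng.normal(0.0, sigma_u0, n_pref)   # prefecture-level residuals

def expit(x):
    """Inverse logit: convert log-odds to a probability."""
    return 1.0 / (1.0 + np.exp(-x))

# Cell-level mortality probability implied by the model in each prefecture
p = expit(b0 + u0)

# Prefecture-specific ORs, with the reference being the grand mean (u = 0)
or_pref = np.exp(u0)

# Rank prefectures: a lower estimate of the odds for mortality ranks higher
ranking = np.argsort(or_pref)
print("lowest-OR prefecture index:", ranking[0],
      "OR:", round(float(or_pref[ranking[0]]), 3))
```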
The DIC statistic is a combination of the fit to the data and complexity, with larger DIC values suggesting worse performance. To present the results of geographic inequalities in mortality, we created maps showing prefecture-level residuals by using ArcGIS. First, we examined the prefecture-level variance in mortality without including any explanatory variables as follows: $\text{logit}(p_{ij}) = \beta_0 + u_{0j}$, [null model] where $p_{ij}$ is a proportion of deaths in cell $i$ in prefecture $j$. The prefecture-level random effect of the intercept was assumed to be normally distributed with a mean of 0 and variance $\sigma^2_{u_0}$. Based on the prefecture-level variance, prefectures were ranked by ORs, with the reference being the grand mean of all prefectures, and uncertainty was estimated by 95% CIs. Note that an estimate of the parameter $\beta_0$ in the null model represents an estimate of the logarithm of the grand-mean odds for mortality among all the cell types across 47 prefectures. Then, we entered age and 11 occupations as level-1 variables as follows: $\text{logit}(p_{ij}) = \beta_0 + \sum_{k=1}^{10} \beta_k x_{kij} + \sum_{l=1}^{10} \gamma_l w_{lij} + u_{0j}$, [model 1] where $x_{kij}$ and $w_{lij}$ denote 10 dummy variables of age and occupation, respectively, of cell $i$ in prefecture $j$. As in the null model, based on the ''adjusted'' prefecture-level variance, prefectures were ranked by ORs, with the reference being the grand mean of all prefectures, and uncertainty was estimated by 95% CIs. Note that an estimate of the parameter $\beta_0$ in model 1 represents an estimate of the logarithm of the grand-mean odds for mortality among production process and related workers aged 25 to 29 years across 47 prefectures. Subsequently, to explore the possible contextual effects by area-level deprivation, the prefecture-level socioeconomic status variable was entered into model 1 separately.
Furthermore, to examine the joint effects of income inequality and average income/savings, we also entered the Gini coefficient and average yearly income/savings into the model simultaneously. We repeated these analyses by stratifying the subjects into those aged less than 65 and those aged 65 or older. --- Supplementary analyses As a supplementary analysis, we examined occupation-specific geographic inequalities in mortality. In this analysis, following the previous report of the Population Census [25], we summarized the 11 occupations into 6 groups to increase the statistical power as follows: I. clerical, technical and managerial occupations (specialist and technical workers, administrative and managerial workers, and clerical workers), II. sales and service occupations (sales workers, service workers, and security workers), III. agriculture, forestry and fishery occupations (agriculture, forestry and fishery workers), IV. production and transport occupations (transport and communication workers and production process and related workers), V. unclassifiable occupations (workers not classifiable by occupation), and VI. non-employed (non-employed). Then, we entered 6 prefecture-level random effect terms corresponding to the 6 aggregated occupational groups into model 1 in order to allow the fixed occupational differential on mortality to vary randomly across prefectures as follows: $\text{logit}(p_{ij}) = \beta_0 + \sum_{k=1}^{10} \beta_k x_{kij} + \sum_{l=1}^{10} \gamma_l w_{lij} + \sum_{m=1}^{6} u_{mj} W_{mij}$, [model 2] where $W_{1ij}, W_{2ij}, W_{3ij}, W_{4ij}, W_{5ij}$, and $W_{6ij}$ denote coding variables for clerical, technical and managerial occupations, sales and service occupations, agriculture, forestry and fishery occupations, production and transport occupations, unclassifiable occupations, and non-employed, respectively, of cell $i$ in prefecture $j$. Thus, $u_{1j}, u_{2j}, u_{3j}, u_{4j}, u_{5j}$, and $u_{6j}$ represent prefecture-level random effects among the corresponding occupations.
They were assumed to be normally distributed with a mean of 0 and variances of $\sigma^2_{u_1}, \sigma^2_{u_2}, \sigma^2_{u_3}, \sigma^2_{u_4}, \sigma^2_{u_5}$, and $\sigma^2_{u_6}$, respectively. We ranked prefectures by the 6 aggregated occupational groups based on the prefecture-level occupation-specific variances. Finally, to calculate the mean predicted probabilities of mortality for the 6 occupational groups, we removed the 10 dummy variables of occupations in model 2, and entered 5 dummy variables for the 6 occupational groups as level-1 variables as follows: $\text{logit}(p_{ij}) = \beta_0 + \sum_{k=1}^{10} \beta_k x_{kij} + \sum_{m=2}^{6} \delta_m W_{mij} + \sum_{m=1}^{6} u_{mj} W_{mij}$. [model 3] We calculated the predicted probabilities of mortality among those aged 55 to 59 because they constitute the largest population in both sexes. Note that, in models 2 and 3, we did not allow the intercept to vary across prefectures; rather, we employed separate coding for each prefecture-level random effect term [19]. --- Results --- Overall geographic inequalities in all-cause mortality In Figures 2 and 3, we show the results of geographic inequalities in all-cause mortality across 47 prefectures among men and women, respectively. Note that these Figures show both unadjusted and adjusted prefecture-level residuals for mortality based on the results of the random part in the null model and model 1, respectively. Overall, the degree of geographic inequalities was more pronounced in the null model. In the null model, estimates of the variances of the intercepts for men and women were 0.025 and 0.023, respectively, and unadjusted prefecture-specific ORs for mortality ranged from 0.681 in Saitama. When adjusting for age and occupations in model 1, almost all of the prefecture-level residuals moved toward the null. We should note that the degree of change varied substantially across 47 prefectures. In some prefectures, adjustment for age and occupation yielded little change in ORs, while other prefectures exhibited striking changes.
For example, as noted above, Saitama ranked at the top in the null model with more than 30% lower odds for mortality in both sexes, whereas Kochi ranked at the bottom with more than 20% higher odds for mortality in both sexes. However, once we adjusted for their composition in model 1, the point estimates of ORs became close to 1, and none of them were statistically significant. In other words, Saitama and Kochi were seemingly the best and the worst prefectures, respectively, in terms of the risk for all-cause mortality, which is likely due to their composition, not context. Notably, we observed qualitative changes of ORs in some prefectures, from significantly higher ORs to significantly lower ORs, and vice versa. For example, among men, the ORs in Shimane, Kumamoto, and Kagoshima were significantly high when we did not adjust for the composition in each prefecture, while they became significantly low after adjusting for their composition. By contrast, in Tochigi, Chiba, Tokyo, Shizuoka, Aichi, and Kyoto, the pattern was reversed. The results of geographic inequalities among men and women are also shown by using maps in Figures 4 and 5, respectively. Figures S1 and S2 show the patterns of age-stratified geographic inequalities among men and women, respectively. Overall, the patterns were relatively similar between the age groups when we adjusted for compositions in each prefecture, although we observed qualitative changes between age groups in some prefectures; for example, we observed significantly low odds for mortality among those aged less than 65 in both sexes in Chiba, whereas we observed significantly high odds for mortality among those aged 65 or older in both sexes. --- Contextual effects by prefecture-level socioeconomic status Overall, we found little evidence of an association between prefecture-level socioeconomic status and the risk of mortality in both sexes, conditional on individual age and occupation.
When we stratified the subjects by age, however, there was a suggestion of an inverse association between average savings and mortality among men aged less than 65. No clear patterns were observed for other indicators of prefecture-level socioeconomic status. When we examined the joint effects of income inequalities and average income/savings, no substantial changes were observed. --- Geographic inequalities in all-cause mortality by occupational groups Based on the results of the random part in model 2, Table 4 shows variations in all-cause mortality across 47 prefectures by the 6 aggregated occupational groups. In both sexes, unclassifiable occupations had the highest variation, and the variations among the non-employed were close to 0. Among men, the variation was higher in non-manual workers than manual workers, whereas the pattern was reversed among women. Overall chi-squared values of the random parts in model 2 for men and women were 82.375 and 70.504, respectively. (Figure 3. Unadjusted and adjusted prefecture-level residuals for all-cause mortality among women in 47 prefectures, Japan, 2005. Prefecture-level residuals are described in odds ratios with the reference being the grand mean of all prefectures. Red diamonds and blue squares represent point estimates of residuals from the null model and model 1, respectively. Horizontal bars represent their 95% credible intervals. Prefectures with a lower estimate of odds for all-cause mortality are ranked higher. Note that CI and OR stand for credible interval and odds ratio, respectively. doi:10.1371/journal.pone.0039876.g003) In Tables S3 and S4, we show the rankings of 47 prefectures by the occupational groups among men and women, respectively. The corresponding patterns of geographic inequalities are also illustrated using maps in Figures S3 and S4, respectively. Table S5 shows the results of prefecture-level variance and covariance among the 6 occupational groups. Overall, men and women revealed a similar pattern.
In both sexes, the correlation coefficients between I. clerical, technical and managerial occupations and II. sales and service occupations were high, and the correlation coefficients between II. sales and service occupations and IV. production and transport occupations were also high. Although we observed a strong correlation between I. clerical, technical and managerial occupations and IV. production and transport occupations among men, we did not observe this pattern among women. See Table S6 for the predicted number of deaths from all causes for each occupational group among those aged 55 to 59, which was calculated from the results of model 3.

Table 1. Odds ratios for all-cause mortality associated with fixed parameters, along with the Deviance Information Criterion, Japan, 2005. --- Discussion To examine geographic inequalities in all-cause mortality in Japan, we used the 2005 vital statistics and census data. The present findings demonstrate the presence of substantial geographic variations in both sexes across 47 prefectures, even after adjusting for the composition of each prefecture. Adjusting for age and occupation, ORs for all-cause mortality ranged from 0.870 in Okinawa to 1.190 in Aomori among men, while they ranged from 0.864 in Shimane to 1.132 in Aichi among women. In other words, even when taking into account the compositional differences of each prefecture, the risk of all-cause mortality varied by as much as 30% across prefectures. Subsequently, we used three different but related prefecture-level socioeconomic status variables to examine their possible contextual effects: Gini coefficients for yearly income, average yearly income, and average savings. Although there was an indication of an inverse association between average savings and mortality among men aged less than 65 years, no clear patterns were observed for the other prefecture-level variables.
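For concreteness, the Gini coefficient used here as the prefecture-level income-inequality measure can be computed from a vector of incomes with the standard mean-absolute-difference identity. This is a generic illustrative sketch, not the authors' code, and the sample income vectors are made up.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the sorted-rank identity:
    G = sum_i (2i - n - 1) * x_(i) / (n^2 * mean(x)), with x sorted ascending."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return float(np.sum((2 * ranks - n - 1) * x) / (n * n * x.mean()))

print(gini([4.0, 4.0, 4.0, 4.0]))  # perfect equality -> 0.0
print(gini([0.0, 8.0]))            # maximal two-person inequality -> 0.5
```

In practice the coefficient would be computed from each prefecture's income distribution and entered as a prefecture-level covariate, as in model 2 of the paper.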
The patterns of geographic inequalities were relatively similar between non-manual occupations and production and transport occupations, primarily among men. Previous studies from Japan have analyzed geographic inequalities in health by examining the relationship between area-level socioeconomic status and health outcomes in the corresponding areas, such as life expectancy and age-adjusted mortality rates [26][27][28][29][30][31][32][33]. These ecologic studies are useful for documenting and monitoring inequalities in health, showing the possible relationship between area-level deprivation and health. We should note, however, that the relevance of these studies is often limited, since they cannot directly determine whether differences across areas are due to characteristics of the areas themselves or to differences between the characteristics of individuals residing in different areas [34]. We should also note that, due to the ecologic fallacy [35], their findings cannot necessarily be extrapolated to the association between socioeconomic status and individual health. In this study, we employed a novel multilevel approach and used the results of the random part of multilevel models to examine geographic inequalities in all-cause mortality, simultaneously adjusting for composition and context [18]. Indeed, rather than seeing the random part of multilevel models as a nuisance in an attempt to identify the fixed effects, estimating variance adds substantive information about the boundaries of the collectives to which individuals belong [20][21][22]. In particular, the present study would be of great use in assessing the relative contribution of composition and context to the geographic inequalities across the 47 prefectures. In some prefectures, adjustment for age and occupation showed little change in ORs for mortality, which implies that their composition played only a minor role.
For example, adjustment for age and occupation produced little change in the ORs in Aomori in both sexes, and they remained significantly high. This result suggests that composition matters much less than context, implying the possibility of a detrimental contextual determinant of health in the prefecture, e.g., economic, environmental, or social. Obviously, the possibility that this pattern emerges due to an omitted compositional factor cannot be ruled out, since information about other indicators of composition was not available. It is notable, however, that we observed substantial attenuation in ORs when adjusting for age and occupation in other prefectures. For example, in Akita, unadjusted ORs were remarkably high, by approximately 20%, in both sexes, whereas they moved toward the null after adjusting for age and occupation, and the OR was no longer statistically significant in women. This finding indicates that the composition of Akita played a significant role in lowering its health status in terms of the risk of all-cause mortality. At the same time, the findings suggest that, once its composition is adjusted for, the contextual effect in Akita is approximately equivalent to the grand mean of all prefectures, in terms of inequalities in all-cause mortality.

Figure 4. Unadjusted and adjusted geographic inequalities in all-cause mortality among men, Japan, 2005. We show the overall geographic inequalities in all-cause mortality across 47 prefectures among men. Unadjusted and adjusted inequalities were estimated from the null model and model 1, respectively. Prefecture-level residuals are described by odds ratios, with the reference being the grand mean of all prefectures. Prefectures with lower odds for mortality are blue, and those with higher odds are red. Prefectures with non-significant residuals are gray. doi:10.1371/journal.pone.0039876.g004
To summarize, based on the present findings, we can weigh the impact of composition against the impact of context on the apparent pattern of geographic inequalities, which provides a useful clue for directing our attention toward more effective interventions. Notably, we observed qualitative changes before and after adjusting for age and occupation in some prefectures. In particular, in Chiba, Tokyo, and Aichi, although the adjusted prefecture-level ORs for mortality were significantly high in both sexes, they were apparently "masked" by their composition in unadjusted analyses; their unadjusted ORs were remarkably low, by approximately 20%. This phenomenon can be explained as a result of skewed distributions of composition in these prefectures; the distribution is skewed toward those who have a lower risk of mortality, which outweighs the "negatives" of the context in these prefectures. Notably, compared with manual workers, the risk of mortality was higher among upper non-manual workers, which differs from the typical hierarchical pattern in industrialized western European and North American countries [3,4,6]. A recent study from Japan suggested that this remarkable pattern emerged among men following the collapse of the asset bubble in the early 1990s [9].

Figure 5. Unadjusted and adjusted geographic inequalities in all-cause mortality among women, Japan, 2005. We show the overall geographic inequalities in all-cause mortality across 47 prefectures among women. Unadjusted and adjusted inequalities were estimated from the null model and model 1, respectively. Prefecture-level residuals are described by odds ratios, with the reference being the grand mean of all prefectures. Prefectures with lower odds for mortality are blue, and those with higher odds are red. Prefectures with non-significant residuals are gray. doi:10.1371/journal.pone.0039876.g005
These discussions highlight the significance of examining the pattern of geographic inequalities in terms of composition and context, so that researchers can present its true picture. We explored the possible contextual effects of prefecture-level socioeconomic status by using three variables. Each indicator of area socioeconomic status may tap into different aspects of the social environment and may be differently associated with specific health outcomes [12]. Note that we examined them after adjusting for individual age and occupation, in contrast with previous ecologic studies [26][27][28][29][30][31][32][33]. A previous review suggested that studies on income inequality are more supportive in large areas, e.g., states, regions, and metropolitan areas, because in that context income inequality serves as a measure of the scale of social stratification [36].

These variations were calculated from model 2. All the differential tests of the variations were statistically significant, except for clerical, technical and managerial occupations vs. sales and service occupations among men; clerical, technical and managerial occupations vs. agriculture, forestry and fishery occupations among men; clerical, technical and managerial occupations vs. production and transport occupations among men; sales and service occupations vs. agriculture, forestry and fishery occupations among men; agriculture, forestry and fishery occupations vs. production and transport occupations among men; sales and service occupations vs. agriculture, forestry and fishery occupations among women; sales and service occupations vs. production and transport occupations among women; and agriculture, forestry and fishery occupations vs. production and transport occupations among women. b Non-employed includes the unemployed as well as the non-labor force. doi:10.1371/journal.pone.0039876.t004
As has been noted previously [37], a prefecture is similar to a state in the United States in terms of its population size and variation in income inequality. Although we thoroughly investigated their possible effects, no clear patterns were observed except for an inverse association between average savings and mortality among men aged less than 65 years. We should note, however, that the measures of area socioeconomic status in this study provide only truncated information about the context of areas [38]. More importantly, we lacked individual-level information on the socioeconomic variables measured at the prefecture level, i.e., household income and household savings, which precludes a rigorous examination of true causal operation at the prefecture level. Further studies are warranted to explore contextual effects in more detail by including a sufficient number of variables measured at the individual level. There are some limitations to this study. First, as measures of the composition of each prefecture, only information about sex, age, and occupation at the time of death was available. Occupation has been used as a dominant measure of socioeconomic position or occupational hazard, and researchers have increasingly recognized that occupation-based socioeconomic position may also reflect social networks [39]. Recent studies from Japan have indicated the significance of workplace social networks and social capital to the health status of Japanese workers [40][41][42]. We should, however, note that occupation reflects only certain aspects of socioeconomic position, and in particular, the most appropriate way of defining socioeconomic position among women might not be occupation. To overcome this, we used the finest occupational classification available in the present data set, which could adjust for other omitted compositional variables.
However, we should carefully interpret the findings for the group "non-employed" because this group included the unemployed as well as the non-labor force. Second, the smallest geographic unit available was the prefecture, and we could not explore geographic inequalities in finer detail. Although the prefecture may be a useful and valid unit of analysis, since it is the unit that has direct administrative authority in the economic, education, and health sectors [43], we should note that the choice of spatial unit can lead to different conclusions regarding the pattern of geographic inequalities [12,44,45]. Third, the possibility of numerator/denominator bias between the two sources of information cannot be ruled out. Although this type of measurement error may occur homogeneously across prefectures, the degree of bias could vary if those recording the notification of deaths tend to misclassify some specific occupations. In conclusion, the results of the present study demonstrate that geographic inequalities in all-cause mortality are not simply a passive reflection of the composition of each prefecture. Indeed, the present findings suggest that the relative contribution of composition and context to health inequalities varies substantially across the 47 prefectures, even between neighboring prefectures. Although we should note that compositional and contextual explanations are not mutually exclusive [46][47][48][49], the significance of context to human health cannot be over-emphasized [34,[50][51][52][53], and further attention should be given to evaluating their relative contributions to the pattern of geographic inequalities in other countries. Based on the present findings, future research is needed to understand the specific determinants of the emerging geographic inequalities in Japan, either compositional or contextual.

Table S6 Predicted number of deaths from all causes by each occupation group, Japan, 2005.
--- Supporting Information Figure S1 Unadjusted and adjusted geographic inequalities in all-cause mortality among men, stratified by age groups, Japan, 2005. We show the overall geographic inequalities in all-cause mortality across 47 prefectures among men. Unadjusted and adjusted inequalities were estimated from null model and model 1, respectively. Prefecture-level residuals are described by odds ratios, with the reference being the grand mean of all prefectures. Prefectures with lower odds for mortality are blue, and those with higher odds are red. The prefectures with non-significant residuals are gray. Figure S2 Unadjusted and adjusted geographic inequalities in all-cause mortality among women, stratified by age groups, Japan, 2005. We show the overall geographic inequalities in all-cause mortality across 47 prefectures among women. Unadjusted and adjusted inequalities were estimated from null model and model 1, respectively. Prefecture-level residuals are described by odds ratios, with the reference being the grand mean of all prefectures. Prefectures with lower odds for mortality are blue, and those with higher odds are red. The prefectures with non-significant residuals are gray. Figure S3 Geographic inequalities in all-cause mortality by occupational groups among men, Japan, 2005. We show the geographic inequalities in all-cause mortality across 47 prefectures for the six aggregated occupational groups, conditional on individual age and occupation. Prefecture-level residuals from model 2 are described by odds ratios, with the reference being the grand mean of all prefectures. Prefectures with lower odds for mortality are blue, and those with higher odds are red. The prefectures with non-significant residuals are gray. Figure S4 Geographic inequalities in all-cause mortality by occupational groups among women, Japan, 2005. 
We show the geographic inequalities in all-cause mortality across 47 prefectures for the six aggregated occupational groups, conditional on individual age and occupation. Prefecture-level residuals from model 2 are described by odds ratios, with the reference being the grand mean of all prefectures. Prefectures with lower odds for mortality are blue, and those with higher odds are red. The prefectures with non-significant residuals are gray. Table S1 Description of data used for multilevel models analyzing all-cause mortality in 47 prefectures, Japan, 2005. --- Table S2 Detailed description of data used for multilevel models analyzing all-cause mortality in 47 prefectures, Japan, 2005. --- Table S3 Prefecture-level residuals for all-cause mortality by occupations among men, Japan, 2005. --- Table S4 Prefecture-level residuals for all-cause mortality by occupations among women, Japan, 2005.
Background: A recent study from Japan suggested that geographic inequalities in all-cause premature adult mortality have increased since 1995 in both sexes even after adjusting for individual age and occupation in 47 prefectures. Such variations can arise from compositional effects as well as contextual effects. In this study, we sought to further examine the emerging geographic inequalities in all-cause mortality, by exploring the relative contribution of composition and context in each prefecture. Methods: We used the 2005 vital statistics and census data among those aged 25 or older. The total number of decedents was 524,785 men and 455,863 women. We estimated gender-specific two-level logistic regression to model mortality risk as a function of age, occupation, and residence in 47 prefectures. Prefecture-level variance was used as an estimate of geographic inequalities in mortality, and prefectures were ranked by odds ratios (ORs), with the reference being the grand mean of all prefectures (value = 1). Results: Overall, the degree of geographic inequalities was more pronounced when we did not account for the composition (i.e., age and occupation) in each prefecture. Even after adjusting for the composition, however, substantial differences remained in mortality risk across prefectures with ORs ranging from 0.870 (Okinawa) to 1.190 (Aomori) for men and from 0.864 (Shimane) to 1.132 (Aichi) for women. In some prefectures (e.g., Aomori), adjustment for composition showed little change in ORs, while we observed substantial attenuation in ORs in other prefectures (e.g., Akita). We also observed qualitative changes in some prefectures (e.g., Tokyo). No clear associations were observed between prefecture-level socioeconomic status variables and the risk of mortality in either sex. Conclusions: Geographic disparities in mortality across prefectures are quite substantial and cannot be fully explained by differences in population composition. 
The relative contribution of composition and context to health inequalities varies considerably across prefectures.
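The compositional adjustment at the heart of these results (null model vs. model 1) can be illustrated with a small simulation. The sketch below is illustrative only and is not the paper's Bayesian two-level model: it fits a plain logistic regression with a prefecture indicator (via Newton-Raphson) to simulated data in which one hypothetical "prefecture" has an older population but no contextual effect, so its elevated unadjusted OR attenuates toward 1 once age is adjusted for, mirroring the Saitama/Kochi pattern described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two hypothetical prefectures: B (pref == 1) has an older population
# (different composition) but exactly the same context as A.
pref = rng.integers(0, 2, n).astype(float)
old = (rng.random(n) < np.where(pref == 1, 0.6, 0.3)).astype(float)

# True risk depends on age only -- there is no contextual effect of prefecture.
p_true = 1.0 / (1.0 + np.exp(-(-3.0 + 1.2 * old)))
died = (rng.random(n) < p_true).astype(float)

def logit_fit(X, y, n_iter=25):
    """Plain Newton-Raphson logistic regression; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    return beta

# "Null model" analogue: prefecture only -- the OR is inflated by composition.
or_null = np.exp(logit_fit(pref[:, None], died)[1])
# "Model 1" analogue: adjusting for age pulls the prefecture OR toward the null.
or_adj = np.exp(logit_fit(np.column_stack([pref, old]), died)[1])
print(f"unadjusted OR = {or_null:.2f}, age-adjusted OR = {or_adj:.2f}")
```

With this setup the unadjusted OR for prefecture B comes out well above 1 while the adjusted OR sits close to 1, the same qualitative signature the paper uses to attribute an apparent mortality gap to composition rather than context.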
Introduction Beyond the immediate health risks, the social and political consequences associated with the spread of SARS-CoV-2 are of great importance. 1 As researchers interested in cross-cultural political psychology, we want to explain the rise of authoritarianism, characterised by hostility towards other social groups and ethnocentrism, across the world in the wake of COVID-19. The aim is to provide an explanation for the global rise of conservative attitudes. Acknowledging that the current situation is but a snapshot in a still evolving crisis, these explanations are intended as an initial social scientific cut rather than a complete evaluation of the social consequences of the pandemic. Compared to the previous SARS outbreak in 2003, COVID-19 is on a different scale. Thus far, the pandemic has had the effect of strengthening national borders and concomitant anti-foreign sentiments, which the world was already witnessing on account of Brexit and as part of US-American executive policies. Similar social and political trends were also observed in Asia, where there had been reports of rising nationalism during the first half-year of the pandemic. Given these trends, it is reasonable to ask whether the global population will eventually find itself in a politically and socially closed-off world as a consequence of COVID-19. --- Ecological forces and us In answering this question, we first highlight the pathogen avoidance responses that humans have evolved over the course of a long evolutionary history. COVID-19 is rich ground for activating these self-protective responses. Throughout the human evolutionary trajectory, pathogen stress created strong survival pressures, culminating in the development of the immune system to ward off diseases. The immune system comprises the 'behavioural immune system' (BIS) and the 'physiological immune system' (PIS). The former entails a complex set of evolved cognitive and behavioural mechanisms.
These psychological mechanisms work to detect visible signs of infection and accelerate adaptive responses to pre-empt the negative impact of pathogen stress even before an infection has struck. While the PIS serves as the first line of defence against an infection, the BIS is activated by 'visible' cues, e.g., specific facial anomalies associated with infections. Often, these overt cues do not perfectly predict who carries a disease. As such, the BIS may 'overreact' and lead to false positive errors. Therefore, the activation of the BIS can have far-reaching implications for a host of psychological functions, including political preferences and social relationships. At any rate, the activation of the BIS, including any 'overreaction', is considered adaptive, as failing to detect a potential pathogen cue can have dire consequences. The implication is that the PIS takes a 'backseat' in situations where physiological immunity does not yet exist or is likely to be suppressed, which is the case with SARS-CoV-2. In such cases, the BIS takes the 'driving seat' in our self-defence mechanisms. --- Extending the ecological explanation A radical component of the BIS mechanism involves the exclusion of members of ethnic out-groups. As humans developed an exquisite immune system that is well-equipped to combat pathogens prevalent among the members of a particular social group or in-group, it proved adaptive to avoid any foreign or 'exotic' pathogens carried by out-group members. These evolved behavioural mechanisms, apt for pre-industrial, small-scale living conditions, are ill-suited for dealing with the infectious threats of the modern world. This is because evolutionary forces lag behind the pace of industrialisation and have failed to fashion pathogen avoidance mechanisms better suited to our contemporary connected world.
At the group or societal level, pathogens have been found to co-vary globally with authoritarianism and exclusionary attitudes towards ethnic out-group members, because people prefer authoritative leaders to orchestrate collective social action in the service of 'group survival'. In line with this theorising, Hartman et al. documented a direct link between the perceived threat of COVID-19 and authoritarian and anti-immigrant attitudes, based on a cross-sectional representative survey in the UK in March 2020. Previous studies show that people in the United States exhibited exclusionary attitudes towards immigrants after simply reading about the health risks associated with swine flu during the H1N1 pandemic and the 2014 Ebola outbreak. --- The proposal We invoke these theoretical premises and empirical findings to explain the rise of a pandemic of 'authoritarianism' and 'ethnocentrism' in tandem with COVID-19. The novelty of our proposal lies in the trade-off between the virulence and transmissibility of SARS-CoV-2. This either/or relationship between virulence and transmissibility has important implications for the activation of the BIS and the ensuing social and political upheavals. Viruses tread a fine line between transmissibility and severity. Those that are too virulent will incapacitate or even kill infected hosts, which limits their ability to infect others in the long run. Those that are less harmful, on the other hand, will be successful in making copies of themselves and increase the scope of their transmissibility, which is likely to be the pertinent scenario with the impact of SARS-CoV-2 on otherwise 'healthy' hosts. The latter case also serves as effective ground for the activation of the BIS, while the former triggers the PIS.
We therefore argue that the spread of SARS-CoV-2 is likely to be associated with the development of particular socio-political ideologies that hinge on the activation of the BIS for redrawing social boundaries between in- versus out-groups in our complex world. Whether this will lead to novel kinds of 'ethnophobia' or even 'theophobia' remains to be seen. Alternatively, we may witness new cooperation across societies concomitant with a weakening of the 'us' versus 'them' divide, a strategy that requires the inclusion of 'everyone' in a unified global attempt to fight the pathogen. Until the pandemic descends from its current global peak, we hope that this commentary serves as a 'turntable' for speculating about its political and social aftermath, and that this will involve the cooperation of researchers from a variety of disciplines. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Sitting on the fine line between pathogen 'transmissibility' and 'severity', the Behavioural Immune System (BIS) is responsible for activating behaviours that minimise infection risks and maximise fitness. To achieve self-preservation, the BIS also fuels social and political attitudes. We aim to explain societal changes that may be sparked by COVID-19 by highlighting links between human evolutionary history and our psychological faculties mediated by the BIS.
Introduction Australia is a country represented by diverse populations and communities, and this diversity is mirrored within Australian schooling systems amongst the student demographic. Despite this, Australian schools have historically had a disproportionately homogenous teaching workforce relative to the diverse student population, a concern highlighted in the report of the Teacher Education Ministerial Advisory Group. Recommendations in the report suggested initial teacher education students undergo literacy and numeracy assessment to ensure that they are placed among the top 30% of the adult population in relation to literacy and numeracy. The Literacy and Numeracy Test for Initial Teacher Education (LANTITE) is designed and administered by the not-for-profit organisation the Australian Council for Educational Research. It comprises two tests, one for literacy and one for numeracy. Each test contains 65 questions, which range from multiple-choice to short-answer items. Each test must be completed in two hours and may be undertaken at a face-to-face testing location or online, via a remote invigilation service. Remote invigilation requires a reliable internet connection and a laptop/personal computer to complete the test. There are also other criteria relating to testing room requirements and testing delivery administration specific to the remote proctor arrangement. --- Review of literature --- High-stakes testing in teacher education Large-scale standardised testing and assessment is a well-established practice in education, and there exists a significant body of research relating to high-stakes testing, most notably in the United States of America. Globally, these tests are utilised in various forms throughout formal schooling systems, and teacher education has become a particular focus in some countries, including Australia.
It is generally agreed that high-stakes testing of teachers is designed to a) ensure a particular standard is met in relation to having quality teachers in schools, b) ensure teachers are able to teach to a required minimum standard and, c) where necessary, hold them accountable for their practice. Whilst governments often find high-stakes tests a potent form of public relations capital to ensure quality teaching and education in schooling systems, this notion of 'quality teaching' is problematic, and one Bahr and Mellor argue is difficult to define and subsequently to monitor. High-stakes testing has well-documented concerns relating to unintended consequences in teacher education, including: curriculum narrowing; the narrowing of educational experiences for students; unintended barriers to entry into teaching by way of acting as a deterrent factor; cultural bias in testing instruments; testing anxiety; the distortion of teaching practices; and cheating by both the student and the teacher. LANTITE is not immune to these negative consequences. The exacerbation of inequity and injustice resulting from high-stakes tests is a focus of this paper, particularly in the context of mounting calls for an increase in teacher diversity as a result of the recent Quality Initial Teacher Education Review. Many of the inherent issues of high-stakes tests have been well researched in relation to the standardised test used in Australian schools, the National Assessment Program - Literacy and Numeracy (NAPLAN). NAPLAN is also used as a dubious measure of teacher quality, with individual school results being compared in public forums with limited context, leaving the general public to draw their own, often uninformed, conclusions about teacher performance and school quality. --- LANTITE To date, there is limited existing research related to LANTITE.
Much of the research focuses on analysis of the test content from literacy and numeracy perspectives, LANTITE's impact on teacher identity, unintended negative testing emotions, and overall student experiences related to test-taking. Other initial studies into student perceptions of LANTITE call for further research into both student and stakeholder voices in relation to the impact of the tests. Limited published data is publicly available regarding nationwide success rates of students completing the test; however, Barnes and Cross (2020) provide a compelling argument regarding the effectiveness of LANTITE in excluding students who cannot meet its requirements. Their study highlights that, on first attempt, the pass rate is 90.2% for the literacy component and 95.6% for the numeracy component. These figures are in line with the 2015 pilot LANTITE test results of a 90-95% pass rate. --- Diversity To further explore the notion of diversity, it is important to first define what we mean by diversity. Historically, definitions of diversity have been limited to interpretations of race, ethnicity, or religious beliefs. However, over time the definition has expanded to include differences in sex and gender; disability; beliefs and religion; social class and socio-economic status; as well as race, ethnicity and culture. In this paper it is this broader understanding of diversity that we embrace, acknowledging that one, several or all of these categories may affect how an individual or group of students interacts in a school or learning environment. Whilst much is written about diversity and education internationally, little is known about what specifically contributes to the lack of teacher diversity in the workforce. This, therefore, makes it a complex issue to understand and, ultimately, to address and rectify. However, it is clear from the literature that does exist that representation of a diverse teacher workforce is important for students in educational settings.
Findings from these studies suggest positive 'role model' effects in terms of student school attendance and overall learning gains. Furthermore, Dee finds that racial and ethnic dynamics influence teachers' perceptions of student performance, and one of the ways to address this is to train and recruit more underrepresented teachers. Given that the recruitment of pre-service teachers from diverse backgrounds will involve students who are likely to be 'First-in-Family' to attend university, additional study and pastoral supports are necessary to support their transition and success at university. --- Teacher diversity in Australia? Traditionally, teachers in the formal Australian school system have been predominantly Anglo-Australian. The introduction of a British school system shortly after Australia's colonisation ignored the education and learning which had occurred uninterrupted for thousands of years amongst First Nations people. In its infancy, teacher education in Australia was largely restricted to British migrants, particularly clergy aligned with Christian faiths. Fast forward over 200 years and the nation's teacher workforce is still largely homogenous and does not represent the many facets of diversity, including culture, disability and socio-economic status. According to the Australian Bureau of Statistics, in 2021, 29.1% of Australia's population were born overseas. Of those born overseas, the top four countries of birth were England, India, China, and New Zealand. The Australian Institute of Health and Welfare identified in 2019 that 5.1% of 15-24 year olds identify as Indigenous Australians. According to the same Australian Bureau of Statistics 2019 data, 9.3% of young people had a disability. These statistics are worth reflecting on, especially when set against those related to the wider teaching population in Australia.
The Australian Institute for Teaching and School Leadership (AITSL) reported in 2018 on diversity in school leadership and teaching. Their findings revealed that only 8.9% of primary teachers and 10.8% of secondary teachers identified as speaking a language other than English at home. Only 1.1% of primary teachers and 0.8% of secondary teachers identified as Indigenous Australians. These figures are substantially lower than for the overall population of Australia, with 21% of Australia's population speaking a language other than English at home and 3.3% identifying as Indigenous Australian. Interestingly, there is an absence of data on Australian teachers with a disability; however, in Australia approximately 18% of the population identifies as having a disability, 9.3% of 15-24 year olds have a disability (2021 data), and 7.7% of children under the age of 15 have a disability (2019 data). These figures indicate a teacher population which does not represent the diversity of Australia's population and, more specifically, Australia's youth. A 2022 Department of Education, Skills and Employment report raised concerns about whether the initial teacher education pipeline reflects the diversity of the school student population. Interestingly, this report offered minimal recommendations to address the imbalance of diverse representation in the teaching workforce, but it does suggest the need to support the access and success of ITE students from diverse cohorts. It also acknowledges that a general approach to attracting diverse cohorts will not be effective and that "targeted incentives are needed for different cohorts". Diversity, and the need to embrace it within schooling systems, is clearly articulated through the AITSL standards.
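The scale of this under-representation can be made concrete by comparing each group's share of the teaching workforce to its share of the general population. The following is a minimal sketch, not part of the AITSL report; the `representation_ratio` helper is illustrative, and the figures are simply those quoted above:

```python
# Representation ratio: a group's share of the teaching workforce divided by
# its share of the general population. A ratio of 1.0 means proportional
# representation; values well below 1.0 indicate under-representation.
# Percentages are those quoted in the text (AITSL 2018; ABS).

def representation_ratio(teacher_pct: float, population_pct: float) -> float:
    """Return workforce share relative to population share."""
    return teacher_pct / population_pct

groups = {
    # group label: (teacher %, population %)
    "LOTE at home (primary teachers)": (8.9, 21.0),
    "LOTE at home (secondary teachers)": (10.8, 21.0),
    "Indigenous (primary teachers)": (1.1, 3.3),
    "Indigenous (secondary teachers)": (0.8, 3.3),
}

for group, (teacher, population) in groups.items():
    print(f"{group}: {representation_ratio(teacher, population):.2f}")
```

On these figures every ratio falls well below 1.0; Indigenous secondary teachers, for example, are represented at roughly a quarter of their population share.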
The Graduate Teacher Standards specify a number of areas where graduate teachers are required to explicitly demonstrate practice and expertise in relation to the following Standards: 1.3 Students with diverse linguistic, cultural, religious and socioeconomic backgrounds; 1.4 Strategies for teaching Aboriginal and Torres Strait Islander students; 1.5 Differentiate teaching to meet the specific learning needs of students across the full range of abilities; 1.6 Strategies to support full participation of students with disability; 2.4 Understand and respect Aboriginal and Torres Strait Islander people to promote reconciliation between Indigenous and non-Indigenous Australians; and 4.1 Support student participation. Dee argues that a diverse teacher population is important to the health and functioning of school systems. Representation matters in education, particularly for populations who may be marginalised or socially disadvantaged. Few studies in Australia explore the impact teacher representation has on students; however, there are other studies which focus on the experiences of being an Indigenous teacher in Australian classrooms. Hogarth shares these stories and some of the demands placed upon Indigenous teachers, including the 'tug of war' these teachers experience in fulfilling the dual role of a teacher conforming to school expectations whilst being a member of, and advocate for, Indigenous communities. Studies in the United States have identified that students who have a teacher of the same race are likely to achieve improved test scores; this was particularly evident for African American students who had an African American teacher. Findings related to improved test scores have also been replicated in other, more recent studies.
Whilst these studies focus on student test scores to highlight the benefits of teacher diversity for students from diverse backgrounds, there are other benefits unrelated to testing outcomes, including positive role modelling and being change agents, higher expectations of their students, subjective evaluations, behaviour management, and increased levels of motivation and engagement. Diversity in the teaching workforce fulfils several important functions. Firstly, teachers from diverse backgrounds have cited the importance of their employment in role modelling and becoming change agents for students who may not otherwise have been academically aspirational. Secondly, some studies highlight that teachers from diverse backgrounds are more likely to have higher expectations of their students, particularly students who may themselves come from diverse backgrounds. Dee suggests that this is due to the removal of stereotype threats and unconscious bias which may potentially occur in other classrooms. Finally, teachers from diverse backgrounds also have the benefit of being able to draw on their own cultural contexts or lived experiences to select appropriate instructional strategies or to interpret behaviours of students from similar backgrounds. Whilst this study does not specifically identify the number of students from diverse contexts who have successfully met the standard for LANTITE, it is important, were such data to become available, to establish the existence of any disproportionality and the potential for overrepresentation of Indigenous Australians, CALD students and students with disability among those experiencing challenges in accessing and succeeding in these tests. Given the comments made in the Quality Initial Teacher Education Review, an over-representation of students from diverse communities not meeting the LANTITE standard would be an unintended impact on the future teacher workforce.
--- Method The qualitative data shared in this paper were derived from a larger mixed methods study designed to help better understand student experiences of LANTITE. Data in the overarching study were gathered in two phases, involving an online quantitative survey and in-depth interviews, which ensured flexibility and allowed for a deeper, richer elicitation of diverse experiences; both are outlined below. --- Research design A sequential mixed methods research design involving two phases was used. Both Phase One and Phase Two involved two distinct participant groups: ITE students and ITE stakeholders. At the end of Phase One, participants from both groups were invited to opt in to Phase Two, a semi-structured telephone interview. Figure 1 provides an overview of the sequential mixed methods design and the relationship of this study to the overall study. This paper draws purely on the qualitative data, as it offers a deeper and richer exploration of student experiences. Qualitative methods allow the researcher to deeply explore the real-world events and experiences students have in relation to test-taking. The research design is centred on providing a voice to students, with students and stakeholders creating their own meaning from their individual experiences. Similar studies privileging student voice have been undertaken in Australia in relation to a comparable high-stakes test, NAPLAN, to explore and give voice to test-takers. [Figure 1: timeline of the phases, beginning with the Phase One online questionnaire for ITE students] --- Limitations It is acknowledged that this study was limited to volunteers and was therefore susceptible to a potential bias towards students who held strong beliefs about LANTITE. The study is also limited to students who disclosed their diversity status; it is possible other students from diverse backgrounds participated in the study but did not disclose their diversity.
Caution should therefore be exercised in making any generalisations from this study. Despite this, the varied experiences of research participants provide valuable insights that help us better understand the test-taking process, and reveal several unique additional hurdles and challenges experienced by students from diverse backgrounds. --- Results and analysis The results below are based on a thematic analysis of responses from 10 students and 12 ITE stakeholders who identified issues and concerns relating to students from diverse backgrounds. These themes are explained using key illustrative quotes from the participants. Qualitative responses from both Phase One and Phase Two were combined and analysed using NVivo 12 software. These responses were coded into overall themes and subthemes using the conventions of Creswell's thematic content analysis. An inductive process was used to identify key themes initially and to develop a provisional list of codes. From there a codes-to-theory model was utilised to allow themes to emerge from the data. Table 1 provides an overview of the themes and subthemes identified by ITE stakeholders and students. In some cases, a sub-theme may have been mentioned several times by the same participant. All stakeholders and students quoted have been given a pseudonym, to respect their privacy while still according them an identity through a name rather than a number. --- Unintended consequences The theme of unintended consequences was identified on 40 occasions; the highest number of responses was recorded in the sub-theme of adverse events, which was recorded 20 times. These adverse events included experiences of emotional distress, panic attacks, and feelings of shame at not having met the required standard for the tests.
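The counting convention described above (occurrences, not distinct participants, are tallied, so one participant can contribute several counts to the same sub-theme) can be sketched as a simple tally over coded excerpts. This is an illustrative reconstruction, not the study's NVivo workflow, and the participant names and data below are invented:

```python
from collections import Counter

# Each coded excerpt is a (participant, theme, sub_theme) tuple.
# Note that "Amelia" contributes two excerpts to the same sub-theme:
# occurrences are counted, not distinct participants.
coded_excerpts = [
    ("Amelia", "unintended consequences", "adverse events"),
    ("Amelia", "unintended consequences", "adverse events"),
    ("Ben", "unintended consequences", "mental health and wellbeing"),
    ("Carla", "additional hurdles", "test access barriers"),
]

# Tally occurrences per (theme, sub_theme) pair, as in Table 1.
sub_theme_counts = Counter((theme, sub) for _, theme, sub in coded_excerpts)

for (theme, sub), n in sub_theme_counts.items():
    print(f"{theme} / {sub}: {n}")
```

Under this convention, "adverse events" would be reported as 2 occasions even though only one participant raised it, mirroring how a single student's five adverse events are counted separately in Table 1.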
In one case, an ITE student identified five separate adverse events related to experiences of undertaking the tests, and these have been counted as separate adverse events in Table 1. The negative impact on mental health and wellbeing was also recorded on 14 occasions throughout the data. Students who have had multiple unsuccessful attempts describe an escalation in poor mental health and overall wellbeing with each unsuccessful test attempt. Many of these students from diverse backgrounds were on a re-attempt and disclosed the fall-out of this on their mental health and wellbeing. "It is the most stressful thing I have ever done, and I am not a stressed-out person. That's what gets me, and I would say that my numeracy would be quite high, but I think now that the pressure I am undergoing for my 5th attempt. In the last test I was so stressed I was almost vomiting; I could not calm myself down". One student contacted the researchers after the study was complete to share that she was successful on her final attempt and is now teaching. However, the trauma from multiple attempts at the tests was still present long after the test attempt. Traumatic and adverse events in high-stakes testing have been previously documented in NAPLAN testing of Australian school children. Illustrative quotes from two further sub-themes in Table 1 capture related difficulties. Test access barriers (identified 4 times): "The ones who have no hope of attending a testing centre because they live really remote, they elect for the online and remote proctoring and that has been very problematic. There has been a lot of problems with the robustness of the connections, you know dropping out part way through, those kinds of things". Reasonable adjustment application complexity (identified 3 times): "I'm 46, I think they diagnosed everyone with dyslexia when you had a learning difficulty and there was no paperwork… all of the paperwork was sent to the school… I am in the process of doing it through the university. Because I am going to need it. I am going to need the time".
--- Additional hurdles Both ITE students and stakeholders identified, on 35 occasions, additional hurdles students from diverse backgrounds face in accessing adjustments. These additional hurdles included the collection of additional documentation for reasonable adjustments, additional financial costs, and the narrow offering of testing adjustments. Many of these hurdles, such as providing evidence of their disability or medical condition, can have multiple impacts on the student and their capacity to be successful. ITE stakeholders and students in this study identified concerns relating to the test design and implementation and their greater impact on students from diverse communities. Some of these concerns related to difficulties for some students in accessing testing locations; this is particularly significant for students in remote locations, as was identified in the data on 4 occasions. The option of attending a face-to-face testing centre is not available to them, and as a result they are required to utilise, at times, unreliable internet services. Broader consequences of standardised testing can vary greatly among participants; however, some ITE stakeholders, like Simone, question the validity of the test and raise concerns relating to cultural bias. "I have concerns about the comprehension for Indigenous Australian students because the test is culturally biased. Even I tried to do it and there were instances where there were two answers which I thought were right, and you can only choose one! But for them, the inference is culturally biased." Several students and teacher educators highlighted that some groups of students from diverse backgrounds have had limited experience of formal test environments. These are students who will have achieved educational outcomes via alternative pathways which have not included implicit and explicit formal examination preparation techniques.
Students recruited via non-traditional pathways are typically first-in-family and are also from underrepresented communities, requiring additional support to bridge the disconnect between traditional and alternative pathways. --- Discussion The aim of this study was to explore student and stakeholder experiences relating to teacher diversity and undertaking LANTITE. There were two clear findings from this study: unintended consequences of standardised testing, and additional hurdles for students from diverse contexts. The benefits of having teacher diversity in the workforce and education system more broadly are clear: students from diverse backgrounds benefit academically, socially and emotionally from diverse teacher representation in the classroom. An added benefit is that students from dominant cultural backgrounds also gain from having teachers from social and cultural backgrounds different to their own. Therefore, cultivating pathways and limiting barriers for diverse teachers is important, especially given Australia's increasingly multicultural population and commitment to reconciliation with its First Nations. --- Unintended consequences of standardised testing Unintended impacts of high-stakes testing have been well documented internationally, particularly in relation to cultural bias in test design, curriculum narrowing, negative affective responses, and cheating. Cautions that LANTITE may deliver similar unintentional consequences have been echoed in previous research, particularly in relation to Indigenous Australian students, mental health, and diminished confidence in overall personal literacy and numeracy. A range of unintended consequences of standardised testing were recounted in this study, including adverse events for students, manifesting in panic attacks and managing the prolonged effects of anxiety. In extreme cases, some students were denied access to their existing employment due to the grandfathering of LANTITE requirements into courses.
These requirements resulted in students with provisional licences to teach having their employment revoked. The Standards for Educational and Psychological Testing highlight that those who mandate high-stakes tests should be aware of negative consequences, such as unintentional exclusion or testing biases. Furthermore, the standards also recommend that data and information regarding unintended consequences be collected and considered as part of the ongoing evaluation of the testing process. It is unknown whether these details are being collected by the testing authority from students or universities. Whilst a private firm was commissioned by DESE in 2019 and 2020 to consult with universities and students on LANTITE, its recommendations have not been made widely available, and the parameters of that consultation did not specifically address unintended consequences or impacts of the tests on students. Given many students from diverse contexts are first-in-family, strategies and support systems for these cohorts could be considered, including encouraging help-seeking behaviours, specific mentoring support to develop positive study behaviours, and mental health support to assist them with feelings of belonging and with the invisible struggles of defending, within their families, the decision to pursue university studies. --- Additional hurdles for students from diverse contexts A range of additional hurdles for students from diverse contexts were raised by both ITE students and stakeholders. To obtain a reasonable adjustment to the testing conditions, students who have a disability or health-related needs must "…produce evidence through documentation that is no more than 1 year old. Documentation older than one year will be acceptable if it is accompanied by an acceptable update from your medical/health practitioner".
For students enrolled in a four-year undergraduate program, this may require them to attend additional medical appointments to obtain repeat documentation previously supplied to universities. For several of the students in this study, their disability was diagnosed a long time ago. As a result, they did not possess diagnosis letters or documentation prepared within the required ACER timeframes. Whilst additional medical documentation and evidence are needed to support an application for reasonable adjustment, the financial costs and the additional time burden incurred by students in obtaining these are a further complication in their testing experience. This could be alleviated by lengthening the validity period of medical documentation for students with disability or medical conditions from 1 year to 5 years, in line with the documentation validity for students with learning disabilities. Adjustments are also only available to students who have a medical condition or disability. Therefore, students who have cultural requirements, or who require support because of their diverse backgrounds, cannot access an adjustment opportunity. In the current climate of teacher shortages, some ITE stakeholders in the study expressed concern about the negative effects the implementation of LANTITE has had on attracting students to the profession, especially those from diverse backgrounds. Concerns relating to difficulties in the future recruitment of Indigenous Australian teachers because of LANTITE were foreshadowed by van Gelderen, although there has been no specific study to date exploring recruitment impacts due to LANTITE. ITE students from diverse contexts typically have faced additional hurdles gaining admission to, progressing through, and graduating from ITE qualifications.
As highlighted previously, many students from diverse communities are first-in-family, bringing unique challenges requiring targeted support from higher education providers to support their transition, study and ultimate success as graduates. Considerations for first-in-family participants could be offered by ACER in the form of suitable testing adjustments and support. For some ITE students, who may experience social or financial hardship, accessing appropriate technology and equipment to undertake the tests can be problematic. Unreliable internet connectivity can be a considerable stressor for students who live in remote or regional communities. For some of these students, further difficulties related to living in multi-generational housing arrangements, living in a sharehouse, or having young children in the home added to the challenge of sitting the test at home via remote proctoring. Students from diverse communities are disproportionately overrepresented as first-in-family and may face further family pressures and cultural expectations to succeed on the first attempt. This performance pressure can be created by the expectation of being and becoming a positive role model, or of 'slipstreaming', within their communities. When failure or obstacles are encountered, it is not only the student who experiences the failure, but often the whole family or community who share in the experience. One student, who had the additional pressures of having received a scholarship and an offer of employment, was unable to pass LANTITE. This scenario created a situation of disappointment and conflict with her family and community. Further, the student experienced feelings of shame at not having lived up to the course requirements and community expectations. --- Implications The Standards for Educational and Psychological Testing highlight the importance of having common principles and guidelines when responding to test-taker characteristics.
These common principles, however, should be balanced against sensitivity to individual characteristics. If reasonable adjustments are not made for individual need, then the validity of the test score could be compromised. The application of reasonable adjustment with respect to LANTITE focuses on disability and health-related needs. Considerations for students who have cultural or socio-economic needs, however, are not factored into the LANTITE testing adjustments. Both ITE stakeholders and students in this study identified concerns relating to students from EALD backgrounds and/or Indigenous Australian students, who can have quite different, but specific, needs. As described by Anastasia, "Aboriginal and Torres Strait Islander students [from remote communities] have told me, the process of going and sitting in the testing centres is quite daunting for them." Consideration of culturally appropriate adjustments, or the provision of additional time in the testing environment, may result in students being able to demonstrate their performance against the standards more accurately. Consideration could be given to offering a different testing process or specific support to these students, as it may assist their success; for example, for some First Nations peoples an oral assessment component may be more appropriate, as this may allow the student to provide the cultural context in their response. Some of the tensions related to adjustment could potentially be eliminated through the introduction of principles related to Universal Design for Learning in higher education, which also has the potential to benefit the wider community, specifically in the areas of representation, action and expression. Finally, many of the teacher educators in this study have been firm in their advocacy for students undertaking LANTITE, particularly those from diverse backgrounds.
ITE stakeholders in this study described how the additional support they offer to students from diverse backgrounds is, in some cases, deemed to be against university policy. Whilst devolving responsibility for test administration to universities is problematic in terms of funding and resourcing, the precedent for this devolved responsibility exists with other nationally mandated course requirements, including teacher performance assessments and pre-entry screening and selection requirements. As the climate of teacher shortage intensifies both globally and in Australia, and the diversity of school populations flourishes, unintended barriers to the progression and completion of ITE programs should be closely monitored to ensure teacher diversity is not adversely impacted. Future research could specifically explore LANTITE's impact on teacher diversity, or its impact on prospective students considering a career in teaching. --- Data availability The dataset and materials are held by the authors as per the details of the Human Ethics Approval for the data collected. The data and materials are not publicly available and have been stored in accordance with the Human Ethics Approval of the Murdoch University Ethics Committee. --- Author contributions Both Dr AH and Dr RS make the following declaration in relation to contribution. We have both made substantial contributions to the conception or design of the work, or the acquisition, analysis, or interpretation of data, or the creation of new software used in the work. We have both drafted the work or revised it critically for important intellectual content. We both approved the version to be published, and we both agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. --- Funding Open Access funding enabled and organized by CAUL and its Member Institutions.
Both authors declare that this research did not attract any funding or grants. ---
Australian schools are diverse, supporting students from a wide range of racial, cultural, and linguistic backgrounds, as well as students with disability. Ironically, efforts to ensure an equally diverse teacher workforce have been ineffective. Attempts to improve broader representation among teachers have been hampered by a homogenous approach to teacher recruitment and education. In 2016, the Literacy and Numeracy Test for Initial Teacher Education (LANTITE) became a graduation requirement for teachers. The aim of this research is to explore the test-taking experiences of students (pre-service teachers) from diverse backgrounds, and of the stakeholders who support them. A thematic analysis of data from a larger mixed methods study revealed additional tensions for students from diverse backgrounds, including unintended consequences such as traumatic experiences, as well as additional hurdles to be overcome in order to succeed. This study provides unique insights into the additional pressures and hurdles students from diverse backgrounds experience when completing this high-stakes test.
Introduction Nepal is one of the world's 20 most disaster-prone countries. Every year in Nepal hundreds of people die from natural hazards. The first recorded earthquake, in 1255 AD, killed one-third of the population of the Kathmandu Valley; the last great earthquake, in 1934 AD, resulted in more than 10,000 deaths in the Kathmandu Valley, and most of the infrastructure and major heritage sites had to be rebuilt. There have since been earthquakes causing severe human and physical loss in 1980, 1988 and 2011. On Saturday, 25 April 2015 at 11:56 local time, a 7.6 magnitude earthquake, as recorded by Nepal's National Seismological Centre, struck Barpak in Gorkha district in Nepal. 8,848 people died, 22,307 were injured, and 868,042 houses collapsed due to the great earthquake. It is estimated that the lives of eight million people, almost one-third of the population of Nepal, were impacted by the earthquake. Thirty-one of the country's 75 districts were affected, of which 14 were declared 'crisis-hit'. The destruction was widespread, covering private and public buildings, heritage sites, schools and health posts, roadways, bridges, water supply systems, and hydropower. Subsistence-based households in rural areas were badly affected by the earthquake. Not only did the earthquake destroy physical infrastructure, it also affected family structures and relationships. In particular, joint families in the rural areas have been split into nuclear families, and the earthquake has been a decisive factor in this change in family dynamics. This article examines the effects of the devastating 2015 earthquake on families in Nepalese society. The primary aim of this study was to investigate the influence of post-disaster government policies on family dynamics in Nepal. To accomplish this, I gathered qualitative data through fieldwork, employing an exploratory qualitative approach.
The primary data were procured through a combination of interviews, observations, and Key Informant Interviews. In addition, I incorporated secondary data from various sources, including journals, books, government documents, and newspapers. In this study, we sought to gather the information necessary to address the research question posed. To do so, data were systematically collected from individuals residing in the disaster-affected village of Sanichaur, located within Ramechhap District. To facilitate data collection in an organized manner, a comprehensive checklist was meticulously prepared for use during fieldwork. This checklist served as a valuable tool in capturing a wide range of relevant information. Following data collection, a rigorous qualitative analysis was conducted to interpret and describe the insights derived from the gathered qualitative information. This analysis provided a deeper understanding of the factors at play within the study area and contributed to addressing the research question in a meaningful way. --- Understanding Disaster and Family Disaster is a socially disruptive event which causes physical and social harm. A disaster is an event which incurs physical damage and losses and/or disruption of routine functioning. Both the causes and the effects of these events are related to the social structures and processes of societies or their subunits. Disaster can be viewed as being created by hazardous fleeting events, like an earthquake or hurricane, that disrupt everyday routines. Quarantelli defines disaster as a social phenomenon. According to him, disaster is socially constructed and rooted in the social structure of the community affected by a natural hazard. Similarly, Simmel argues that disaster is an event that can be designated in time and space and which has impacts on social units. The social units enact responses that are related to these impacts.
Social scientists argue that disaster is defined by human beings, not by nature: not every windstorm, earth tremor or rush of water is a catastrophe if there are no serious injuries, deaths or other serious losses. Calamities are natural, but disasters are social. Disasters that result in a huge loss of life, assets and livelihoods are socially created. Historically, ideas about disasters have gone through three important phases. Traditionally, catastrophes were attributed to the supernatural; they were characterized as 'Acts of God'. The rise of Enlightenment secularism led to an important shift in the way society conceptualized disasters. The development of science as the new source of knowledge altered people's perception of disaster, and they were increasingly seen as 'Acts of Nature'. In recent times, the view that disasters are caused by 'Acts of Nature' has gradually been displaced by the idea that they result from 'Acts of Humans'. Disaster can thus be defined as an event that causes the loss of many lives and of property owned by human beings, whether the cause is human or natural. Natural calamities like earthquakes, landslides, volcanic eruptions and floods can occur in nature frequently, but unless they affect human beings they cannot be called disasters. Although events such as hurricanes, floods and earthquakes serve as triggers for disaster, disasters themselves originate in social conditions and processes that may be far removed from the events themselves, such as deforestation, environmental degradation, factors that encourage settlement in hazardous areas, poverty and other forms of social inequality, low capacity for self-help among subgroups within the population, and failures in physical and social protective systems. The family has often been regarded as the cornerstone of a society.
There are various forms of the family, the universal basis of all human societies. The family has been seen as a universal social institution, an inevitable part of human society. In a study entitled Social Structure, Murdock examined the institution of the family in a wide range of societies. The family is a social group characterized by common residence, economic cooperation and reproduction. It includes adults of both sexes, at least two of whom maintain a socially approved sexual relationship, and one or more children, own or adopted, of the sexually cohabiting adults. The family is the basic unit of social organization, and it is difficult to imagine how human society could function without it. According to Burgess and Locke, the family is a group of persons united by ties of marriage, blood or adoption, constituting a single household, interacting with each other in their respective social roles of husband and wife, mother and father, brother and sister, creating a common culture. Ogburn and Nimkoff say that the family is a more or less durable association of husband and wife with or without children, or of a man or woman alone with children. According to MacIver and Page, the family is a group defined by sex relationships sufficiently precise and enduring to provide for the procreation and upbringing of children. They also describe the family as a group of persons whose relations to one another are based upon consanguinity and who are therefore kin to one another. Malinowski argues that the family is the institution within which the cultural traditions of a society are handed over to a newer generation. The family is one of the few social structures which exist in all cultures and societies.
It is a universal group which represents cultural continuity and tradition, and which is said to fulfill important social functions such as the introduction through birth and socialization of new members into society. Different types of family exist across the world. The small family unit, often referred to as the nuclear family, typically consists of two parents residing with their biological children under one roof. The definition of a nuclear family can, however, be nuanced: a nuclear family may contain more members than a joint family, for instance when a couple has six children, making a household of eight, while a joint family may have fewer members than this. As sociologists, it is therefore useful to employ a more precise distinction based on primary and secondary/tertiary kinship ties: a nuclear family consists solely of primary kin, while a joint family incorporates both primary and secondary/tertiary kin. In practice, nuclear families are more prevalent in urban areas, where households tend to be smaller, whereas joint families are more commonly observed in rural settings, where extended family members such as grandparents, aunts, uncles, and cousins frequently share the same household. In traditional societies, the majority of individuals belong to joint families because of the strong emphasis on extended kinship networks. --- Impact of Disaster on Family Relations --- Disasters create complex changes in interpersonal dynamics within families.
There is strong consensus that post-disaster family functioning is an important factor explaining variability in the psychological distress of family members. When disasters strike entire families, coping becomes a fundamentally collective process. One aspect of family functioning that may interfere with adjustment is unwillingness to share feelings and reactions about the disaster. Disasters may also lead to changes in dynamics and structure within the family. Cohan and Cole examined changes in various social circumstances in the state of South Carolina, USA, during a period that encompassed Hurricane Hugo. Similarly, Hutchins and Norris found that exposed survivors were more likely to report new conflicts with extended family than were non-exposed survivors. Family conflict and a negative atmosphere have been related to higher levels of distress among child and adolescent disaster survivors. --- Study Area Ramechhap was among the districts most affected by the 2015 earthquake. According to the Population Census 2011, there are 43,883 households in the district, with a total population of 202,646. According to the same census, Ramechhap Municipality had a population of 28,612 in 6,126 individual households. Ninety-nine percent of houses in Ramechhap were destroyed by the 2015 earthquake. In Sanichaur, a total of 13 people were injured, 1 child died, and 57 houses were destroyed. Sanichaur is a rural place with basic transportation facilities, and the majority of its population depends on agriculture for its livelihood. According to the 2011 census, there were 67 households in this village, among which 17 that had been joint families before the earthquake were randomly selected for this study: 11 households from the Brahmin community, 4 from the Janajati community, and 2 from the Dalit community. During the study, one third-gender person was identified. Before the earthquake there were 14 male and 3 female heads of household.
Local teachers and representatives of political parties were interviewed, and their views on the effect of the earthquake on family structures have been included in this report. The field visit was carried out in Sanichaur village in May and June of 2019. The effects of the earthquake on families discussed in this report are based primarily on primary data. Case study, recollection, narrative, and observational approaches were adopted for analyzing the effects of the earthquake on families ethnographically, and secondary data were collected as supporting evidence. In the study households, at least three generations shared the same kitchen before the earthquake. The majority of people are literate, because there is a secondary school in the village and almost all people have attended school. Almost all households have an agricultural livelihood, although some members are government employees: among the 17 households, 9 have at least one person in a government job, including in the Nepal Police, the Nepal Army, teaching, and the civil service. Of the total population in the study households, 19 young people had left the village for study, and only 2 people had gone to foreign countries in search of employment. --- Breaking the Family: Joint to Nuclear There was a practice of joint family living in Sanichaur village, and most households comprised three generations. The Government of Nepal announced a relief grant of 2 lakh rupees for each household whose house was destroyed in the earthquake; this was a provision of the Operational Procedure 2073. Later, the amount was increased to 3 lakh rupees. The grant was to be distributed once to each household whose house was completely destroyed, and the directives stated that a household with multiple houses would not receive the grant if one of its houses remained undestroyed. This provision later became highly politicized.
The provision was amended so that the all-party mechanism could prepare name lists with its recommendations, on the basis of which the grant was then distributed. All the families in Sanichaur separated in order to receive the grant amount. Patan Pragya ISSN 2594-3278 Those who had migrated to the cities built houses back in their village. In two of the households under study, it was found that the husband and wife received two grants by getting divorced. The number of households increased from 17 to 43 after the earthquake. The numbers of heads of family before and after the earthquake are listed in Table 1 below. The family structure of Sanichaur village changed drastically after the earthquake. Of the 17 households, 13 were joint families and only 3 were nuclear families. They shared a common kitchen and had common lawns and yards; they worked collectively on the farm, pooled their income, and the head of the household managed their expenditure collectively. From the 13 joint families, 13 male children had gone to the district headquarters, Kathmandu, or other places for study and employment. --- Source: Population Census and Field Study 2019 There were only 7 households in 2058 BS, which increased to 14 after 10 years. The 14 households had increased to 17 by 2070 BS, but this number rose to 43 only two years later. The number of households has thus increased dramatically, but the population has not increased in a similar ratio: the total population was 45 in 2058 BS and rose to only 77 in twenty years. Source: Field Study 2019 Table 3 above shows the age-wise distribution of the population, in which the 40-60 years age group is the largest, followed by the group above 60 years. Ramechhap was one of the 14 crisis-hit districts in 2015. The earthquake caused great destruction in this district, and the majority of houses were destroyed.
The table below lists the physical destruction caused by the earthquake in the village (Source: Field Study 2019); it shows the total number of houses before the earthquake. --- Expectation of Relief and Fragmentation of the Family The Government of Nepal decided to distribute 15 thousand rupees to each severely affected family immediately after the earthquake. The local government entities lacked people's representatives at that time, so all-party mechanisms were established to recommend people for the relief amount. The all-party mechanisms went house to house to collect lists of people and made recommendations. People in the village believed that the Government would distribute the house-reconstruction grant on the basis of the same list. They therefore requested the party cadres to whom they were close to enlist them as living separately, giving false information that they were living separately under the same roof with different kitchens, and pressured the cadres to recommend them for the house grant. A local in the village says: We three brothers were staying together with our parents and our children in the same house before the earthquake. When the Government announced the house grant, we filled in separate information and showed that we had 4 households altogether. All four of our households received house grants separately. During the earthquake, the local government bodies lacked people's representatives, so the initial data collection was done by the all-party mechanisms. The party representatives made recommendations for those who were affiliated to their parties or who were close to them, and in doing so they recommended grants for almost all members of a family as separate households. A single family before the earthquake thus became 4 different households.
A local teacher says: The all-party mechanisms falsely made recommendations for the families who supported their party. All members were enlisted as different households just for receiving the grant. The Government of Nepal later announced an opportunity to enlist names that had been missing from the first list. During this phase, several couples registered their names separately, claiming that they had already obtained a divorce, although it remains uncertain whether the Village Development Committee has the authority to issue divorce certificates. In our study of 17 households, we identified a concerning trend in which two households had fraudulently received grants from the government: husbands and wives who falsely reported being divorced. During our field study, we encountered a person who claimed to be a divorcee, but upon further investigation it was revealed that they were still married. The husband had fabricated a document showing their divorce solely for the purpose of receiving government grants. Consequently, they received separate grant amounts and constructed two houses side by side, although they continued to live together as a married couple. A 'divorcee' interviewed during the field study said: We are not really divorced. My husband made a document showing our divorce just to receive the grant from the Government. We received the grant amounts separately and built two adjoining houses, but we live together. The earthquake has also brought changes in land ownership. According to the law of Nepal, only sons have rights to their parents' property; there is no provision granting this to daughters. Thus it is the practice in rural areas of Nepal for unmarried daughters to stay with their parents or brothers and depend on them for everything. After the earthquake, however, daughters' ownership of land has been established in several households.
Out of the 14 households included in our study, we found that in 7 households parents had legally transferred a portion of their property to their daughters in order to qualify for the government grant. One parent, who had transferred property to a daughter and received the grant on her behalf, shared the following perspective: My daughter is 19 years old and currently unmarried. When the government announced the grant program, I took the necessary steps to legally transfer a portion of our property to her name. This legal documentation allowed us to apply for and receive the grant on her behalf as well. --- Role of Government Policy The Government of Nepal (GoN) established the National Reconstruction Authority (NRA) on 25 December 2015 as the legally mandated agency for managing earthquake recovery and reconstruction in Nepal. The NRA provides strategic guidance to identify and address the priorities for recovery and reconstruction, taking into account both urgent needs and those of a medium- to long-term nature. The Operational Procedure for the Reconstruction of Earthquake-Destroyed Private Houses 2073, prepared by the NRA, made provisions for the distribution of relief packages to earthquake-affected people. According to this procedure, only households that had completed their legal separation at the respective District Land Revenue Office before Baisakh 2072 would receive the grants separately. The provisions of the directives were later amended repeatedly. In the meantime, the NRA formed a policy of providing the grant amount only to those possessing their own land, and only for the reconstruction of destroyed houses. However, bound by political pressure and other unseen factors, the NRA amended this provision and accepted all the recommendations of the all-party mechanisms. Thus new households, some consisting of only one member, continued to be formed. --- Conclusion Subsistence-based households in rural areas have been badly affected by the earthquake.
Similarly, the earthquake destroyed not only physical structures but also family structures and relationships. Whether family structure and relationships disintegrate involuntarily or voluntarily is a complex question, and our research shows the nuanced nature of this phenomenon. Families grappling with economic hardship and striving to retain control over their resources often face profound and difficult choices, while other families consciously opt for separation as a strategic means of accessing relief grants or governmental support, motivated by the urgency that arises in times of crisis. As the study has revealed, the genesis of family separations is not monolithic; it is characterized by intertwined involuntary and voluntary factors, a multifaceted puzzle that resists simplification. In essence, rural joint families have been fragmented into nuclear families. Unger and Douglas highlight the critical role that family plays in facilitating adaptation to stress, providing emotional and marital support through formal and informal means, especially during and after a disaster. Traditionally, families around the world tend to draw closer together to endure the hardships of a disaster, relying on their familial bonds for support and resilience. The situation in Nepal, however, appears to be heading in the opposite direction: in the aftermath of the earthquake, instead of families coming together, many have fragmented, with joint families transformed into nuclear ones.
This shift prompts us to ask whether Nepali society is indeed moving against the current, compared with the more common global pattern of families drawing closer together in times of disaster.
Nepal, being among the world's most disaster-prone countries, witnesses numerous fatalities from calamities each year. The devastating 2015 earthquake in Nepal claimed the lives of 8,848 individuals, left 22,307 injured, and caused the collapse of 868,042 homes. This research paper examines the repercussions of the earthquake for family dynamics in Nepal. Unger and Douglas (1980) assert that families play a crucial role in facilitating adaptation to stress, providing emotional and marital support through formal and informal channels during and after disaster events. This study endeavors to ascertain the impact of the 2015 earthquake on changes in family relationships and structures.
Introduction The beginning of the third decade of the 21st century has been characterized by the emergence of new problematic security situations, namely the COVID-19 pandemic, the European/global economic crisis, and the military crisis caused by Russia's aggression against Ukraine. Their individual or cumulative effects have a negative impact on the sustainability of society at the global level, primarily through significant disruption of two fundamental pillars: economic development and the well-being of citizens. The study aims to demonstrate that current security issues influence both respondents' perception of the sustainability of Romanian society and the willingness of young citizens in Romania (aged 18-35) to accept the restriction/limitation of certain fundamental rights and freedoms, as well as to fulfill certain constitutional obligations, in exceptional situations caused by the mentioned crises. Such attitudes have the potential to impact the efforts of Romanian authorities to develop a sustainable society. --- 1. Literature Review Specialized studies highlight that the declaration by the World Health Organization, in March 2020, of the COVID-19 pandemic gave rise to a new existential reality characterized by the affectation of some fundamental rights and freedoms of Romanian citizens [1], such as the right to information, education, and culture; the right to a healthy environment and health protection; the free movement of persons and freedom of assembly; and the right to work, social protection, and economic freedom. This has negatively influenced citizens' standard of living, contrary to the goals of the Agenda for Sustainable Development (ASD) adopted at the UN Development Summit in September 2015 [3].
The novelty of this security risk for humanity [4] and its consequences for the health of the population [5] were at the origin of the voluntary or imposed acceptance, at least in the early phase of the pandemic, of the temporary curtailment, limitation, or removal of some fundamental rights and freedoms [6]. The pandemic crisis, which forced many countries to severely restrict their economic activities, keeping only strictly necessary services operational, also marked the onset of a global economic crisis [7][8][9], whose manifestations and developments persist, especially in the energy and food sectors, on the European continent. This fundamentally affects the desire to promote "sustained economic growth, open to all and sustainable, full and productive employment of the labor force" [3] and the standard of living of the population, and has been at the origin of social convulsions [12]. Public rhetoric about the new global economic crisis [13,14] is diverse and contradictory, with many voices advancing conspiracy theories that it is the result of an orchestrated plot to reset the international order and reconfigure the poles of power [15]. The solutions put forward by the leaders of some international organizations, according to which the economic crisis can be managed through individual or collective sacrifices that may affect the standard of living of the population [16], including accepting the limitation of fundamental rights and freedoms of citizens, no longer seem to be accepted, and may represent the seeds of actions that could affect the stability of the continental security equation [17,18], being at odds with the global program of action for universal development found in the ASD.
Also worth mentioning are the European and global security effects of the military crisis caused by the "special military operation" undertaken in Ukraine by the Russian Federation starting on 24 February 2022, which is still in full swing, has amplified the economic crisis [19][20][21], and has created the preconditions for a reconfiguration of the security architecture at the global level [22][23][24]. The political, economic, and security consequences of the crisis in Ukraine [25][26][27] reveal that since the end of World War II the European continent has not experienced such major security issues combining aspects of military security with those of human security. We find ourselves in a situation where the objectives of sustainable development at the societal level [28,29] are fundamentally affected, particularly through the blatant violation of the objective "Peace, justice, and strong institutions", which advocates "promoting peaceful and inclusive societies for sustainable development, providing access to justice for all, and establishing effective, accountable, and inclusive institutions at all levels" [3]. This context raises questions about the fulfillment of the assumed objectives [30]. Recent developments in the conflict in Ukraine have raised the possibility of adopting, in exceptional circumstances, measures that may affect the fundamental rights and freedoms of citizens, or that fall within the fundamental duties of citizens ([2], pp. 54-55). One of the aims of this study is to determine the extent to which the public can accept these measures, given that since the beginning of the 21st century we have been witnessing a global "decline in the acceptance of state legitimacy" and of "voluntary obligations" towards state authorities [31].
The research framework of this study centers on the need to interconnect theories, concepts, and perspectives found in the specialized literature regarding the effects of the mentioned crises on societal parameters, together with the authors' research efforts aimed at validating the research hypothesis. --- 2. Research Hypothesis The hypothesis of this research starts from the premise that current security issues negatively influence respondents' perception of the sustainability of Romanian society: they develop feelings of uncertainty, concern, fear, and insecurity, which lead to different behaviors in terms of individual expression, such as accepting the restriction/limitation of certain fundamental rights and freedoms, or fulfilling certain constitutional obligations, in the exceptional situations mentioned. Specifically, it is presumed that: 1. Current security issues negatively influence the degree of acceptability of the restriction/limitation of certain fundamental rights and freedoms, as well as the willingness to fulfill certain fundamental obligations. 2. Individuals with higher education exhibit higher levels of acceptability regarding the restriction of certain fundamental rights and freedoms, and greater willingness to fulfill fundamental obligations, than those with secondary education. 3. Individuals residing in rural areas show higher levels of acceptability regarding the restriction of certain fundamental rights and freedoms, and greater willingness to fulfill fundamental obligations, than those residing in urban areas. 4. In the context of current security issues, the willingness of citizens to exercise certain fundamental rights and obligations can have significant implications for societal sustainability. The theoretical foundation of the research hypothesis is based on an objective analysis of the specialized literature relevant to the issue addressed, as presented earlier.
--- Materials and Methods --- Procedure The study participants received a questionnaire built on the Google Forms platform, which was distributed nationally, in all regions of Romania, through the Facebook social network; the link was also posted on various sites with large numbers of visitors. Completion of the questionnaire was conditional on an affirmative answer to the question regarding permanent residence in Romania and an age between 18 and 35 years. The aim was to cover all geographical regions of Romania. --- Measurements The questionnaire comprised 27 questions and was structured in two parts: 1. Obtaining socio-demographic data and opinions on the economic situation, social well-being, and safety of Romanian citizens, as ASD objectives necessary for the sustainability of society, as a result of the individual or cumulative influences of the crises mentioned; 2. Identifying the degree of acceptability of restricting/limiting certain fundamental rights and freedoms, and the willingness to fulfill certain fundamental duties of citizens, in exceptional situations caused by the crises mentioned that affect sustainable development at the societal level. --- Statistical Analysis of Data The data obtained from the questionnaire were processed using Excel, part of Microsoft Office Professional Plus 2021, and IBM SPSS Statistics 26, installed on a Windows 11 Professional operating system. The collected data were centralized in an Excel file and then visualized, extracted, and statistically analyzed. The variables used for the analysis were the participants' opinions on: 3. The influences of the global/European economic crisis on the perception of the impact on the economic situation, social well-being, and citizen safety in Romania, seen as objectives of sustainable development in society; 4.
Influences of the military conflict in Ukraine on perceptions of quality of life, social welfare, and the security of Romanian citizens, viewed from the previously mentioned perspective; 5. Influences of the economic crisis and the military crisis, seen as factors that negatively affect sustainability and sustainable development at the societal level, on the degree of acceptability of restricting certain fundamental rights and freedoms, and on the willingness of young citizens in Romania to fulfill certain fundamental duties in exceptional military situations. The Chi-square test was used to determine the degree of association between variables. The questionnaire allowed us to extract a dataset that we analyzed statistically; to determine the degree of correlation between selected variables, we used the Pearson statistical test. In this way, we could observe whether there was any significant correlation between different variables and the participants' decision to accept or not accept the limitation of certain rights and freedoms or the fulfillment of certain obligations. --- Results The questionnaire was administered to 826 people, whose socio-demographic data are presented in Table 1. Note: 1. The processing of the results according to the two age categories was based on the national legislation on the preparation of the Romanian population for defense [33], which stipulates that "upon the declaration of mobilization and of a state of war or the establishment of a state of siege, the performance of military service as a serving soldier becomes compulsory for men aged between 20 and 35 who meet the criteria for military service", while the law does not apply to the remaining adults in the sample. 2.
Although the study is complex and requires a high level of knowledge from respondents regarding certain security issues, the questionnaire did not include indicators of respondents' socioeconomic status, which could have provided insight into their knowledge and ability to answer the questions correctly. General approach. The research shows a representative sample of respondents who have been directly affected, to a large or very large extent, by the global economic crisis since 2020, with the consequences being seriously felt by the surveyed population. The problem analyzed falls under the "No poverty" and "Zero hunger" objectives of the ASD [3]. In this context, respondents' confidence in the way the state authorities are managing the current problematic situation is very low. It should be noted that 53.26% of respondents consider that the major economic problems Romania is facing, in the context of the multiple crises in Europe, are likely to affect its national security and sustainable development to a large or very large extent. In the context of diverse and controversial rhetoric about the causes of the global/European economic crisis, the idea that it is the result of a globally orchestrated conspiracy to reset the international order and reconfigure the poles of power is shared, to a large or very large extent, by 41.21% of respondents. Note also that a very low percentage of respondents think that the media correctly present the consequences of the global economic crisis as it affects Romania, while a high proportion of respondents express the opposite opinion.
Regarding the causes of the economic crisis manifesting at the European/global level, 54.82% of respondents consider the conflict in Ukraine, caused by the "special military operation" undertaken by the Russian Federation, to be the source of the crisis to a large or very large extent, while 21.81% do not share this view. On the other hand, 56.35% of respondents believe that the Russian Federation's operation was aimed, to a large or very large extent, at creating a global economic crisis, while 51.21% believe that the invasion of Ukraine is a premeditated action aimed at resetting the world order. Despite public rhetoric supporting the idea that the military conflict in Ukraine could expand beyond its borders, 33.01% of respondents believe that this is feasible to a large or very large extent. It is worth noting, in the context of the above and of the European institutions' claim that social solidarity measures need to be adopted, that 37.95% of respondents are skeptical about the European Union's solidarity in the event of a worsening of the economic situation in Romania, while 29.48% believe that the EU Member States would provide a large or very large share of the necessary support. On the other hand, 49.51% of respondents believe that Romania's membership of the European Union is likely to contribute, to a large or very large extent, to limiting the negative effects on Romania of the crisis in Ukraine.
Starting from the precedent of the COVID-19 pandemic, when for medical reasons the authorities of many states adopted measures restricting or limiting some fundamental rights and freedoms of citizens, the present study highlights that, in the context of the deepening economic crisis at the European/global level, 16.27% of respondents agreed to the limitation of some fundamental rights and freedoms of Romanian citizens, such as the right to information, education, or culture, while 63.37% held the opposite opinion. In the same context, only 18.80% of respondents expressed large or very large agreement with limiting the right to work and social protection or economic freedom as a result of suspending the activities of some economic entities or moving part of them online. Regarding acceptance of a lowering of the level of social protection, 20.12% agree to a large or very large extent with such measures. In the context of the worsening military crisis in Ukraine, which could also have related consequences in Romania, 66.77% of respondents expressed disagreement with accepting the restriction/limitation of fundamental rights and freedoms of Romanian citizens, such as the right to information, education, culture, or health. In the same context, only 18.43% of respondents agreed, to a large or very large extent, to the limitation of the right to work and social protection or economic freedom as a result of the suspension of the activities of some economic entities or the moving of part of them online. On the other hand, the limitation of some fundamental rights and freedoms of citizens in Romania, such as the free movement of persons or freedom of assembly, would be accepted to a large or very large extent by only 18.08% of respondents.
Acceptance of a reduction in the level of social protection that would provide citizens with a decent standard of living is found to a large and very large extent among 18.31% of respondents. Regarding the level of readiness to fulfil the fundamental duties of citizens, as provided for in the Romanian Constitution, in the event of the adoption of exceptional measures at the national level as a result of a possible extension of the military conflict in Ukraine, the survey shows a moderate percentage of respondents expressing agreement to a great and very great extent. In the event of the adoption of such exceptional measures at the national level, only 33.37% of respondents would agree to a large or very large extent to contribute to the defense of the country through direct participation in theatres of operations. Similar percentages are found in terms of willingness to participate in ensuring defense capabilities, such as working in sectors adjacent to the military. Regarding the age of the respondents, the results of the Chi-Square test show a significant association with respondents' perception of the acceptance/restriction of fundamental rights and freedoms due to economic and military considerations, as well as with the level of willingness to fulfill constitutional duties. In terms of gender, the test results indicate a significance value greater than the conventional level of 0.05 for questions Q18-Q26, suggesting that there is no significant association between the analyzed variables based on the available data. However, for question Q27, a significant association is observed between these variables. Regarding the residential area of the respondents, the test results reveal a significant association for four questions; for the other questions, the association is not significant.
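As a methodological aside, the Chi-Square tests of association reported above can be sketched as follows. This is a minimal illustration only: the contingency table (age group by acceptance level) uses invented counts, not the study's data; only the 0.05 threshold matches the conventional level cited in the text.

```python
# Minimal sketch of a Chi-Square test of association, as used in the study.
# NOTE: the counts below are invented for illustration; they are NOT the
# survey's actual cross-tabulation of age group by acceptance level.
from scipy.stats import chi2_contingency

# Rows: age groups (18-20, 21-35); columns: acceptance of restrictions
# (low, moderate, high). Totals sum to 826, the study's sample size.
observed = [
    [120, 80, 40],
    [150, 260, 176],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")

# Decision rule used in the text: association is significant when p < 0.05.
if p < 0.05:
    print("significant association")
else:
    print("no significant association")
```

With these hypothetical counts the test reports a significant association; by contrast, for the gender variable the study found significance values above 0.05 for Q18-Q26.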
Regarding the education level of the respondents, the interpretation of the results from Table 9 highlights a significant association for all questions. This confirms that the education level of the respondents is an important indicator of their level of perception. Correlations between different variables were then examined by applying the Pearson test, which allowed us to determine whether there were strong relationships between the selected variables. --- Discussion The results of the measurements carried out by administering the questionnaire to 826 respondents highlight their perception regarding the impact on the economic situation, social well-being, and citizen safety in Romania-seen as fundamental objectives of sustainable development-as a result of the global/European economic crisis. The consequences of this crisis are seriously felt by the surveyed population. It should be noted that Eurostat statistics showed, at the end of 2021, that 34% of Romanians were living in poverty, isolated, or without the possibility to carry out gainful activities [34]. Comparing the data presented, it can be concluded that the evolution of the economic crisis and the situation in Ukraine has heightened the perception of worsening poverty among the Romanian population, an aspect at odds with the sustainable development objective "Without poverty", which aims to "eradicate poverty in all its forms and in any context" [3]. In this context, the respondents' confidence in the capacity of the state authorities to manage the current problematic situation in Romanian society is decreasing.
This aspect may generate serious problems in terms of public acceptance of public policies aimed at ensuring good governance at the national level, given that public trust in public institutions is the basis of their legitimacy and makes a major contribution to ensuring social cohesion [35], an aspect at odds with the sustainable development objective "Peace, justice, and effective institutions", which aims at "creating effective, responsible, and inclusive institutions at all levels" [3]. The fact that 53.26% of respondents believe that the major economic problems currently facing Romania are likely to affect its national security and sustainable development to a large and very large extent shows the high level of social responsibility of the surveyed population, despite its young age, and the importance of the economic dimension in the national security architecture [36]. This conclusion is in line with recent findings from specialized studies, which highlight that institutional indicators of sustainable development are linked to the economic sphere, with close relationships between "the institutional environment, the presence of threats to sustainable development, and the state of the country's economic security" [37]. Regarding the causality of the emergence of the global/European economic crisis, noteworthy is the high percentage of respondents who believe that it is the result of a globally orchestrated conspiracy to reset the international order and reconfigure the poles of power. Such a perception may lead to an incorrect interpretation of the effects of the phenomenon of globalization, considered to be the "architect" of today's society, with studies supporting the idea that globalization, through its effects, is an important factor in shaping the international security equation, and that its impact on the evolution of relations between states is contradictory [38].
In this context, the study shows a high percentage of respondents who think that the media does not correctly present the consequences of the economic crisis affecting Romania, even though the media is a key reference point for understanding problematic aspects of society and for forming public opinion. On the other hand, 54.82% of respondents consider the conflict in Ukraine, caused by the "special military operation" undertaken by the Russian Federation, as the source of the economic crisis, an action that fundamentally affects the provisions of the sustainable development objective "Peace, justice, and effective institutions" found in the ASD, which aims to "promote peaceful and inclusive societies for sustainable development" [3]. It can be argued that the respondents' opinions have crystallized against the backdrop of media campaigns undertaken both nationally and internationally, which highlight the economic sanctions imposed by the international community on the Russian Federation [39] and the direct/indirect negative effects on the economies of many European countries that are dependent on its energy resources [40], which can affect the sustainability of society at a regional level. It is worth noting that 56.39% of the respondents believe that the Russian Federation's operation was largely or very largely aimed at generating a global economic crisis, while 51.21% of the respondents believe that the invasion of Ukraine is a premeditated action aimed at resetting the world order. This theory is intensively used in international political and diplomatic circles [41,42]. Regarding the possibility of the military conflict in Ukraine spreading to nearby states or to the European/global level, the study shows that 34.58% of respondents believe that this is feasible to a large and very large extent.
In the context of the amplification of the adverse consequences of the global/European economic crisis, also felt in Romania [43], and of the calls by the leadership of the European institutions for the adoption of social solidarity measures [44], 29.4% of respondents believe that such solidarity will be achieved to a large and very large extent. At the same time, 49.51% of respondents believe that Romania's membership in the European Union is likely to contribute substantially to limiting the negative effects of the crisis in Ukraine, figures confirmed by the INSCOP survey conducted in early 2022, which reveals that 54.9% of Romanians believe that Romania's accession to the European Union has, on balance, brought advantages to the country [45]. In this context, it can be argued that, in the respondents' opinion, a military conflict such as the one in Ukraine is likely to lead to solidarity among the states of the European Union [46], which can be seen as in line with the objective "Partnerships for achieving goals"-aiming to "strengthen the means of implementation and revitalize the global partnership for sustainable development"-as provided in the Sustainable Development Agenda [3]. The restriction/limitation of some fundamental rights and freedoms of citizens, which are also found among the objectives of the ASD, as possible measures to manage the forms of economic crisis manifesting at the European/global level, is accepted to a large and very large extent by 16.27% of respondents. These choices are made in a context in which the analyzed category of individual rights and freedoms is guaranteed by international law and provides individuals with a set of "social opportunities" that allow them to participate in social life [47].
Limiting the right to work and social protection or economic freedom, in the hypothetical situation of suspending the activities of economic entities or moving a significant part of them to the online environment, would be accepted to a large and very large extent by only 18.80% of the respondents, since this conduct can be seen in correlation with the objective "Decent work and economic growth" found in the ASD [3]. These conclusions are in line with recent research in the field, which highlights the necessity of employment and the promotion of decent work as an imperative for sustainable development [48]. It should be noted that this issue, in a general sense, was regulated at the European level in May 2021 through the Action Plan for the implementation of the European Pillar of Social Rights, with an emphasis on equal opportunities and access to the labor market, fair working conditions, and social protection and inclusion as imperatives for sustainable development [49]. On the other hand, acceptance of reducing the level of social protection, which ensures citizens a decent standard of living, is found among 20.10% of respondents. An analysis of the data presented above highlights that there are no significant differences between the choices of the respondents in relation to gender; none of the analyzed variables shows differences in perception. Regarding the age of the respondents, the results of the applied Chi-Square test show a significant association with respondents' perception of the acceptance/restriction of fundamental rights and freedoms due to economic and military considerations, as well as with the level of willingness to fulfill constitutional duties. In relation to the residential area of the respondents, the test results reveal a significant association for four questions, as defined in Tables 6-8.
In order to determine the degree of correlation between the acceptance of limiting fundamental rights and the effects on social sustainability, we conducted the Pearson correlation test between the variable "degree of experiencing the consequences of the economic crisis" and the variables "limitation of fundamental rights and freedoms of Romanian citizens, such as the right to information, education, culture, and health", "limitation of the right to work and social protection or economic freedom, as a result of the suspension of activities of some economic entities or their partial move to the online environment", and "impact on the standard of living due to the reduction of social protection that ensures a decent standard of living for all citizens". Observing the results in Table 10, we understand that people who feel the effects of major economic problems in Romania and are aware of their effects on sustainable development have a high degree of acceptance of the diminution/limitation of some fundamental rights, the correlations being very strong. The restriction/limitation of some fundamental rights and freedoms of citizens in Romania, found, as mentioned before, among the objectives of the ASD, as possible extreme measures in the event of an exacerbation of the military crisis in Ukraine, would be accepted to a large and very large extent by 15.43% of respondents. The results highlight the importance of preserving the fundamental rights and freedoms of citizens, including in the context of problematic situations caused by military conflicts. In the same context, only 18.33% of respondents agreed to a large or very large extent with the limitation of the right to work and social protection or economic freedom in the case of temporary suspension or full or partial transfer of economic activities online.
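The Pearson correlations behind Table 10-style analyses can be illustrated with a short sketch. The two Likert-scale response vectors below are invented for demonstration (1 = very small extent, 5 = very large extent); they are not the survey's data, but they show how the strong positive correlations reported would appear.

```python
# Minimal sketch of the Pearson correlation test used in Table 10-style
# analyses. NOTE: the Likert responses below are invented for illustration;
# they are NOT the survey's actual data.
from scipy.stats import pearsonr

# Hypothetical 1-5 Likert responses for ten respondents:
# degree of experiencing the economic crisis vs. acceptance of
# limiting fundamental rights.
crisis_experience = [5, 4, 5, 3, 2, 4, 5, 1, 2, 3]
rights_acceptance = [4, 4, 5, 3, 2, 3, 5, 1, 1, 3]

r, p = pearsonr(crisis_experience, rights_acceptance)
print(f"r = {r:.3f}, p = {p:.4g}")

# An r close to +1 with p < 0.05 corresponds to the "very strong"
# positive correlations the study reports.
```

Note that Pearson's r assumes roughly interval-scaled data; applying it to Likert items, as here, is a common but debated simplification.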
Limiting certain rights and freedoms, such as the free movement of persons or freedom of assembly, would be accepted to a large and very large extent by 18.07% of respondents. Acceptance of a reduction in the level of social protection that would provide citizens with a decent standard of living is found, under the same interpretive parameters, among 18.31% of respondents. Qualitative analysis of the data presented above shows that female respondents are less willing to accept restrictions/limitations of rights and freedoms due to military conflicts. Willingness is also lower among urban respondents compared to rural respondents. A higher degree of readiness is found among respondents with university education and those in the 21-35 age sample. It is worth mentioning that the study results are in line with recent research in the field, which highlights that the willingness to exercise citizens' rights and duties should be closely correlated with indicators of citizens' quality of life, seen as an imperative for achieving sustainable development at the national level [50,51]. It should be noted that the restriction/limitation of certain fundamental rights and freedoms of Romanian citizens in the event of a military crisis is carried out through administrative and judicial measures adopted by the state authorities, in accordance with the provisions of Article 15 of the European Convention on Human Rights, which grants the possibility, in exceptional circumstances, of "temporary, limited and controlled derogation from the obligation to ensure certain rights and freedoms under the Convention" [52]. Regarding the degree of trust in "the extension of the military crisis in Ukraine to the states in the vicinity of the Russian Federation", correlated with "the limitation of some fundamental rights and freedoms of citizens in Romania, such as the right to information, education, culture, health, etc."
, "the limitation of the right to work and social protection or economic freedom, as a result of the suspension of the activities of many economic entities or the relocation of part of them to the online environment", "limitation of fundamental rights and freedoms of Romanian citizens, such as the free movement of persons or freedom of assembly", and "impairment of the standard of living as a result of the reduction in the level of social protection, such as to ensure a decent standard of living for all citizens", we note that the results show strong correlations, according to Table 11. The Pearson test reveals strong correlations both for people who believe that the war in Ukraine will spread to other countries, including Romania, and who show a high degree of acceptance of the curtailment of fundamental rights, and for people who do not have a high degree of confidence in the spread of the war and who show a low degree of acceptance of the curtailment of fundamental rights. Willingness to fulfill certain fundamental duties of Romanian citizens, as provided for in the Romanian Constitution-approached from the perspective of the objective "Peace, justice and effective institutions" of the ASD [3]-in the hypothetical situation of the adoption at the national level of exceptional measures in the event of an extension of the military conflict in Ukraine, is manifested to a large and very large extent by 37.83% of respondents. Likewise, only 33.37% of the respondents would agree, to a great and very great extent, to contribute to the defense of the country through direct participation in theatres of operations. Willingness to participate in providing defense capabilities in sectors adjacent to the military domain-which can be seen as circumscribed by the "Industry, innovation, and infrastructure" objective of the 2030 Agenda for Sustainable Development [3]-is manifested to a large and very large extent by 32.34% of respondents.
Qualitative analysis of the data presented above shows that the share of male respondents is higher in expressing willingness to perform basic duties, as is that of respondents from rural areas, those with higher education, and those in the 21-35 age sample. The analyzed issue revolves around the civic engagement of citizens in community life as a goal for achieving the sustainable development of society, as also highlighted in recent studies [53]. The degree of trust in "the spread of the military crisis in Ukraine to states in the vicinity of the Russian Federation", correlated with "expressing loyalty to the country, as a fundamental duty set out in the Romanian Constitution (fulfilment of citizenship obligations)", "contributing to the defense of the country through direct participation in theatres of operations", and "contributing to the defense of the country through direct participation in ensuring defense capabilities", reveals very strong correlations, according to Table 12. The Pearson correlation test applied between the variables mentioned above reveals a very high degree of correlation both in the case of people who believe in the expansion of the conflict in Ukraine and accept having some of their constitutionally guaranteed rights diminished, and in the case of people who believe that the war in Ukraine will not go beyond its borders and show a low degree of acceptance of the limitation/diminution of some fundamental rights. --- Research Limitations Given that this study is among the first in Romania to address such a topic, it is inevitably characterized by some research limitations.
Considering that the current research focuses strictly on drawing conclusions through statistical analysis of data obtained from an online questionnaire administered to 826 adult individuals from Romania, a significant limitation is that the sample is not representative of the entire population of Romania. A second limitation is the data collection method, since only individuals with Internet access [54] had the opportunity to access and complete the questionnaire. Additionally, there is a possibility of subjective self-selection of respondents [55] and of redistribution of the questionnaire among groups of individuals with similar views on the researched subject [54]. It should be noted, firstly, that respondents' access to the Internet varies depending on their social class and, secondly, that the likelihood of respondents agreeing to participate may depend on some of the socioeconomic factors that the questionnaire itself evaluates. A third limitation of the research is the inability to clearly delineate the influences of the economic crisis and of the military crisis in Ukraine on the research objectives. The temporal overlap and interdependence of these crises can influence how citizens perceive their effects. A fourth limitation stems from the fact that the questionnaire did not include indicators related to the respondents' socioeconomic status, which could have provided insights into their knowledge and capacity to respond accurately to the questions. Given the complexity of the study, which addresses various security issues, this limitation affects the assessment of respondents' perspectives. --- Conclusions The new global/European economic crisis, which started in 2020 with the COVID-19 pandemic, has produced negative consequences, which have also been felt in Romania in terms of sustainable development.
These consequences were amplified by the security crisis caused by the "special military operation" launched by the Russian Federation in Ukraine. This study shows that, in this new security context, young Romanian citizens are very concerned about the preservation of their fundamental rights and freedoms, as stipulated in the Romanian Constitution, showing a low degree of acceptance of their restriction/limitation, even in exceptional situations. These attitudes of the younger population are natural in today's society [56], a result of educating young people in the spirit of freedom, understanding, and pragmatism in interpreting events objectively. This involves the development of critical thinking and the abandonment of certain prejudices that could lead to customary approaches to the diverse and complex issues faced by society today. However, these attitudes may impact the national sustainable development goals, as the relationship between human rights and sustainable development objectives is extensively debated in specialized circles [57,58]. The willingness to fulfill fundamental constitutional duties in the event of exceptional measures due to the escalation of the military conflict in Ukraine is a major concern for only about one-third of the respondents. Higher values are found among male respondents, those from rural areas, those with higher education, and those in the 21-35 age group. Based on these findings, we believe that it is necessary to develop a sense of national and patriotic spirit, foster devotion, and promote a culture of security in order to create a sustainable future [59,60]. It is important to educate the youth in the spirit of the sustainable development of society [61], both within the educational process and through the influence of opinion leaders, the media, and institutions in the fields of defense, public order, and national security.
The study confirms the research hypothesis, namely that current security issues negatively influence the level of acceptance of restricting/limiting fundamental rights and freedoms and the willingness to fulfill certain fundamental duties, which can have significant implications for societal sustainability. The differentiations mentioned can be explained if we accept, on the one hand, that the level of perception, awareness, and involvement in managing problematic issues at the current societal level is directly proportional to citizens' level of education and, on the other hand, that the population residing in rural areas has a unique connection to its place of origin. Considering the results of this study, we believe that it could contribute to discussions related to the development of the public policies necessary for the implementation of the National Recovery and Resilience Plan [62], specifically the "Good Governance" section, which is linked to public sector reforms, increasing judicial efficiency, and strengthening the capacity of social partners. --- Data Availability Statement: Data can be requested from the corresponding author. --- Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of "Constantin Brâncuși" University of Târgu Jiu. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Purpose: To highlight how current security issues (the economic crisis at the European/global level and the military crisis in Ukraine), seen as factors negatively influencing sustainable development at the societal level, affect the willingness of young citizens (aged 18-35) in Romania to accept restrictions/limitations on certain fundamental rights and freedoms, as well as to fulfill certain constitutional obligations in exceptional situations. It was considered opportune to conduct this study given that, since the end of World War II, the European continent has not experienced such major security issues, which combine aspects specific to military security with those specific to human security and generate negative effects on the community's efforts to promote peaceful and inclusive societies for sustainable development. Methods: The study was based on an online questionnaire administered to 826 individuals permanently residing in Romania, aged between 18 and 35. The data were collected from 1 October to 15 October 2022, at a reasonable interval after the onset of the mentioned crises, on the assumption that the opinions of the interviewed individuals regarding their negative impact on sustainable development from the perspective of internal societal life were well-formed. The methods used include statistical analysis and focused on identifying and assessing the degree of acceptance of restrictions/limitations on certain fundamental rights and freedoms, as well as the willingness to fulfill certain constitutional obligations. Additionally, empirical research on the issue was conducted in accordance with the available bibliography. Results: The study reveals the respondents' level of perception regarding the impact on the sustainable development of society from the perspectives of the economy, social well-being (41.33%), and citizen safety (53.26%), as a result of the global/European economic crisis.
The consequences of this crisis are strongly felt among the interviewed population (61.09%), leading to a decrease in their trust in the state authorities' ability to manage the situation. The causes of the global/European economic crisis are complex, with a large share of respondents (41.21%) believing that it is the result of a globally orchestrated conspiracy to reset the international order and reconfigure the poles of power, and more than 50% of respondents considering the conflict in Ukraine the main source. A percentage of 29.28% of respondents consider social solidarity at the level of the European Union (EU) feasible for overcoming the negative influences of multiple crises on domestic societal life, and 49.51% of respondents believe that Romania's EU membership is likely to contribute substantially to limiting the negative effects of the crisis in Ukraine. The restriction/limitation of certain fundamental rights and freedoms of citizens, as possible extreme measures for managing the effects of the mentioned crises at the national level, is accepted by a small percentage of respondents (15-20%), while the willingness to fulfill certain constitutional obligations is present in approximately one-third of the interviewed population. Conclusions: In the context of the new global/European economic crisis and the military crisis in Ukraine, which impact the sustainable development of society and the community's efforts to promote peaceful societies, young citizens in Romania (aged 18-35) are deeply concerned about the preservation of fundamental rights and freedoms as stipulated in the Constitution of Romania. They demonstrate a low level of acceptance of the restriction/limitation of these rights and freedoms, even in exceptional situations of an economic or military nature. The same low degree of readiness is also found with regard to the fulfillment of certain fundamental constitutional duties.
--- Introduction Education is critical to China's complex social structure. Education transcends basic academic endeavors to become a cornerstone of progress, with a history of appreciating wisdom and knowledge. China's fast economic progress and worldwide clout are intrinsically connected to its strong educational system. As a crucible of invention, education propels the country's technical growth and creates a competent workforce capable of driving industry. Moreover, education is a cornerstone for connecting many ethnic groups and teaching the moral ideals required for national progress. China has long pursued the goal of ensuring quality education for all its residents. However, behind the rising skylines and busy metropolises, one critical concern remains: the arduous plight of the rural teacher. Education in rural China involves both aspiration and adversity. Rural teachers encounter hurdles despite the government's efforts to bridge the gap between urban and rural schooling. They are caught between limited resources, geographical seclusion, and the fundamental need to impart knowledge in an ever-changing world. Their difficulties stem from the stark reality of educational inequity, with urban areas benefiting from abundant resources and contemporary facilities. Rural schools, on the other hand, confront underfunding, outmoded instructional materials, and poor infrastructure. Additionally, the cultural and social settings prevalent in rural areas complicate matters for teachers. Differences between urban and rural living can result in a cultural divide that affects rural educators' confidence and professional identity. This sense of isolation leads to feelings of inadequacy, limiting their capacity to reach their full potential as educators. Among these issues, the salary problem is particularly pressing. Rural teachers are frequently paid less than their metropolitan counterparts, despite the daily hardships they encounter.
This economic disparity raises serious concerns about the value society places on rural education and on those dedicated to it. --- Teachers' Social Status Since ancient times, China has placed high importance on education and has a long history of honoring teachers. Teachers are change agents [1]. Teachers play an important role in shaping pupils' thinking. They lead students through their academic endeavors while instilling the values, ethics, and critical thinking skills necessary for navigating an ever-changing society. Beyond the classroom, teachers are cultural stewards, handing down centuries-old traditions and ideals. Their influence extends beyond academics to foster character development and the formation of responsible people who contribute positively to society. Teachers are a conduit for cultivating dreams and realizing potential in a country that sets a high value on education. Moreover, Chinese society's urban and rural settings are polarized. Economically, the Chinese government has assisted rural communities in overcoming poverty and living a prosperous life. At the educational level, attaining educational equality is far more challenging. Rural and urban education are two distinct yet interconnected components of a country. Urban education benefits from more resources, modern technology, and a diverse range of extracurricular activities thanks to its strategic position. It introduces pupils to new cultures, ideas, and possibilities while creating a dynamic learning environment. Rural education, on the other hand, faces distinct problems such as limited resources, physical isolation, and cultural constraints. When resources are comparable, the quality of schooling in rural and urban areas remains the same. The role of rural teachers is critical to achieving educational equality in China.
They must do everything possible to give rural students access to quality education in resource-constrained and geographically remote locations, contributing to a more inclusive society. Therefore, rural education has far-reaching ramifications for societal progress and equitable development. Rural teachers are both educators and catalysts for positive change in rural regions. Their contributions foster educational, social, and economic advancement, shaping the trajectory of individual lives as well as that of the country. --- Reasons --- Reason 1: Few Educational Resources Rural teachers are vital for teaching and for promoting equality in education between urban and rural areas. However, the social context leaves rural children in need of additional guidance and support. First, the social milieu in which rural teachers live adds to their difficulties. It is commonly recognized that the social climate in rural places is significantly more difficult than in urban areas. One factor is the general populace's low literacy rate. Rural teachers face distinct cultural dynamics, limited social resources, and traditional ways of thinking. Consider Liangshan Prefecture in Sichuan, China. For a long time, Liangshan Prefecture has been isolated and has had few educational resources. Moreover, the average length of schooling among the general population is less than six years. The rural young adult labor force has an illiteracy and semi-illiteracy rate as high as 23.48%, and a considerable section of the population does not know the Chinese language or Chinese characters [2]. People in Liangshan Prefecture, which has a low literacy rate, require assistance in providing a better education for their children. Brown and Park show that less educated parents are less likely to educate their children [3]. Less educated parents may value education less, may pass on weaker academic performance to their children, or may lack the means to supply extra learning inputs.
They may then assign responsibility for their children's education to rural teachers. However, low literacy rates make teaching increasingly challenging for rural teachers, because instruction can stall when children's literacy is low. --- Reason 2: Lack of Transportation Second, the inadequate transportation network is a component of the social environment. Vasconcellos notes that rural areas in developing nations confront severe transportation constraints due to physical isolation, social and economic conditions, and usually limited transportation availability [4]. Yi et al. indicate that teachers are willing to work near their homes [5]. Lack of transportation may increase teacher turnover in remote locations. This is because a lack of dependable transportation choices may impede teachers' ability to travel to and from school, resulting in punctuality and attendance issues. As a result, the teaching and learning process may be disrupted, affecting students' educational experiences. Furthermore, insufficient mobility may limit teachers' access to professional development, workshops, and training sessions held in more urban locations. A lack of exposure to new teaching methods and technological innovations may impede their professional development and the quality of education they provide. This is because many rural schools are in outlying areas far from urban hubs. Teachers may feel lonely and alienated from the larger educational community due to weak transportation networks. --- Dilemma --- Dilemma 1: Unavailability of Teaching Resources Low literacy rates and inadequate transportation networks contribute to the various quandaries that rural teachers face in their employment and daily lives. The public can better grasp the multifaceted challenges that rural teachers encounter by understanding the interplay between these basic causes and the consequent issues. 
One expression of the challenge is rural teachers' limited access to resources. Educational materials significantly impact education quality and student learning experiences. A region's lack of educational resources is frequently linked to government policies and resource distribution. The government has a significant impact on the educational environment, and its decisions directly affect the availability and allocation of resources. China's fiscal system has struggled to generate adequate revenue, resulting in severe dispersion of fiscal obligations and a financial crisis for poor county governments [3]. When rural governments face financial difficulties, there are significant disparities in public investment in education and teacher quality. This is because state education financing is vital to rural towns. Adequate funding can help remote schools address deficiencies in educational facilities, teacher training, and curriculum development. Insufficient funding can perpetuate inequality. According to China's Ministry of Education, state education spending should be "no less than 4% of GDP" [6]. However, governmental spending on education in 2010 was only 3.66% of GDP [6]. Inadequate government investment directly impacts the quality and accessibility of rural education. Inadequate funding directly impacts the availability of teaching resources such as textbooks, teaching aids, classroom technology, and learning materials. Teachers in rural locations are frequently compelled to use obsolete or insufficient materials, which can reduce their efficacy in the classroom. Furthermore, fiscal changes have decreased redistributive budget transfers while increasing inequality [3]. The quality of rural education, in turn, will deteriorate over time. Second, rural teachers lack access to libraries as an instructional resource. Libraries are essential in education because they are lively hubs of learning, exploration, and community engagement. 
Every school in a city has a library, which offers pupils a quiet room for after-school study and is outfitted with sophisticated network equipment for accessing materials. Conversely, a library may be a luxury for teachers and pupils in rural locations because libraries and bookstores are often absent or inaccessible in remote rural areas [5]. Libraries are valuable resource centers for teachers, providing a wide range of textbooks, reference materials, and digital tools to enhance classroom education. Libraries are more than just storage facilities; they enable teachers to find inspiration, access supplemental learning materials, and experiment with new teaching approaches. In addition to materials, libraries function as professional development platforms, allowing teachers to share ideas, review curricula, and collaborate to improve teaching techniques. Furthermore, the scarcity of libraries in rural areas restricts teacher development and denies pupils the opportunity to read. Studies show that students in impoverished nations frequently have limited access to reading resources [5]. Fewer than 10% of rural Chinese primary school children report that their parents have ever bought them books, and approximately 70% of pupils have no more than ten books at home [5]. Rural Chinese language teachers face unique challenges when teaching because rural pupils have limited access to reading resources. Chinese language learning depends on the accumulation of reading. For example, there is no library in one mountain village primary school in Guizhou, and the language teacher must trek a long distance into town to obtain extracurricular reading for the children. Such long-standing frustrations have contributed to a high teacher turnover rate in rural areas. --- Dilemma 2: Low Wages Teachers' wages are an essential factor to consider. 
Salary discrepancies in the education sector substantially impact rural schoolteacher recruitment, retention, and overall quality. One sign of distress is poor wages for rural teachers. The rural teacher compensation conundrum is a serious issue that directly impacts educational quality and the sustainability of learning environments in remote places. An argues that teaching is a highly specialized profession [7]. Furthermore, the level of teachers' wages and income influences whether good job applicants are attracted to teaching positions and whether existing teachers stay in their positions or migrate to schools with greater income levels [7]. In other words, wages for educators in primary and secondary schools are positively related to teaching stability. In China, educators in rural areas are paid far less than those in urban ones. This wage inequality reflects economic disparities and continues the cycle of unequal access to good education. Society frequently regards rural teachers as selfless and outstanding. They are courageous enough to forgo the city's high-quality resources to labor in the countryside to reduce China's educational disparities. On the other hand, rural teachers are working people who need to be compensated for their daily expenditures. Despite their dedication to shaping the next generation's growth, rural teachers frequently require financial assistance. They are not only rural educators, but they also act as surrogate parents for rural orphans. They struggle to balance their social obligations with their limited wages, which frequently leads to burnout and high turnover. Before 2001, there was little difference in average compensation levels between teachers and other public agencies. There was also a modest compensation disparity between urban and rural teachers [7]. However, the true compensation disparity results from rural primary and secondary school teachers' salary arrears. 
This has a major impact on the stability of rural teachers and leads to a teacher shortage in rural schools [7]. The salary disparity directly impacts the overall quality of education in rural areas. It makes attracting and retaining talented educators difficult, resulting in higher turnover rates. From September 2012 to February 2016, for example, a total of 9,941 teachers in Yunnan province resigned to join the government or other organizations, the vast majority of whom were rural teachers [8]. Rural teachers are crucial to providing pupils with a positive learning experience. Rural schools face a teacher shortage as experienced teachers depart for better prospects in cities or other sectors. This scarcity weakens educational continuity and exacerbates disparities between urban and rural areas. Rural teachers' low salaries are merely a surface issue; the underlying problem may be that rural education is undervalued. Low pay fosters a cycle of underinvestment in schools, harming teachers and students who deserve the same level of education as their urban counterparts. In rural communities, persistent unequal access to education exacerbates existing social and economic imbalances and impedes efforts to break the cycle of poverty. --- Dilemma 3: Complex Shift in Roles Low wages are the primary cause of rural teachers' high turnover, but concealed behind the wages is a social phenomenon specific to rural China. Many children are left behind in classrooms in rural China's migrant labor-exporting towns. This poses a significant issue at the school level, as teachers must cope with day-to-day classroom management, including many left-behind students who are perceived as undisciplined and disruptive [9]. The enormous number of left-behind children creates unique challenges for teachers. Rural teachers believe their muddled sense of identity contributes significantly to their challenges. 
Left-behind children are a direct result of China's rapid economic development and a significant social phenomenon generated by farmers' relocation to cities in the context of social transformation. Children under the age of 16 must be cared for by others when both or one of their parents leave to work or do business [10]. As economic opportunities entice parents to the city, more children are left behind in the care of relatives or themselves [10]. While this separation is meant to help families, it unintentionally creates several educational challenges for these children and their teachers. Because the parents of left-behind children are absent, most of their caretakers are elderly grandparents. The absence of parents harms the emotional well-being and academic achievement of left-behind children. As a result, teachers become responsible for caring for these children. Rural teachers, notably beginner teachers, must fulfill numerous roles, including educator, surrogate parent, and disciplinarian, to ensure that these pupils benefit from their educational experience [11]. According to a Chinese saying, a teacher for a day is a father for a lifetime. This aphorism imposes an unyielding load on educators. Rural teachers may be more than just knowledge dispensers or caregivers for at-risk youngsters. When they hold many identities, the lines between their personal and professional lives can become blurred. This complex shift in roles may result in a desire for greater clarity in their professional identity, as their obligations extend beyond the traditional scope of teaching. Also, teaching is regarded as one of the more emotionally taxing jobs, necessitating emotional labor [12]. Emotional labor requires teachers to effectively regulate their emotions in the classroom. However, left-behind children are emotionally more fragile because they have been separated from their parents for an extended period. They frequently react emotionally in the classroom over trivial matters. 
Although teachers may perceive this as slowing classroom progress, for humane reasons, this is when rural teachers may need to play the role of parents and console these children. Because of the need to switch between the two identities for extended periods, this sense of identity ambiguity makes it difficult to strike the correct balance between academic ambitions and community expectations. It may contribute to the trend of teachers leaving the profession. This is because teachers must attend to the minor details in the lives of left-behind students while adhering to educational standards. Teachers may face emotional and mental strain as a result. Furthermore, the long-term blurring of the "parent" and "teacher" roles may impede their ability to advocate for their needs, pursue professional development opportunities, or address personal growth. --- Solutions While the causes indicate the complexities of the rural teacher problem, various initiatives by the appropriate authorities are also needed to address the situation of rural teachers. The educational landscape in rural communities can be improved through focused solutions and joint efforts by key sectors to build a brighter future for teachers and students. First, the government should raise teacher compensation in rural areas. Chronically low incomes cannot cover teachers' fundamental living needs in rural locations with poor living conditions. The first article in the government's Rural Teacher Support Program discusses boosting wage subsidies for rural teachers and improving rural teachers' living conditions [13]. Raising the pay of rural teachers may also restrict teacher migration. The leading cause of high teacher mobility in rural areas is low pay. According to studies, teachers respond to salary, and better wages reduce the risk of teacher mobility [5]. 
Once higher pay is provided, rural teachers are more likely to stay in their posts, resulting in better educational continuity and a more stable learning environment for children. Second, rural school curriculum reform should be strengthened because curriculum reform directly impacts teachers' instructional methods and educational quality. Because China has significant disparities in schooling between regions, it is critical that the current pilot of the new curriculum consider the actual situation of rural schools in the curriculum's development [14]. Rural towns frequently have distinct cultural, economic, and environmental features that influence children's learning experiences. If teachers apply an urban curriculum to rural children, the effects may be disastrous. Rural teachers can strengthen the connection between learning and community by aligning the curriculum with the local context, making education more relevant and applicable. Finally, rural schools should provide psychosocial assistance to rural teachers. The identity ambiguity that the issue of left-behind children brings to teachers can cause self-doubt and add to their daily workload. Parents are their children's first teachers, and when parents are absent for an extended time, no teacher can fully resolve the children's psychological difficulties. Teaching and parenting raise rather than lessen the everyday workload of rural teachers. According to surveys, non-educational chores account for 33.49% of a teacher's daily working hours [8]. Due to a lack of home education for children, these responsibilities are ultimately delegated to teachers. The long-term sense of identity uncertainty and high workload substantially impact teachers' mental health, even leading to depression. It is critical to equip teachers with psychosocial assistance. The isolation of their employment, along with the weighty duties they bear, frequently leads to increasing stress and burnout. 
Psychosocial support improves teachers' well-being and directly impacts their teaching performance by fostering stronger teacher-student interactions, enhancing classroom dynamics, and fostering a more positive learning environment. --- Conclusion In the end, the issues faced by rural Chinese teachers show a multidimensional socioeconomic problem that reaches far beyond classroom challenges. These educators are the lifeblood of education in distant regions, where they must contend with a complicated interplay of isolation, limited resources, inadequate infrastructure, and societal views. As a result, their well-being, professional development, and educational quality are frequently threatened. Nonetheless, they have shown tenacity in the face of adversity, as evidenced by their undying dedication to their students and rural education. However, solutions to these quandaries are still feasible. Government help in raising rural teachers' wages, government commitment to revising rural curricula, and psychosocial support for teachers from all societal sectors are critical components of creating an enabling environment that empowers rural teachers. By emphasizing the importance of the teacher's role, the standing of rural teachers can be elevated, and a new generation of educators can be inspired to enter the field, bringing new perspectives and enthusiasm to the task at hand. Furthermore, the rural Chinese educational environment is intricately related to the greater picture of China's development and prosperity. Closing the educational divide between urban and rural communities is a moral necessity and a critical driver of equality, economic prosperity, and social cohesion. When rural instructors are given additional development tools and resources, rural kids can receive a quality education, helping to break the cycle of rural poverty and contribute to the prosperity of their region. 
The difficulties that China's rural teachers face in achieving fairness in urban and rural education must be addressed. Their condition is a microcosm of larger socioeconomic concerns that necessitate the collaboration of governments, educational institutions, communities, and individuals. Finally, the situation of China's rural teachers can be improved. Governments can pave the path for a more inclusive, empowered, and successful future for rural educators and the children they serve by working together with collective determination and a commitment to educational justice.
The difficulties that rural teachers face in China are complicated and varied. Not only do these teachers contend with minimal educational resources, but they also face low pay and a sense of isolation. Underlying these quandaries, however, is a great deal of societal responsibility. This study uses the literature review method to present the issues that teachers confront and to propose solutions. First, the government can decrease turnover and address teachers' fundamental needs by boosting their wages. Second, the government can deepen rural curriculum reforms to allow teachers to design content better and increase student participation. Finally, greater psychological support services from all sectors of society can help improve rural teachers' productivity and mental health. In conclusion, by stressing rural education and giving essential support to rural teachers, the government can increase the quality of rural education and the development of rural areas. More importantly, boosting rural education is a critical step toward achieving equality in education between urban and rural areas.
Introduction COVID-19 vaccines are highly effective at preventing infection and reducing the likelihood of severe illness and mortality, yet a considerable number of people in the USA are not fully vaccinated [1,2]. Latinx people are at high risk for COVID-19 infection, hospitalization, and death, yet during the early roll-out of the vaccines uptake was more than 10% lower among Latinx people than among their non-Latinx White and Black counterparts. An earlier review found that this inequity in COVID-19 vaccine uptake may be related to a combination of vaccine hesitancy and lack of easy access to the vaccines [3]. That review on vaccine hesitancy among Latinx people noted the effects of systemic racism, including government and medical mistrust expressed as concerns about the safety and side effects of the vaccines. Also prominent in that review were concerns about access to the vaccine, including language barriers, affordability, and apprehension about having to take time off from work to get the vaccine or recover from possible side effects [3]. Interventions that address Latinx community-specific COVID-19 vaccine concerns and access are crucial to boost uptake and protect Latinx people from COVID-19. This review sought to identify strategies and efforts around community outreach and engagement that were made early after introduction of the vaccine. The efforts detailed provide a range of actions that can be implemented and evaluated now. An inventory of structures and activities can then become systematized and codified in preparedness plans for preemptive rapid response early in programmatic roll-out, to swiftly achieve the goal of optimal protection for Latinx communities in current and future bio-events. --- Methods Because this report was being generated within the first 9 months of COVID-19 vaccine rollout, it was likely that this time period was insufficient to report on more than plans and descriptions of programs. 
The decision was made to conduct a rapid review [4] and to widen the traditional academically oriented search engines to include access to gray literature. A literature search was conducted in July and August 2021 using the following search engines: Google and PubMed. Search terms included three parts. The first part was "COVID-19 vaccine." The second part was "Hispanic" OR "Latino." The third part was "community" OR "access" OR "collaboration" OR "outreach." For articles to be included, they must have been published between December 2020 and September 2021, which aligned with the authorization and distribution of COVID-19 vaccines in the USA. Reports needed to be original reports, not a report of another article. For the purposes of this report, the term Latino/a/x was used except when names of organizations, titles, or tables of reports used Hispanic, in which case that terminology was retained. --- Inclusion/Exclusion Criteria Publications were limited to the USA, written in English, and published during and after December 2020. The rationale for the time frame was that reports before this time examined population knowledge, attitudes, and intentions for a hypothetical COVID-19 vaccine, whereas literature available since December 2020 was more likely to have the context of imminent and subsequent availability of a COVID-19 vaccine. Abstracts alone were excluded. Media stories or commentaries that only reported content from an otherwise published report without any additional analyses were excluded in favor of the original referenced material. --- Data Abstraction Each primary search term was assigned to an individual team member, with a second member replicating the search. Data abstraction included sourcing of data: primary or secondary. Line listings captured the following characteristics: author, title, journal or website, date of publication, and literature type. 
In addition, to address all the outcomes in this review, we created columns with indicators for availability of information on equity, interventions, communication strategies, logistics, vaccine hesitancy, and vaccine uptake, respectively. Categories were created to denote the specific group discussed in the article based on race, ethnicity, and immigration status. Study methods were recorded: sample strategy and final size, study design, data collection, and analytic methods. Key findings were recorded. Validation for both PubMed and Google searches was performed for select search terms to examine relevance, significance, and validity for study inclusion. --- Analyses and Reporting Study findings in accordance with the outcomes were reported by DD, JD, LT, SAM, and DV using narratives and tables. Data were grouped by their outcomes and study populations. We include a disclaimer that we are using categories from the survey tools. We conducted a narrative synthesis of the data to identify common findings. --- Results A total of 150 articles found on Google were screened. Of those, 33 were included. Six of the included articles were either recommendations or expert opinion, and the remaining articles were either descriptions of interventions to increase vaccine uptake or evaluations of programs to increase vaccine uptake among the Latinx community. A total of 54 papers were found on PubMed. Of these, one was included. Articles were removed from both Google and PubMed primarily because they did not describe or provide information on vaccine interventions specific to Hispanic/Latinx communities, for example, studies that reported ethnicity as a factor, studies on hesitancy, news articles that had one line on Latinx communities, government websites that say something along the lines of "click here for Spanish translation", or re-postings of the same article on different platforms. 
There were also many irrelevant articles/studies that were not remotely on-topic but appeared because of certain trigger words. Table 1 shows an outline of partnership strategies for building trusted communications and access around COVID-19 vaccines among Latinx communities. These include differing levels of participation, mechanisms for information dissemination, and access through pop-up events. Expert opinion and cases are described below. --- Trusted Communication --- Expert Opinions on Best Practices for Communication Around Vaccine Promotion General recommendations by experts describe basic principles and detailed nuances to increase COVID-19 vaccine uptake among Latinx communities. Communication should highlight critical details of the vaccine roll-out process; it must be clear that the vaccine is free, does not require insurance, and what the identification requirements are for the local area [5,6]. Alternative options for identification are recommended, including requiring no ID or accepting alternative documents, as many Latinx people may not have state-approved identification. Also, immigration policies impact many Latinx families; experts stress the importance of conveying that vaccine status will not have an impact on immigration status or public charge, which is essential to encourage vaccinations for concerned Latinx individuals. Moreover, the content of messaging should touch on the safety and efficacy of the vaccines [5]. All content should be tested with community members regularly to ensure relevance and clarity [7]. Vaccine promotion must give equal attention to the delivery of these pro-vaccine messages. Consistently and unsurprisingly, experts emphasize the need for Spanish-language materials for Latinx communities [5][6][7]. These would ideally be sourced by Latinx community members or advocates who can contextualize promotion for the local community, delivering information on vaccination sites, requirements, and other necessary details [5][6][7]. 
Communication channels such as radio and social media are recommended dissemination methods [5,7]. --- Description of Interventions Used to Promote Vaccine Uptake The breadth of COVID-19 vaccine misinformation prompted many government officials, healthcare professionals, and community leaders to respond with outreach and educational efforts. These efforts, by and large, have neglected Latinos who speak primarily Spanish, work in shift-work industries, and might not have established sources of trusted health information. In response to the dearth of Latinx-focused responses, concerned community members and organizational stakeholders created a diverse array of mechanisms to communicate COVID-19 information. All the interventions in this review had a Spanish-language component. One group included Mayan Indigenous languages for Latinx people who may have Mayan ancestry [8]. Regardless of the language, it is important for the level of Spanish to remain colloquial; translating materials with local Latinx health workers can provide this high degree of translation quality [9,10]. Furthermore, an integral component of all interventions in this section of the review was the partnership with local Latinx community leaders. They understand the nuances of vaccine hesitancy and access issues within Latinx communities. Some interventions included widely known persons, such as celebrities, online creators, soccer players, and media personalities [11,12]. Their followings are easily leveraged for any communication effort. Physicians were another critical component of many of these interventions, as they are seen as a reliable source of information [13][14][15][16][17]. Programs that build on bilingual outreach explicitly from physicians through social media have been developed. 
For example, the Kaiser Family Foundation supported development of THE CONVERSATION / LA CONVERSACIÓN to address information needs about the COVID-19 vaccines in the Latinx community with new videos featuring doctors, nurses, and promotoras in English and Spanish [18]. --- Marketing and Outreach Campaigns Specific states led their own efforts, such as the "Vacunate, Es Segura…" campaign, based on a partnership between the Idaho Commission on Hispanic Affairs, Community Council of Idaho, Consulate of Mexico in Boise, and the Idaho Hispanic Chamber of Commerce [15]. This program brought clear vaccine access information directly into the local context for Latinx communities in Idaho. While the campaign communicated overall COVID-19 vaccine benefits, it also amplified information on accessing the vaccine, including appointment bookings. While vaccine hesitancy continues to demotivate community members, other issues regarding vaccine development and eligibility requirements also concern Latinx people. The Californian partnership between the Kaiser Family Foundation, UnidosUS, and 10 Latinx health professionals aimed to clarify details on citizenship, health insurance, and vaccine safety through video promotion efforts on social media [16]. In Wisconsin, another Midwestern state, Forward Latino and Latinx physicians united under the "Por Mi Familia" campaign to address vaccine hesitancy through a multimedia campaign. Funded with $50,000, the social media presence was supplemented by television ads and print media. Multiple campaigns with the Colorado state government enlisted businesses that largely employ Latinx workers to directly provide educational materials, promote awareness ads, engage people for testimonials, and connect workers with community workers to ask questions directly [19]. 
Similar efforts were seen in Kansas under the "Por Los Nuestros" vaccine campaign, which included the Kansas Hispanic and Latino American Affairs Commission and the Kansas Department of Health and Environment [12]. Smaller, locally led efforts in Ulster County, NY, illustrated the ways in which small counties can still provide strong outreach [20]. Healthcare providers, community stakeholders, and even religious leaders were united to provide Spanish-language education on COVID-19 vaccine information and access. Between the multiple agencies, Latinx community members could also access food assistance to address an important need and provide more opportunities to discuss the vaccine. This was an off-shoot of a Spanish-language Facebook Live event moderated by a popular Spanish television personality. --- Social Media Campaigns The objectives of several social media campaigns centered on building trust, promoting vaccine information, and using the Spanish language [9,11,14,15,17]. Community leaders and healthcare professionals were given the opportunity to dispel myths with medical facts through trusted channels and to a broad audience. The slogans for these campaigns include "#VacunateYa" [14], "Vacunate, Es Segura. Es Gratis. Funciona" [15], "Por Mi Familia" [13], "La Conversación" [16], and "Por Los Nuestros" [12]. Local organizations led most of these campaigns. In Georgia, the Latino Community Fund has leveraged testimonials from community educators to invite questions about the vaccine [9]. They profiled Spanish speakers who could share their own hesitancies while also providing evidence that speaks to the benefits of the vaccines. Other campaigns were more targeted: physicians and Latinx-serving organizations in Los Angeles and Orange County launched #VacunateYa to address the need for local vaccine materials in Spanish [14]. 
In-person distribution of flyers supported the campaign, focusing on farmworkers in the area who may not encounter virtual COVID-19 vaccine promotion. Support for vaccine appointment bookings and eligibility clarification can occur on the spot when organizations invest in physical promotional events that supplement social media promotion. The organizers of #VacunateYa later launched an additional campaign calling out Facebook and advocating for improved monitoring of misinformation in Spanish. While there is no clear connection between that movement and a change in Facebook's priorities, the Centers for Disease Control and Prevention created "Mi Chat Sobre Vacunas COVID," a WhatsApp chat that allowed users to access information on the vaccine and nearby vaccination sites, transportation options, and answers to common questions [21]. Additionally, in El Paso, Texas, a local effort provided additional resources through the El Paso health department's social media campaign [10]. The Director and 40 other public health officials circulated vaccine information in collaboration with community educators and physicians. They also provided extra support for farmworkers, explaining how to register for appointments and how to create appointments themselves. Dubbed the COVID-19 Education Task Force, their site engaged more than 15,000 people in April and has been in contact with over 40,000 people. Indeed, government agencies and public health departments were major influencers in the creation and promotion of some social media campaigns. Yet sustaining these efforts requires considerable community involvement and ongoing engagement. The joint resourcing was strengthened by greater personnel capacity and access to communication channels. For instance, a national campaign between UnidosUS, the U.S.
Department of Health and Human Services, and several public and private organizations aimed to increase vaccine trust and intention through multilevel educational outreach [11]. Their approach used social media as a major tool to promote other COVID-19 educational activities, including neighborhood canvassing, online events, and distribution of materials. By partnering with local leaders, local and state governments coordinated a multi-state mobile educational tour to engage with Latinx communities. To date, outcomes such as extent of reach and trajectory of vaccine uptake have not been reported. --- Radio Stations Radio stations provide an alternative method of disseminating information among Latinx communities [5, 8, 11, 13]. Spanish-language radio stations have a reliable base of Latinx listeners, which inspired Florida-based WeCount! to create short public service announcements [8]. These advertisements can be added intermittently to different radio segments throughout the day. They also promoted workshops and organizations that could answer common questions. In Southern Washington, the Benton Franklin Health District invested $30,000 in a radio campaign promoting COVID-19 vaccine uptake in Spanish [5]. The planning and promotion of the radio programming included the mayor of Pasco and other trusted Latinx community leaders. They expected that the inclusion of well-trusted persons would improve the credibility and trustworthiness of the information for Latinx community members. --- Phone Lines Online booking has been a barrier for many Latinx people, as most vaccine registrations, eligibility details, and other communications are primarily available on websites. Telephone hotlines are easily accessible to community members who face internet access issues or have low familiarity with the internet. The state of Colorado created a round-the-clock hotline specifically for the vaccine, staffed by call center workers who spoke multiple languages [19].
Moreover, a reverend in St. Paul, Minnesota, worked with WellShare to create a local hotline to support vaccination appointment bookings and transportation to vaccination sites [22]. The significance of religious figures is demonstrated by Latinx callers who have faith-based questions about the vaccine and are looking for spiritual support. The Maricopa County Department of Public Health provided evidence of the success of phone lines in increasing vaccine uptake [23]. They added an option to speak to a Spanish-speaking worker on their 2-1-1 COVID-19 information line. Without any advertising, the program fielded 1,160 calls and booked 648 vaccine appointments in 10 days. Over half of these calls were specifically used to assist in booking appointments, while the rest allowed Latinx callers to ask questions and gather information on vaccination logistics. --- Promotor/Promotora The promotor/promotora program is well established in Latinx communities; it refers to community workers serving as the primary messengers of health information. Anyone can be a promotora, as the role is loosely defined by each organization. It commonly describes a Spanish speaker who is well known in the local Latinx community and can provide accurate, trusted health information. The program has been used to provide accurate COVID-19 vaccine safety information and logistical support in Arizona, Minnesota, Colorado, Maryland, California, and North Carolina [19, 22-25]. Key partnerships between Latinx-serving organizations, public health departments, and other government agencies are critical, as they combine resources, personnel, and information to strengthen outreach efforts [22, 25]. The use of promotors allowed COVID-19 information to reach communities in ways that virtual efforts cannot.
For instance, CASA health promotors in the suburbs of Maryland were able to communicate in person with Latinx community members at nearby locations, including outdoor churches, malls, and farmers' markets, even while social distancing was enforced [26]. In North Carolina, La Semilla leveraged its audience as a faith-based organization, in partnership with Duke Health, to book 500 first-dose vaccine appointments in 3 hours [25]. They were also able to provide food to community members while educating them about individuals' privacy when receiving a vaccine. --- Communication Resources Responding to the health sector's call for support from community agencies, many Latinx-focused organizations created bilingual and culturally relevant resources, videos, toolkits, email templates, event ideas, and lists of support services [24, 27, 28]. These could be used by other government agencies, businesses, public health units, and all other stakeholders. --- Access --- Recommendations and Expert Opinion "Pop-up clinics" are non-traditional clinics with modified locations, hours of operation, recruitment strategies, personnel, and other characteristics. Advocates prioritize the setting of pop-up clinics and suggest that they should be located near Latinx neighborhoods, workplaces, or other locations community members commonly visit [5, 29]. Indeed, it is suggested that people should not need a car to attend vaccination events. Familiar places foster trust, which is especially important given concerns about vaccine safety, ICE presence, and immigration refusals [5, 30]. Policies and logistical details of vaccine clinics may also be barriers to vaccine uptake for many Latinx peoples. Pop-up initiatives should be available in the evenings and on weekends. Other best practices include limited registration requirements, avoiding online-only communication, and training staff to provide extra support to Latinx clients [5, 25].
For instance, some Latinx people may prefer to write their names on paper or to read the laptop screen to clarify spelling [25]. --- Intervention Details Workplaces, shopping malls, farmers' markets, and nearby grocery stores were popular venues for vaccinating Latinx community members [26, 31, 32]. To attract Latinx workers, processing companies partnered with health organizations in Washington State [5]. Similar clinics in Washington State were open into the evening and on weekends [5]. In California, grassroots organizations focused on farmworkers and undocumented immigrants, bringing mobile clinics to agricultural sites for Latinx staff [33]. The Sacramento Latinx community was able to receive vaccines in the parking lot of the Consulate of Mexico [34]. Other stakeholders designed more formal methods that integrated Latinx-serving community organizations as liaisons for healthcare centers. Two Maryland counties reserved slots at a vaccination clinic specifically for CASA and the Latino Health Initiative. Furthermore, a privately and publicly funded vaccine clinic in San Francisco was set up near a major transportation hub, with a maximum capacity of 120 vaccinations per day [35]. Local organizations conducted outreach and promotion to Latinx communities. Some pop-up clinics included creative elements that specifically addressed common concerns or barriers in Latinx communities. In response to widespread fear of ICE at health spaces, Texan organizers modified their pop-up event by hiring plainclothes private security to limit anxieties related to law enforcement [33]. Recruitment and promotion strategies were also particularly innovative. A live music event and a Spanish-language vaccine-themed art exhibit in California were designed to attract Latinx community members [36].
Unidos leveraged a soccer tournament with food stands, held after regular Sunday church service, to attract people to receive vaccines in Philadelphia [30]. Indeed, church services were commonly used to promote vaccines within Latinx communities [37]. Some interventions were able to provide data relevant to vaccine uptake; these interventions centered on a change of setting to increase access for Latinx communities. For instance, Nassau Woods is a small Georgia community of 300 families where multiple organizations partnered to deliver vaccines in the neighborhood [38]. At least 24 first-dose vaccinations were confirmed at the time of reporting. In Chattanooga, Tennessee, the Hamilton County Health Department and a local grocery store, La Carniceria, created a pop-up vaccine clinic at the store entrance [31]; 33% of the local Latinx community were able to access this initiative, with a total of 45 vaccines administered in 1 day. Additionally, a Novant Health team was stationed at Compare Foods grocery stores in Charlotte, North Carolina [32]; they managed to vaccinate 101 Latinx individuals through walk-up visits. In San Francisco, in the most comprehensive report to date, a community-academic-public health partnership, "Unidos en Salud," implemented a multi-component, "Motivate, Vaccinate, and Activate" community-based strategy addressing barriers to COVID-19 vaccination for the Latinx population. To summarize, the prototype outdoor "neighborhood" vaccination program was located in a central commercial and transport hub in the Mission District of San Francisco during a 16-week period from February 1, 2021, to May 19, 2021. Programmatic data, citywide COVID-19 surveillance data, and a survey conducted between May 2, 2021, and May 19, 2021, among 997 vaccinated clients ≥ 16 years old were used in the evaluation. There were 20,792 COVID-19 vaccinations administered at the neighborhood site during the 16-week evaluation period.
Among vaccine recipients, 70.5% were Latinx, 76% had an annual household income of less than $50,000, 60% were first-generation immigrants, and 62% did not have access to a primary care provider. The most frequently reported reasons for choosing vaccination at the site were its neighborhood location, easy and convenient scheduling, and recommendation by someone they trusted; approximately 99% reported having an overall positive experience, regardless of ethnicity. Notably, 58.3% of clients reported that they were able to get vaccinated earlier because of the neighborhood vaccination site, 98.4% of clients completed both vaccine doses, and 90.7% said that they were more likely to recommend COVID-19 vaccination to family and friends after their experience; these findings did not substantially differ according to ethnicity. In addition, 40.3% of vaccinated clients said they still knew at least one unvaccinated person. Among clients who received both vaccine doses, 91.0% said that after their vaccination experience they had personally reached out to at least one unvaccinated person they knew to recommend getting vaccinated; 83.0% of clients reported that one or more friends and/or family members got vaccinated as a result of their outreach, including 18.9% who reported that 6 or more persons got vaccinated as a result of their influence. In summary, this multi-component, "Motivate, Vaccinate, and Activate" community-based strategy addressing barriers to COVID-19 vaccination for the San Francisco Mission District's Latinx population reached the intended population, and vaccinated individuals served as ambassadors recruiting friends and family members to get vaccinated.
--- Discussion While COVID-19 vaccine uptake among Latinx persons has drifted upwards since the first 8 months [1, 40], this was viewed as related to the introduction and spread of the Delta variant, whose burden of infections, hospitalizations, and death was, again, borne differentially by Latinx communities [40, 41]. In May 2021, Latinx persons were the most eager among all racial/ethnic groups to get vaccinated, yet their rates of uptake lagged behind those of other groups. This discrepancy highlights a fundamental issue for Latinx communities, namely, delayed access to the vaccine. Interventions that address communication and improve access to COVID-19 vaccines are a major part of increasing vaccine uptake and protecting Latinx communities from COVID-19. A variety of strategies have been summarized here to build culturally tailored credibility and convenience for the vaccine while offsetting complacency. This work is delivered through partnerships between communities, academia, and local health departments: outreach education via town halls, promotoras, and social media, and convenient vaccine access through pop-up sites at workplaces and social gatherings. What might be suggested from these mostly descriptive reports is that efforts to address delayed COVID-19 vaccine uptake appear to be reactive, becoming operational only after vaccine becomes locally available. Moreover, the interventions in many localities come across as a collection of separate individual efforts rather than coordinated and comprehensive programming. Work could be done during a vaccine's development phase to prepare the community, establish the infrastructure, and be ready to deliver prior to vaccine distribution, addressing issues of community concern and access. Delays in the development and implementation of these strategies endanger lives. This review is not without limitations.
The rapid format captures only what was reported within the first 8 months of vaccine availability, and detailed peer-reviewed information was limited. Program evaluation was uncommon. The advantage of the rapid review is that it retrieves and focuses information in real time on what was being considered, organized, and implemented early. Limiting the Google search to the top 50 articles per search term may have been overly selective, missing important views and examples. The selected search terms also limited the scope of the review to community engagement strategies; other themes, such as structural impediments, were not included. Health equity is an essential consideration when building vaccination programs, and a wider review is warranted. Another limitation is that the number and nature of the reports precluded differentiated descriptions by Latinx subgroup or region; future work will need to address this. In summary, this report underscores the importance of constructing a coordinated, comprehensive strategy to produce, maintain, and sustain health equity in Latinx communities. Some of the points raised here are critical now for COVID-19 vaccine uptake, but more importantly they point toward preparation for, and response to, future health crises. --- Author Contribution All authors contributed to drafting and/or editing the manuscript. All authors have approved the final version. --- Conflict of Interest The authors declare competing interests. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Latinx people in the USA have had a high burden of COVID-19 cases, hospitalizations, and deaths, yet rates of COVID-19 vaccine uptake among Latinx individuals were lower than among other demographic groups. Effective strategies to promote vaccine uptake in Latinx communities are needed. We conducted a rapid review of information available between December 2020 and August 2021. Our search strategy used PubMed, Google, and print media with a prescribed set of definitions and search terms, for two reasons: there were limited peer-reviewed studies during the early period of roll-out, and real-time perspectives were crucially needed. Analyses included expert opinion and descriptions of project implementation and outcomes. We found that approaches varied. An integral component of all interventions was the use of local Latinx community leaders, who could understand the nuances of vaccine hesitancy, access issues, and structural inequities experienced by Latinx communities. The mechanisms for messaging included the use of social media, radio, and promotora outreach workers to disseminate information about COVID-19 vaccines and counter misinformation. Phone hotlines for scheduling were also reported. Promoting access involved pop-up clinics at shopping malls, farmers' markets, and nearby grocery stores, which were popular venues for vaccinating Latinx community members. Other practices included limited registration requirements, avoiding online-only communication, and training staff to provide specialized support to Latinx clients. This rapid review provides a basis for developing strategic implementation to increase COVID-19 vaccine uptake in this ongoing pandemic and for planning to promote health equity in future bio-events and health crises.
Introduction Previous research with older adults indicates that the relationships between health and physical and psychological functioning are complex. For example, some older people with fragile health experience relatively high levels of life satisfaction, an important indicator of mental well-being. However, in other older adults, poor health is associated with limited physical and psychological functioning. Such research provides a platform for examining the notion that advancing age and its associated loss of health and functioning are not always related to decrements in psychological well-being. In a series of studies, Smith and Baltes and Gerstorf et al. identified cluster profiles of German older adults based on variables representing health and functioning. The profiles incorporated measures from various domains. Their results revealed nine different cluster groups demonstrating varying relationships between health and functioning. In describing demographic differences between the profiles, Smith and Baltes found that older age and female gender tended to be overrepresented in clusters classified as displaying relatively poor functioning. In a study conducted with Swedish older adults, Borglin et al. examined cluster profiles, describing the cluster groups on a range of variables including self-rated health, health problems, physical activity and social support. The authors identified three different groups varying in their levels of quality of life. Their results showed that approximately one-third of the sample displayed relatively low levels of quality of life; these participants tended to be older females with poor self-rated health and numerous health problems, and were less physically active than participants in the remaining cluster groups. The aforementioned studies are important in indicating the presence of optimal and less optimal functioning in older adults.
Whilst these studies have discriminated the identified clusters on a range of health, behaviour and personal characteristics, additional important variables, both for forming, validating and discriminating the clusters, need to be explored. Here, it may be particularly important to concentrate on modifiable characteristics that can be targeted in future lifestyle and wellness interventions. These variables include Body Mass Index (BMI), self-reported health and health conditions, functional limitations, depressive symptomatology, life satisfaction, self-esteem, social functioning and health behaviours. BMI is important to include in a cluster solution, as previous research has documented a negative relationship between BMI and indicators of physical functioning, such as the ability to carry out everyday tasks including lifting/carrying groceries, climbing several flights of stairs and walking several blocks. However, there is inconclusive evidence with regard to the relationship between BMI and indicators of psycho-social well-being in older adults, largely due to the limited number of studies examining such relations. Perceptions of health are one of the main factors influencing older adults' quality of life. Borglin et al. found that the presence of health conditions tended to cluster with poor health and low quality of life. Functional limitations may be a consequence of existing health problems and have also been shown to be related to psychological health, including depression. With regard to depressive symptomatology, Smith and Baltes found that this variable tended to cluster with anxiety, loneliness and physical frailty. These findings illustrate the likelihood of co-existence of low levels of physical and psychological functioning.
In contrast to the evidence suggesting that depressive symptomatology clusters with low levels of physical functioning, in their study with older German adults, Smith and Baltes identified that about 10% of participants experienced concurrent physical illness yet relatively high levels of life satisfaction. This finding provides some empirical evidence for the notion that ageing-associated declines in physical health do not always equate with low levels of well-being in older adults. The role of self-esteem has not previously been considered when describing typologies of older adults based on function and well-being variables. This is despite evidence suggesting that self-esteem can protect against the development of depression in older adults, while low self-esteem is a positive predictor of poor self-care in older people and of poor health behaviours following illness in older people who have had a heart attack. The older people become, the more loneliness they are likely to experience as a result of losing lifelong partners and friends. In addition, loneliness appears to display significant relationships with the presence of physical illness, physical functioning and depression. In contrast, high levels of social support, a positive indicator of social functioning, appear to be experienced by those older adults who also report high levels of quality of life. Engagement in health behaviours can be either a determinant or an outcome of well-being and functioning. With regard to alcohol consumption, research by Lang et al. suggests that moderate alcohol consumption, not abstinence, is associated with improved subjective well-being and lower levels of depressive symptomatology in a large group of British older adults. Whether such findings generalise to older adults of other nationalities is presently unknown.
Further, smoking status and leading a physically active lifestyle have been shown to be related to changes in health among older European adults. Taken as a whole, Haveman-Nies et al. showed that while self-rated health and self-care ability generally deteriorated over a 10-year period in all older adults, being a non-smoker and physically active delayed that deterioration in 70-75 year olds. However, it was also apparent that the results differed across gender groups. For women, only engagement in physical activity was related to a delay in the onset of functional dependence, whereas for men, both lifestyle behaviours were important. Finally, evidence also exists to support the notion that health behaviours tend to cluster, such that people are likely to engage in numerous behaviours concurrently. It has been suggested that engagement in several health risk behaviours may lead to an elevated risk of disease that is larger than what can be expected from the addition of the individual risk factors. However, to date, the only study to consider the clustering of multiple health behaviours was conducted by Chou with a large representative sample of Hong Kong Chinese older people aged sixty and older. Chou revealed, for example, that males, older age groups and the more well educated were most likely to smoke and drink heavily, be physically inactive and consume low levels of vegetables and fruits. Given the relative paucity of research in this area with older adults and the fact that the study by Chou was conducted using a sample of Chinese older adults, more research is needed using older adults from other parts of the world. Apart from the range of modifiable characteristics described above, some demographic variables are also important to consider and were therefore included as variables used to discriminate between the clusters. These include age, gender, education and nationality.
This is because previous research has documented the importance of considering each of these variables in assessments of health and functioning. Extant research identifying cluster groups of older adults in this area has used samples from individual nations. Thus, the extent to which certain typologies are more likely to be identified in certain countries is currently unknown. Taking a European perspective, previous research has shown that older people in Southern Europe report more disabilities and depression than their Northern European counterparts. Thus, it is possible that the distribution across health and well-being profiles may differ between Northern and Southern Europeans. In view of the above, the purpose of the present study was to identify health and well-being typologies among a sample of European older adults and to describe various demographic, social and health behaviour characteristics of such groups. We expected to identify groups differing with regard to health and functioning characteristics. We also expected that at least one group of older adults would display concurrent high levels of health, physical and psychological functioning. Conversely, we hypothesised that at least one cluster would emerge characterised by concurrent poor health, physical and psychological functioning. Second, based on previous research, we hypothesised the following characteristics to be overrepresented in relatively well-functioning clusters: younger age groups, male gender, Northern Europeans, high levels of education, lack of social isolation, consumption of moderate amounts of alcohol, non-smoking status and relatively high levels of walking behaviour. The opposite pattern of results would be evident in cluster groups consisting of older adults displaying poorer functioning.
--- Methods --- Participants and procedure To represent the North-South divide in the EU, participants were selected from six European Union countries: England, Sweden, Finland, Estonia, Greece and Italy. Participants constituted a convenience sample from the largest or second-largest city in each country. To be included in the study, participants needed to be residing in an urban area, be aged 65 and above, be physically mobile and be able to read and write in the official language of the country in which the questionnaire was administered. The data were collected during the spring of 2008. Initially, the coordinator for each participating country drew up a list of places in the community where, based on experience, they believed older adults would frequent. The list differed slightly across the participating countries, as it was acknowledged that the list should be culturally sensitive. The investigators also made use of personal contacts from previous research conducted with older adults. Based on the list constructed, trained research assistants (RAs) in each participating country sought out at least five different sites from each location identified, over 2 weeks between 10 a.m. and 2 p.m., and approached older adults in person. The RA introduced him/herself and explained the nature of the study, checked that each person approached fulfilled the inclusion criteria, and only then asked them for their willingness to complete a questionnaire. All the participants provided written informed consent prior to taking part in the study. A small table was available for participants to use when completing the questionnaire, and the completion was supervised by the RA. Thus, the participants had opportunities to ask questions. The ethical guidelines of the psychological societies in each of the countries were adhered to throughout.
The RA noted down the number of older adults approached who fulfilled the inclusion criteria and the number agreeing to complete the questionnaire in order to calculate the response rate. Taken as a whole, the response rate was 75.6%. The sample constituted 1,381 participants. Following the list-wise deletion of outliers, the distribution across the various countries was: England: n = 247, Nordic countries: n = 205, Estonia: n = 175, Greece: n = 342 and Italy: n = 372. The study had the approval of the University Ethics Committees of each participating country. Participants had a mean age of 73.7. Taken as a whole, females constituted 63.1% of the total sample. More than half of the participants were married, while 29.2% were widowed. We compared demographic characteristics of our sample with those of previous representative data sets. The samples were broadly similar, except that our sample somewhat overrepresented females, and the Estonian subsample was somewhat better educated compared to a sample from an existing representative survey. --- Measures While some instruments were already available in each of the languages, the remaining were translated from English into the relevant language by researchers within the research team. In addition, another person with expert knowledge of the relevant language and of English translated the questionnaire back into English. Discrepancies between the translations were identified, and the wording was changed as necessary until consensus was reached. --- Body mass index Self-reported height and weight were used to determine BMI using the standard formula weight (kg)/height (m)². --- Health variables Self-reported health was measured using one item with response options ranging from 'very bad' to 'very good'.
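The BMI calculation described above is simple arithmetic and can be sketched as follows; the function name and example values are illustrative, not taken from the study.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index from self-reported weight (kg) and height (m):
    the standard formula weight / height**2."""
    return weight_kg / height_m ** 2

# Illustrative example: a participant reporting 70 kg and 1.75 m.
print(round(bmi(70, 1.75), 1))  # → 22.9
```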
Further, the participants were asked to indicate which of the following health conditions they had experienced in the past 12 months: high blood pressure, heart trouble, stroke, bronchitis, asthma, arthritis, diabetes, cancer, circulatory problems, emphysema, osteoporosis, cataracts and glaucoma. For the purpose of the main analysis, a simple count of health conditions was performed to be entered into the cluster analysis. The health conditions selected were those listed by Balfour and Kaplan. --- Functional limitation Based on Nagi's conceptualisation of functional limitations, the participants were asked to rate their levels of difficulty performing nine tasks. The tasks included pushing a large object, lifting a weight of more than 10 lb, reaching the arms high above the shoulders, writing or handling small objects, stooping or crouching, getting up from a chair, standing in one place for more than 15 min, walking a quarter mile and walking up a flight of stairs. Participants were said to have severe difficulty with a given task when they reported either a lot of difficulty or that they were unable to do the task without help. The tasks in which the participants reported such severe difficulty were then summed to provide a score on functional limitation, with higher scores representing more functional limitation. Such a calculation of functional limitation has previously been used by Balfour and Kaplan. In the present study, the internal reliability coefficient for the scale was α = 0.88. --- Depressive symptoms The Center for Epidemiologic Studies Depression (CES-D) scale was used to assess depressive symptoms in the past week. The scale consists of 20 items with overall scores ranging between 0 and 60. An example item is 'I felt tearful'. This instrument has received support with regard to its validity and reliability in older populations. The internal consistency was high in this study.
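A minimal sketch of the count-of-severe-difficulty scoring described for the functional limitation measure; the numeric response coding shown here is an assumption for illustration and is not taken from the questionnaire itself.

```python
# Hypothetical response coding for each of the nine Nagi tasks:
# 0 = no difficulty, 1 = some difficulty, 2 = a lot of difficulty,
# 3 = unable to do the task without help.
# "Severe difficulty" covers the last two response categories.
SEVERE = {2, 3}

def functional_limitation_score(responses):
    """Count of tasks rated as severe difficulty; higher = more limitation."""
    assert len(responses) == 9, "expected ratings for all nine tasks"
    return sum(1 for r in responses if r in SEVERE)

# Illustrative participant with severe difficulty on three tasks.
print(functional_limitation_score([0, 1, 2, 3, 0, 0, 1, 2, 0]))  # → 3
```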
--- Life satisfaction The Satisfaction With Life Scale was used to measure global life satisfaction. This five-item questionnaire is presented with scales ranging from 1 to 7. The questionnaire has been widely adopted, and high levels of reliability and validity have been reported. The internal reliability coefficient for the scale in the present study was α = 0.86. --- Self-esteem Rosenberg's self-esteem scale is a 10-item scale measuring global self-esteem. An example item is 'I feel that I'm a person of worth, at least on an equal plane with others', and items are scored on a four-point scale with response options ranging from 1 to 4. The questionnaire is unidimensional and well validated, and previous research studies with older populations have identified adequate internal reliability coefficients. In the present study, the internal consistency of the scale was α = 0.82. --- Demographic characteristics Apart from ticking a box representing their gender, the participants were asked to indicate their age by providing their date of birth. Further, a categorical variable was created in which the participants were asked to tick the response representing their highest level of education. --- Social isolation One item was used as a measure of the structural aspect of social networks, based partly on the conceptualisation of Seeman and Syme. In effect, this item measures the extent to which the participants could be classified as being socially isolated. Specifically, the participants were asked about the number of close friends and relatives seen at least once per month, with response options of 'less than 2' and '2 or more'. --- Health behaviours Three different health behaviours were assessed: alcohol consumption, smoking status and physical activity. Both alcohol consumption and smoking status were measured with single items.
Specifically, to assess alcohol consumption, the participants were asked to indicate whether they consumed 'none', '1 per day or less' or 'more than 1 per day' each month. The item measuring smoking status was taken from Balfour and Kaplan, in which the response options included 'non-smoker', 'former smoker' and 'current smoker'. Finally, with regard to walking behaviour, the relevant section from the short version of the International Physical Activity Questionnaire (IPAQ) was used. This questionnaire was chosen because it has been extensively validated across a range of general adult population groups and, more recently, also in older adults. Specifically, the participants are asked on how many days in the previous 7 days they have walked for at least 10 min at a time, and subsequently those who respond 1 or more are asked for how many hours and/or minutes per day. A total walking score expressed as MET-minutes per week is then calculated using the formula: 3.3 × minutes per day × days per week. --- Data analysis Cluster analysis is a standard analytical tool for identifying typologies. It is a useful tool for identifying distinct groups of people who are similar on selected variables and unlike those from other groups. This method has been used effectively in previous research with older adults. To identify the typologies, we first conducted a hierarchical cluster analysis, followed by a confirmatory k-means cluster analysis. In order to validate the final cluster solution, Aldenderfer and Blashfield suggested that the derived clusters should be compared on variables not used in the cluster solution. It is important here to choose measures that are important conceptually as well as practically. Therefore, a one-way MANOVA was conducted with life satisfaction and self-esteem serving as the dependent variables and the cluster solution representing the independent group variable.
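The IPAQ walking score described above (3.3 × minutes per day × days per week) is straightforward to compute; a minimal sketch, with names of our own choosing:

```python
def walking_met_minutes(minutes_per_day: float, days_per_week: int) -> float:
    """IPAQ short-form walking score in MET-minutes per week.

    Uses the fixed walking intensity of 3.3 METs; respondents reporting
    zero walking days score zero.
    """
    if days_per_week == 0:
        return 0.0
    return 3.3 * minutes_per_day * days_per_week

# Example: 30 minutes of walking on 5 days -> 495 MET-minutes/week.
print(round(walking_met_minutes(30, 5), 1))
```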
Finally, we conducted a series of Pearson χ² analyses and a one-way ANOVA to examine demographic, social and lifestyle behaviour differences between the cluster groups. Prior to the main analyses, all multivariate and univariate outliers were deleted. The remaining cases were then standardised. Initially, a hierarchical cluster analysis was conducted in order to explore the nature of the clusters. We used the Ward method because it avoids the problem of 'chaining' associated with other methods. The similarity measure adopted was the squared Euclidean distance. The agglomeration schedule coefficients were examined to determine the number of clusters. Hair et al. have specified that small coefficients indicate that rather homogenous clusters are being merged, while large coefficients suggest that clusters with dissimilar members are being combined. Thus, fairly large increases in the coefficients between two adjacent sets were inspected to decide the number of clusters. In line with previous research, we used z scores of ±0.50 as the criterion for interpreting whether the participants scored relatively high or low on given variables compared to members of other clusters. --- Results The results of the hierarchical analysis suggested a four-cluster solution. In line with recommendations by Hair et al., the centroid values from the hierarchical analysis were used as the initial seed values for the confirmatory k-means analysis. The results revealed final centroid values that were similar to those obtained from the hierarchical analysis. The results are depicted in Fig. 1. The four clusters were subjectively labelled 'good health and moderate functioning', 'moderate health and functioning', 'obese and depressed' and 'low health and functioning'. In order to validate the final cluster solution, Aldenderfer and Blashfield suggested that the derived clusters should be compared on variables not used in the cluster solution.
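The two-stage procedure described above (exploratory Ward hierarchical clustering, then a confirmatory k-means seeded with the hierarchical centroids) can be sketched with SciPy; the data here are random stand-ins, not the study's variables:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.cluster.vq import kmeans2
from scipy.stats import zscore

rng = np.random.default_rng(0)
# Toy stand-in for the standardised cluster variables (the study entered
# BMI, self-rated health, condition count, functional limitation and
# depressive symptoms); here: 200 cases, 5 variables.
X = zscore(rng.normal(size=(200, 5)), axis=0)

# Stage 1: exploratory hierarchical clustering with the Ward method.
Z = linkage(X, method="ward")
labels_h = fcluster(Z, t=4, criterion="maxclust")

# Stage 2: confirmatory k-means seeded with the hierarchical centroids.
seeds = np.vstack([X[labels_h == k].mean(axis=0) for k in range(1, 5)])
centroids, labels_k = kmeans2(X, seeds, minit="matrix")
print(len(set(labels_k.tolist())))  # number of final clusters
```

Interpretation against the ±0.50 z-score criterion would then compare each cluster's centroid values, as the study does.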
It is important here to choose measures that are important conceptually as well as practically. Life satisfaction and self-esteem were chosen as the benchmark variables. A one-way MANOVA was conducted to examine differences between the four cluster groups on these variables. The MANOVA was significant (F = 28.1; p < 0.001; partial η² = 0.07). The univariate analyses revealed that the cluster groups differed significantly in both life satisfaction (F = 52.5; p < 0.001; partial η² = 0.12) and self-esteem (F = 26.5; p < 0.001; partial η² = 0.07). Specifically, post hoc tests revealed that the 'good health and moderate functioning' group had significantly higher levels of life satisfaction and self-esteem than the remaining groups. Further, the 'moderate health and functioning' cluster had higher levels of life satisfaction than the 'obese and depressed' and 'low health and functioning' groups, as well as higher levels of self-esteem compared to the 'low health and functioning' cluster. Finally, the 'obese and depressed' group had significantly higher scores on life satisfaction, but not self-esteem, than the 'low health and functioning' group. In addition to identifying and validating the clusters, we examined differences between the cluster groups in age, gender profiles, countries of origin, education profiles, social isolation and health behaviour. With regard to gender, a χ² analysis (χ² = 27.4; p < 0.001) revealed that males were overrepresented in the 'good health and moderate functioning' and 'moderate health and functioning' clusters and underrepresented in the other two cluster groups. In contrast, females were underrepresented in the former two cluster groups but overrepresented in the 'obese and depressed' and 'low health and functioning' groups. The differences in distribution between the countries were uneven (χ² = 136.2; p < 0.001; see Table 3).
Specifically, the British subsample was overrepresented in the 'good health and moderate functioning' cluster, but also slightly in the 'low health and functioning' group. This sample was underrepresented in the other two clusters (the k-means solution for the health and functioning clusters is depicted in Fig. 1). Participants in the Nordic countries seemed to demonstrate the most favourable profile, as they had a higher than expected count in the 'good health and moderate functioning' and 'moderate health and functioning' clusters, while being underrepresented in the 'obese and depressed' and the 'low health and functioning' groups. A higher than expected count of Greek older adults belonged to the 'obese and depressed' cluster, but fewer than expected to the 'low health and functioning' and 'moderate health and functioning' groups. In contrast, the Italian older adults were overrepresented in the 'low health and functioning' and the 'moderate health and functioning' clusters, while being underrepresented in the 'good health and moderate functioning' cluster. The Estonian older adults displayed a similar profile to the Italians, except that they were also slightly overrepresented in the 'obese and depressed' cluster. The benchmark variables by cluster ('good health and moderate functioning', 'moderate health and functioning', 'obese and depressed' and 'low health and functioning', respectively) were: life satisfaction Z = 0.4, M = 5.3a, SD = 1.2; Z = 0.01, M = 4.8b, SD = 1.3; Z = -0.3, M = 4.3c, SD = 1.4; Z = -0.6, M = 4.0d, SD = 1.3; self-esteem Z = 0.3, M = 3.3a, SD = 0.4; Z = -0.1, M = 3.1b, SD = 0.4; Z = -0.2, M = 3.0b,c, SD = 0.5; Z = -0.4, M = 3.0c, SD = 0.5 (means sharing a superscript letter within a row do not differ at p < 0.05). With regard to education profiles, the distribution among the clusters was likewise uneven (χ² = 59.7; p < 0.001; see Table 3). Specifically, participants with primary education were overrepresented in the 'obese and depressed' and the 'low health and functioning' clusters, while they were underrepresented in the remaining cluster groups.
The opposite pattern was evident for those with secondary education, so that the 'good health and moderate functioning' and 'moderate health and functioning' participants were more likely to have secondary education than those in the other clusters. Finally, those who had taken Further/Higher Education were overrepresented in the 'good health and moderate functioning' group but underrepresented in the other groups. While most participants reported relatively low levels of social isolation, the distribution among the cluster groups was unequal (χ² = 11.1; p < 0.05; see Table 3). Specifically, more participants in the 'moderate health and functioning' and the 'low health and functioning' groups tended to see fewer than two close friends or family members per month, while participants in the 'good health and moderate functioning' cluster group were more likely to report low levels of social isolation. With regard to alcohol consumption, again the distribution was uneven (χ² = 46.6; p < 0.001), with fewer participants than expected in the 'good health and moderate functioning' group consuming no alcohol, while more than expected in the remaining groups reported consuming no alcohol. In contrast, participants in the 'good health and moderate functioning' cluster were overrepresented in the 'one per day or less' and the 'more than one alcoholic unit per day' groups. The participants in the remaining cluster groups were less likely to consume any alcohol. No significant differences emerged in smoking status between the cluster groups (χ² = 10.0; p > 0.05; see Table 3). For age, the ANOVA was significant (F = 24.1; p < 0.001; partial η² = 0.06). Post hoc tests revealed that the 'moderate health and functioning' participants were significantly older than participants in the 'good health and moderate functioning' and the 'obese and depressed' clusters. Further, and not unexpectedly, the 'low health and functioning' participants were significantly older than participants in the other groups.
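The uneven distributions reported above were tested with Pearson χ² analyses on cluster-by-category contingency tables; a minimal SciPy sketch on purely illustrative counts (not the study's data):

```python
from scipy.stats import chi2_contingency

# Illustrative cluster-by-gender counts (rows: the four clusters;
# columns: males, females). These are NOT the study's figures.
table = [
    [120, 80],   # good health and moderate functioning
    [90, 110],   # moderate health and functioning
    [40, 100],   # obese and depressed
    [30, 80],    # low health and functioning
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, significant at 0.001: {p < 0.001}")
```

Comparing observed counts against the `expected` array is what grounds statements such as "males were overrepresented" in a given cluster.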
Finally, to examine differences in walking behaviour between the cluster groups, a one-way ANOVA was conducted. The ANOVA was significant (F = 10.7; p < 0.001; partial η² = 0.03). As expected, the post hoc test results revealed that the 'low health and functioning' cluster engaged in significantly lower levels of walking than the remaining groups. --- Discussion The cluster analysis conducted in the present study allowed us to identify older adults who were more or less well functioning in different ways. The findings revealed four distinct clusters, which were externally validated by examining differences in well-being variables not used to create the cluster solutions. In support of H1, both physically and psychologically well-functioning, and low-functioning, clusters were identified. These clusters were distinguished by their levels of life satisfaction and self-esteem. The results demonstrate heterogeneity with regard to health and physical and psychological functioning in older adults. Although some different variables were used in the series of studies by Smith and Baltes and by Gerstorf et al., these authors identified levels of differential ageing, for example by identifying clusters that had positive, average, and a mix of negative and positive characteristics. This is consistent with the results of our study. However, their studies were conducted using a sample of German adults aged 70 and over, 14% of whom were institutionalised. One of the main contributions of the current study to the extant literature is that we used a sample of European community-dwelling adults and thus provide a more international perspective on profiles of differential ageing. In contrast to previous research, we included BMI as a cluster variable. Interestingly, our results showed that depression was related to BMI, which adds to the limited research evidence relating BMI to psycho-social health in older adults.
It could be argued that participants in the 'obese and depressed' cluster represented the second least favourable group with regard to overall functioning, given not only their excess weight and levels of depression, but also their low levels of life satisfaction and self-esteem. However, curiously, this cluster group did not seem to be struggling with poor perceived health or limitations in physical functioning. This is not wholly consistent with prior research suggesting that obesity is associated with poorer physical functioning and lower levels of physical health in older adults. This result clearly demonstrates heterogeneity with regard to functioning in obese older adults. In support of H2, our results demonstrated that the more well-functioning cluster groups generally displayed demographic, social and lifestyle characteristics which previous research has shown to be associated with health and well-being. With regard to national differences, our results were concordant with previous research demonstrating that older adults from Northern Europe generally display fewer functional limitations and disabilities and lower levels of depression than their Southern European counterparts. Taking a critical lens to these results, it might be important in future research to ascertain to what extent such discrepancies between nations may be due to differences in cultural perceptions and reporting of symptoms. Indeed, one of the main limitations of the present study is that we relied on self-report data to measure elements of functioning. The fact that participants in the least well-functioning cluster group were generally older than those in the other cluster groups is also consistent with H2, and with previous research demonstrating that older groups are at greater risk of displaying poorer psychological and physical functioning than their younger older peers.
Further, females greatly outnumbered males in this low-functioning cluster, which provides support for previous findings showing that women tend to experience greater levels of depressive symptomatology and display poorer self-rated health than men as they grow older. With regard to other socio-demographic factors, the majority of the participants in the 'low health and functioning' cluster had only elementary school education and were more socially isolated compared to participants in the other cluster groups, thus displaying the unfavourable socio-economic profiles expected on the basis of previous research. The participants in the 'low health and functioning' cluster also tended to report alcohol abstinence, non-smoking status, and low levels of walking behaviour. With regard to alcohol abstinence and low reported walking, prior research findings suggest that these levels are associated with poorer functioning among groups of older adults. However, the results are not consistent with regard to smoking behaviour. Thus, while previous research with other groups of older adults has shown that health risk behaviours tend to cluster together, the results for the present sample did not fully support this idea. This is encouraging, as other research has demonstrated a synergistic effect of lifestyle risk factors on deterioration in functioning. However, researchers should consider including more detailed measures of alcohol consumption and smoking status in future research. In addition, ideally, researchers should in future attempt to incorporate objective measures of walking.
--- Strengths and limitations The study builds on previous research in three main ways: by including a range of physical and psychological function variables in the cluster solution and examining how these interact with BMI; by using a sample of older adults from a range of EU countries; and by considering a range of demographic, social and health behaviour characteristics for each of these taxonomies. Taken together, this information can be used to inform the design of interventions aimed at enhancing quality of life in older adults across Europe. However, apart from the limitations associated with self-report data and possible differences in cultural interpretations of symptoms across nations, other limitations should be taken into account in interpreting the findings of the present study. First, we did not include a random sample of older European adults. While this would have been ideal, comparisons with existing representative surveys and national statistics showed that our sample was broadly similar to such data. However, women were generally overrepresented in our dataset, which limits the generalisability of our findings. Further, due to the cross-sectional design of the present study, it is not possible to delineate the stability of cluster membership over time. In addition, we are not able to infer causality between perceptions of physical and psychological functioning. Indeed, previous research has documented evidence that physical function may predict depressive symptomatology, but also that levels of depression may affect the risk of decline in physical functioning. --- Conclusions The results suggest that distinct health and well-being typologies can be identified among European older adults and that such groups differ with regard to socio-demographic and lifestyle behaviour characteristics.
It would be particularly useful to examine in future interventions the extent to which the typologies identified in the present study would respond differently to various interventions designed to enhance physical and psychological functioning in older adults.
The purpose of the present study was to identify health and well-being typologies among a sample of older European adults. Further, we examined various demographic, social, and health behaviour characteristics that were used to discriminate between such groups. The participants were 1,381 community-dwelling adults aged 65 years and above (M age = 73.65; SD = 7.77) from six European Union (EU) countries who completed self-reported questionnaires. Hierarchical cluster analysis was initially conducted followed by a k means analysis to confirm cluster membership. Four clusters were identified and validated: 'good health and moderate functioning' (38.40%), 'moderate health and functioning' (30.84%), 'obese and depressed' (20.24%) and 'low health and functioning' (10.51%). The groups could be discriminated based on age, gender, nationality, years of education, social isolation and health behaviours (alcohol consumption and walking behaviour). The results of the study demonstrate heterogeneity with regard to the relationships between the variables examined. The information can be used in targeting older Europeans for health promotion interventions.
Introduction The SARS-CoV-2 infection is the first pandemic in the era of modern technological societies with such strong governmental precautionary measures. The pandemic has severe impacts on health and well-being, with a total of 11,470,637 global infection cases and 534,784 deaths in 188 countries as of 6 July 2020 [1]. On the one hand, direct lethal outcomes have an impact on life; on the other, governmental restrictions to keep the spread of the virus under control have also affected nearly all aspects of life and public health, e.g., daily traffic [2] and psychological and social aspects [3,4]. The importance of social and psychological factors for mental and physical well-being is well known, and one aspect is the effect of leisure activities on physical and mental health [5]. In this study, we used birders as an example of a non-consumptive nature-based outdoor leisure activity. To our knowledge, this is the first study assessing the lockdown impact on birders. In addition, it is one of few studying the COVID-19 pandemic's impact on a leisure activity on a global scale by asking individuals about their birding behavior during the pandemic. Our study suggests that long lockdowns with strict regulations may severely impact leisure activities. The results can be applied with caution across other nature-based recreational activities. --- Theoretical Background --- Recreation Specialization Recreation specialization is concerned with a leisure activity that needs some affective, motivational, and cognitive effort [6,7]. Recreation specialization represents a continuum in behavior between generalists with low involvement and specialists with high involvement [6], meaning that some people invest more time, effort, and cognitive resources than others. Scott and Shafer [7] defined a three-dimensional construct with a behavioral, a cognitive, and an affective component. Indicators of the behavioral component are equipment, previous participation, and experience.
Indicators of the cognitive dimension include the level of competence and knowledge of the activity. Indicators of the affective component include lifestyle centrality and continued participation [7,8]. Centrality to lifestyle has a psychological and a behavioral component [8] and can be viewed as part of the larger construct of involvement [9]. Social-psychological involvement is a state of motivation, arousal, or interest regarding a product, an activity, or an object [10]. Birders can be referred to as experts in the field of identifying bird species. However, their skills may differ, e.g., between men and women . --- Birding as Leisure Activity Birding is an important nature-based recreation activity. For example, about 18% of Americans watch birds and about 36% of them carry out birding trips away from their homes [11,12]. The concept of recreation specialization has also been adapted to birders. McFarlane [13] used the factors "past experience", "economic commitment", and "centrality to lifestyle" to group birders into four categories: novice, casual, intermediate, and advanced birders. Hvenegaard [8] categorized birders into three distinct groups: advanced-experienced, advanced-active, and novices. Scott and Thigpen [14] identified four groups of birders, namely casual, interested, active, and finally, skilled birders. More colloquial definitions sometimes group people into birdwatchers, birders, and twitchers [15,16]. A special case has been described in birders that travel long distances to see rarities [17]. This behavioral commitment defines them as hard-core birders. Birding follows the dimensions of recreational specialization [6], including aspects of skill and knowledge, behavioral and personal commitment, stages of involvement [10], and centrality to lifestyle [9]. A recent study has highlighted that there is a significant positive relationship between recreational specialization and birder's travel intention [18]. 
Additionally, demographic factors may influence the specialization level of birders . Therefore, birding is an ideal nature-based recreation activity to study the influence of the COVID-19 lockdown on a common leisure and recreational activity, because it requires activity mostly outside in nature, travelling to birding areas, and it often includes a social component, such as meeting other birders. Voluntary birders also play an important role on large-scale biodiversity surveys and bird population monitoring. Birders around the globe contribute to the massive data collection that is used for scientific studies [19,20]. The effort of leisure birding to ornithological science is unparalleled and allows studies that could not be conducted by paid professionals on a large scale [20]. The knowledge about changes in birding behavior is important because it may have strong consequences for the data collection and long-term analyses of bird data. For example, a recently published study from South Africa stated that the COVID-19 governmental measures had a marked negative effect on the data collection for the Southern African Bird Atlas Project. Due to restricted mobility, there was a 70% decline in data reporting in April 2020 [21]. Since there is no previous comparable situation, this part of our research is explorative. --- Demographic Factors: Gender and Age Birding data suggests that men usually display a higher level of specialization. For example, male members of the Carolina Birding Club reported higher skills and more expensive birding equipment [22]. Scott and Thigpen [14] found that men participated more frequently in birding activity, travelled longer distances, and reported higher identification skills than women. However, there was no difference between genders in level of commitment. Men are also more competitive in their birding activities [15]. 
Other well-known psychological differences between men and women can contribute to gender-specific behavioral responses during the COVID-19 outbreak. First, men score significantly higher in risk-taking than women in many tasks as found in the meta-analysis by Byrnes et al. [23]. Similarly, men tend to rule-breaking behavior and delinquency more than women [24]. Further, women were more anxious than men in the general trait anxiety or neuroticism [25]. In terms of personality dimensions, women scored higher than men in neuroticism-anxiety, while men scored higher in sensation seeking [26]. Combining this evidence, we predict that women experience more restriction in their birding behavior, because they may be more anxious about the disease, try to reduce risk of infection, and stick to governmental rules more than men. In general, life experience comes with age, and experienced people are predicted to take fewer risks. From a personality psychology viewpoint, age is related to a higher self-control, emotional and mood stability, and especially to lower sensation seeking [27]. In addition, many countries set movement restrictions and quarantines for older people and different risk groups because COVID-19 seems to affect those more strongly [28]. --- Impact of the COVID-19 Pandemic on Human Behavior During the COVID-19 pandemic, many countries adopted restrictive measures based on physical distancing, to prevent human-to-human virus transmission [29]. This was termed "lockdown". Lockdown is a colloquial term for "mass quarantine", based on "stay-at home" or "shelter-in-place" orders [29]. Behavioral changes, especially, reducing large recreational gatherings, which are considered superspreading events, were one option to reduce the virus spreading [30]. However, quarantine and prolonged home stay may have severe side effects, such as physical inactivity and social isolation [29]. Activity in nature seems to be a protective factor for public health [31]. 
For example, Pasanen et al. [32] showed that emotional well-being was positively related to physical activity in nature. Still, governmental and organizational information can guide people to avoid infections . Despite the fact that short-time quarantine can slow down or restrict the spread of COVID-19, long-time isolations can cause both economical and mental problems. Currently, global, national, regional, or even local restrictions for travelling have been one of the main actions to prevent the spread of COVID-19 [30]. Additionally, even adjacent countries employed different strategies, such as Finland with strong restrictions and Sweden with only less and mild restrictions. This may have consequences, e.g., on nature-based tourism and recreation activities across multiple spatial scales. For example, at the national level, many countries restricted entry for foreign tourists [33]; and local people were unable to use bird towers and nature centers for their recreation because these sites were closed [34]. People could also change their daily or circadian area use due to COVID-19. For example, they may avoid, e.g., shopping during the rush hours. Many shops have arranged special visiting hours with few customers for elderly and other risk groups. Temporal changes were found in nature-based recreation. For example, in Norway, outdoor recreational activity increased by 291% during lockdown relative to a 3-year average for the same days [35]. Pedestrian activity increased in city parks, peri-urban forest, as well as protected areas [35]. Additionally, many companies started remote working or shortened working times, which may lead to more time at home and more time for birding. --- Hypotheses and Predictions In the context of the pandemic lockdown, we have five main hypotheses on the behavior of birders, based on the theoretical background outlined above. Hypothesis 1. COVID-19 influences where birders hold their activities. 
Thus, it will influence the spatial area use of birders because of workplace closings, stay-at-home requirements, and international and domestic travel restrictions [2,33]. Many birders travel, sometimes also outside their home country, to interesting birding sites and to see rare bird species [14,15,22,36]. Therefore, we predict that birders should travel less, take fewer long-distance birding trips, and observe birds closer to their homes or even focus on their yards during the pandemic. --- Hypothesis 2. COVID-19 influences when birders hold their activities. Thus, it will change the temporality of birding. In general, people may have more time for leisure, like birding, because they are forced into remote working and/or partial unemployment. We also hypothesized that birders will try to avoid rush hours at preferred birding sites, like bird towers. Basically, birding can be done throughout the day and on a daily basis, but most birders go to watch birds during the weekends when they have more time, and when the birds are most active, i.e., just before or during sunrise [11,37]. --- Hypothesis 3. COVID-19 influences birders' social behaviors. As birding is to some extent a social activity [8,13], we expect that birders should report a decrease in group-birding with their friends due to restrictions on gatherings and social distancing rules [29,30]. We also predict that birders will report cancellations of common birding events that decrease their sociality. --- Hypothesis 4. Demographic factors (age and gender) influence birders' behavior in the COVID-19 context. We predict that older birders change their behavior more than younger ones, because younger people are predicted to take more risks than older people [28] and may adhere less to governmental restrictions [27]. We also predict that women will change their birding behavior more than men, because men differ from women in the behavioral component of birding and in competitiveness [8,18,22].
Men are also more risk-taking than women; therefore, men might ignore the restrictions more easily [23][24][25][26]. --- Hypothesis 5. Country of residence influences birders' behavior in the COVID-19 context. Most countries have reacted strongly to the COVID-19 crisis, e.g., by engaging in social distancing and different kinds of closures [38]. Consequently, international tourist arrivals decreased by more than half from January to May 2020 compared to the same period in 2019 [39]. Additionally, birders travel a lot, at least those that take birding seriously, e.g., twitchers. However, countries differ widely in their resources to detect, prevent, and respond to outbreaks [40,41]. We hypothesized that there will be differences in birders' behavior between countries due to COVID-19. We predict that birders living in more highly developed countries (and individuals having more resources to travel) will be more affected by COVID-related factors than birders living in developing countries [8,14]. Because countries have different strategies against the COVID-19 pandemic, we predict that birders living in countries with a strong containment and closure policy will be more affected than birders living in countries with a lenient policy. --- Material and Methods Most birders are males [13] and about 50 years of age [14]. Our sample fits well with the age and gender bias of previous work. Thus, we consider the sample representative. Birding activity culminates in northern temperate latitudes in spring, especially when songbirds start singing and migrant birds arrive back from their winter quarters in the southern hemisphere [42]. This period coincided with the spread of COVID-19 to those areas. --- Data Collection We started our global questionnaire survey on 30 March 2020 and continued until 2 May 2020. We therefore collected data during the most restrictive lockdown measures in most of the countries.
A recruitment e-mail was distributed via Facebook, Twitter, mailing lists, social media, and websites. The recruitment letter and the questionnaire were available in English, Arabic, Farsi, Finnish, French, German, Greek, Polish, Portuguese, Spanish, and Italian. We asked recipients to distribute the questionnaire widely. We also contacted 176 birding organizations, websites, or magazines and asked them to publish the recruitment letter and the link to the questionnaire in their online channels, such as newsletters, Facebook groups, or announcements. We joined eighty-one "closed" Facebook groups and posted a bird photo with the announcement to support the study. About 2000 e-mails were sent during recruitment. The recruitment mail was sent to friends and colleagues in many countries, to birding e-mail listservs, and to other birders. The questionnaire was hosted on the German SoSciSurvey server to comply with the European Union's data privacy rules. The questionnaire asked one open-ended question: "Did your birding activity change during the Corona crisis? If so, please note how it changed?", and asked for the respondent's age, sex, and country of residence. We used a mixed-methods approach to study birders' reactions to the COVID-19 pandemic. We first carried out qualitative content analyses following the methods of Mayring [43,44]. In such an open-ended qualitative analysis, the researcher remains open to the responses of the participants. This complements pure hypothesis testing, because aspects that could not have been considered in advance can also enter the analyses. Afterwards, we statistically analyzed the coded data [45]. --- Qualitative Content Analysis The following classification categories were used for "Did COVID-19 change your birding activity?": yes = 1, no = 2, undecided = 3 when two situations were portrayed, e.g., when people live in more than one region or when comparing job versus hobby.
The last category was later merged with "1 = yes" because some aspects changed, e.g., at the responder's main living site but not at her/his summer-cottage site. The second category was whether a "reason was given", coded as 1 = yes and 0 = no reason given. For the "no" responses, please see Supplementary Material File S5. The coding of answers in the "yes" section is depicted in Table 1. Coding example: "I stay "local, not traveling" more than five miles. I bird in "remote locations" and only "alone". I also bird a lot in my "suburban neighborhood", walking a 1-2 mile loop. I go "early in the morning" before most neighbors are up and about, and those that are do respect physical distancing. A mega rarity was found two days ago in my state and I chose "not to chase" because we are on a stay-at-home order. [ . . . ]". This was coded as: yes, COVID-19 changed behavior; yes, reason given; spatial: closer to home; no group birding; circadian effect: going earlier; no twitching. Thus, six categories could be coded from this answer. Most of the coded categories are directly related to our study hypotheses, but some only indirectly. For example, reports of cancelled meetings or increased use of social media can only be indirectly related to the social-changes hypothesis. Therefore, in the coding example in Table 1, we highlighted them only as an explanatory part of the study questions. --- Statistical Analysis We used SPSS 26.0 to calculate chi-square tests, Spearman's rank correlations, ANOVAs, and t-tests. We used partial eta-squared as a measure of effect size. The chi-square test was used for the comparison of categories. ANOVAs and t-tests were used to test our hypotheses on gender and age differences. We also used descriptive statistics and Spearman correlation. For the analyses of gender and age, we used only variables with more than n = 200 mentions.
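The tests described above can be sketched as follows. This is a minimal illustration using SciPy rather than SPSS, and all counts and age values are invented for the example, not the study's data; partial eta-squared for a two-sample t-test is computed as t²/(t² + df).

```python
# Illustrative sketch of the reported tests (invented data, not the
# study's; the original analysis used SPSS 26.0).
import numpy as np
from scipy import stats

# Chi-square: did men and women differ in mentioning a category?
# Rows: men, women; columns: mentioned, not mentioned (made-up counts).
table = np.array([[120, 880],
                  [160, 640]])
chi2, p, dof, expected = stats.chi2_contingency(table)

# Age comparison via t-test, with partial eta-squared as effect size:
# eta_p^2 = t^2 / (t^2 + df) for a two-sample t-test.
rng = np.random.default_rng(0)
age_mentioned = rng.normal(58, 10, 300)      # mentioned the category
age_not_mentioned = rng.normal(54, 10, 300)  # did not mention it
t, p_t = stats.ttest_ind(age_mentioned, age_not_mentioned)
df = len(age_mentioned) + len(age_not_mentioned) - 2
eta_p2 = float(t) ** 2 / (float(t) ** 2 + df)
```

With these invented counts (12% vs. 20% mentioning), the contingency test comes out significant; the effect-size value is bounded between 0 and 1 by construction.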
For the between-country comparative analysis, we restricted the analysis to countries with at least n = 20 respondents. We included all responses, but for the analyses of gender and age, the data comprised n = 4441 for gender and n = 4466 for age, because some respondents did not give this information. We only compared men and women because of the low sample size of people who indicated their gender as "diverse". We followed the guidelines of the United Nations [46,47] to define the sovereignty of a country. We used the human development index (HDI) for correlations. The HDI data were extracted from the United Nations Human Development Report, where countries are ranked according to their development status. The HDI integrates economic information and measures of human development to obtain an overall score of human development. A high value indicates a highly developed country and a low value indicates a less developed country. We used the Oxford Coronavirus Government Response Tracker (OxCGRT) stringency index (SI) as an indicator of the different countries' containment and closure policies. The OxCGRT systematically collects information on several common policy responses that governments have taken in response to the pandemic on 17 indicators, such as travel restrictions, stay-at-home requirements, restrictions on gatherings, and public information campaigns. The SI is calculated using the policy indicators C1-C8 and H1. The value of the index on any given day is the average of nine sub-indices pertaining to the individual policy indicators, each taking a value between 0 and 100. The higher the SI, the stronger the containment and closure policy in the specific country. In this article, we refer to "N" as the sample size, whereas "n" refers to the number of cases reported. As this is a qualitative study, n refers only to respondents who named a specific reason. Percentages were calculated in relation to the total sample size.
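The stringency index construction described above (the average of nine 0-100 sub-indices) can be sketched as follows; the sub-index values in the example are invented, and in the real OxCGRT the sub-indices are themselves derived from ordinal policy scores.

```python
# Sketch of the OxCGRT stringency index for one country-day: the mean of
# nine sub-indices (policy indicators C1-C8 and H1), each scaled 0-100.

def stringency_index(sub_indices):
    """Average of the nine sub-indices; raises on malformed input."""
    if len(sub_indices) != 9:
        raise ValueError("SI is based on nine sub-indices (C1-C8, H1)")
    if not all(0 <= s <= 100 for s in sub_indices):
        raise ValueError("each sub-index must lie between 0 and 100")
    return sum(sub_indices) / len(sub_indices)

# Invented sub-index values: C1 school closing ... C8 international
# travel controls, H1 public information campaigns.
day = [100, 100, 75, 50, 100, 100, 50, 75, 100]
si = stringency_index(day)  # 750 / 9
```

A higher resulting value corresponds to a stricter containment and closure policy on that day.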
--- Results A total of N = 4484 responses were received from 97 countries. The mean age was 55.1 years. The mean ages of males and females differed significantly, with women being older than men in our sample. In total, 85% of respondents indicated that COVID-19 changed their birding behavior, and only 15% of respondents reported no effect at all. Most participants gave reasons and explanations for why their behavior changed or not. People responding with "no changes" in birding due to the pandemic explained less often why their behavior had not changed. Hereafter, we focus on respondents reporting changes in their birding behavior due to the pandemic. In total, 170 people reported that their birding activity had been reduced to zero. In total, 60% of the respondents reported spatial changes in their birding activity. (* One person even reported more twitching activity; a twitcher is considered a birder who responds with frenzied activity to news of rarities in his/her region, and will spend money and travel long distances to see a rarity or new bird [16].) Birding became more local, and people focused on their nearer environments and birding hotspots closer to their homes. During the pandemic, many people focused on yard birding, which includes different facets, such as feeder watching and watching from a balcony, rooftop, or within the immediate neighborhood. In total, 1.7% of respondents reported that they only watched birds from their windows, which can be considered the most restricted form of birding. In total, 0.7% of respondents mentioned driving to remote areas for birding to avoid others, and 0.3% of respondents reported visiting and exploring new places and birding hotspots. --- Temporal Changes in Birding In total, 12% of respondents reported having more time for birding, and 8% reported less time for birding, a statistically significant difference.
About 20% of the respondents indicated temporal changes in their birding behavior, including circadian changes. Circadian changes were mostly shifts toward earlier birding times in the morning or later in the evenings, while some respondents avoided "crowded times" or preferred bad weather with rain. --- Social Aspects in Birding Social aspects of the COVID-19 pandemic were reported in a variety of cases. The most important change was the shift to no group birding, but cancellations of field trips, events, and group outings with a bird club were also often reported. In addition, carpooling was reduced, as was equipment sharing, while people focused more on keeping their distance from others. Concerning social media, 1.3% of respondents explicitly mentioned eBird as a tool that helped structure their reporting or find less crowded hotspots nearby. In total, 2.3% of respondents reported more digital web activity, including online courses, Facebook, and listservs, among others. --- Change of Birding Activities and Content COVID-19 also had a severe impact on monitoring programs and breeding bird surveys. Respondents reported cancellations of monitoring schemes, surveys, or ringing/banding, and cancellations of bird tours and walks that they were supposed to lead or in which they participated. Additionally, the main content of birding activities changed throughout the crisis, with a stronger focus on bird behavior, improving one's identification skills, listening to nocturnal migration, and other activities, while the chasing of species and listing became less important. --- Role of Age We assessed whether answer categories differed by age. No age difference existed in the answer to the general question of whether COVID-19 had changed birding behavior. However, there were significant age differences in the variables increased solo birding or birding with a spouse, field trips cancelled, and holidays cancelled. For all three variables, participants mentioning these changes were older.
There was no difference concerning time for birding, the different categories of spatial change, or the aspect of cancelled surveys and monitoring plans. Table 4. Age comparison of different activities. A response indicating "No" means that the respondent did not mention this aspect in the open-ended question; "Yes" means it was explicitly mentioned. For a general non-age-stratified analysis, see Tables 1 and 2. SD is standard deviation. --- Role of Gender Men reported more often than women that COVID-19 did not change their birding activity. There were no statistically significant differences between men and women in the categories concerning spatial changes, holidays cancelled, or surveys cancelled. However, there were significant differences in temporal changes, avoidance of group birding, and field trips cancelled. --- Between-Country Comparison We found a general difference between countries in the percentage of people reporting that COVID-19 changed their birding behavior. The lowest percentages were reported in the Czech Republic, Denmark, Finland, Norway, Poland, and Sweden, while 96%-100% of the participants reported changes in France, Italy, Mexico, South Africa, Spain, and the UK. (Figure: Percentage of people responding with "Yes" to the question "Did COVID-19 change your birding behavior?" according to country. Country-based N is given in Supplementary Material File S4.) Spatial changes in birding behavior also differed between countries. In some countries, a shift toward yard birding occurred, which means a strong reduction in visited places and a restricted spatial scale of birding.
In other countries, spatial changes were mostly related to birding closer to home, e.g., in the Czech Republic, Denmark, Finland, Norway, Poland, and Sweden. These were also the countries where people mentioned being less affected in their birding behavior by COVID-19. There were also differences in changes of temporal activities. In some countries, time spent birding increased, while in others it decreased. However, on average, more people reported spending more time on birding than less. The change toward avoidance of groups also differed between the countries. A shift to avoiding group birding was primarily found in Canada, Denmark, Finland, Germany, the Netherlands, Sweden, and the US. In a correlational approach, we related the answers to the HDI. A higher HDI, and thus higher development, was positively correlated with spending more time birding, more group avoidance, more field trips cancelled, more holidays cancelled, and birding closer to home. Similar results were obtained when the SI was used.
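The correlational approach relating country-level answers to the HDI can be sketched as follows; the HDI values and percentages below are invented for illustration and are not the study's data.

```python
# Sketch of a Spearman rank correlation between country-level HDI and
# the percentage of respondents reporting birding closer to home.
# All values are invented, not the study's data.
from scipy.stats import spearmanr

hdi = [0.957, 0.947, 0.932, 0.904, 0.890, 0.779, 0.745]
pct_closer_to_home = [62, 58, 60, 51, 47, 35, 30]

rho, p_value = spearmanr(hdi, pct_closer_to_home)
```

Because Spearman's rho works on ranks, it captures the monotone association (higher development, more reported change) without assuming a linear relationship between HDI and the percentages.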
--- Discussion This study focused on an outdoor leisure activity and its relationship to the pandemic. There seem to be no studies on this topic yet, and we believe that the results are somewhat generalizable to other nature-related outdoor leisure activities, such as angling/fishing or hunting. We found strong dynamics in behavioral responses, i.e., spatial, temporal, circadian, and social effects, on birding activities during the COVID-19 pandemic. These effects also differed between countries. Our results indicated that COVID-19 influences the behavior of birders, either directly or indirectly via containment and closure policies. Most people reported that their birding behavior was affected by COVID-19. This was an expected result, because previous work showed a significant influence of the pandemic and governmental measures on human behavior [2-4,30,35]. Interestingly, people also gave reasons why their birding behavior had not changed. In some cases, this is intuitive: birders who avoid group birding may not experience social distancing as a restriction, and people living in the countryside with a good patch for birdwatching may not experience travel restrictions because they did not travel previously. Hypothesis 1 could only be partially confirmed. Spatial effects were found in most countries, with increased birding within yards or near surroundings.
Obvious reasons were governmental orders, travel restrictions, and stay-at-home orders or guidelines for the elderly, because COVID-19 can be especially harmful for them [49]. Our results also indicated that birders tried to find new, less crowded birding sites. By doing so, new high-quality birding sites may possibly be found. The changes in birding behavior due to the pandemic resulted in an increase in the utilization of green spaces in cities [35]. Hypothesis 2 could only be partially confirmed. Temporal effects were two-fold: some people pursued birding activities more, others less. Some respondents indicated that they went birding more often on weekdays during the COVID-19 pandemic than before, probably because they now had more time during the week and wanted to avoid weekend rushes at the most popular birding sites. If this holds in a larger context, the COVID-19 pandemic may increase the data quality of citizen science-based bird monitoring projects by decreasing the weekend bias in data quality and quantity. However, some Finnish respondents stated that, due to the local travel restrictions, they were unable to conduct their voluntary bird monitoring duties. We also found some circadian shifts in birding behavior. Some birders reported going birding earlier than before COVID-19. By doing so, they tried to avoid rush hours, because otherwise they had to queue to be able to visit a bird tower. The change in circadian rhythms can also influence data collection, amount, and quality, since bird activity is usually high at sunrise, and this may lead to a more accurate assessment of species richness and breeding pair numbers. However, more data are needed to generalize the effects of COVID-19 on birders' circadian behaviors. Hypothesis 3 dealt with the social behaviors of birders. Many birders reported that public birding events were cancelled due to the pandemic. In addition, birders normally prefer to go birding with their friends in small groups.
Due to COVID-19, they did not share their cars with anyone other than their own family members. Our results clearly indicated that solo birding or birding with a spouse increased heavily during the COVID-19 pandemic. Birding is a social activity: gaining respect from other birders, building friendships, and meeting people who share the same interests are important motivational factors for birding [13]. Therefore, the COVID-19 pandemic may have a negative influence on the well-being and social interactions of birders. Concerning Hypothesis 4, the effects of COVID-19 were related to age and gender. Older people seemed to be more affected than younger ones by decreased group birding, cancelled field trips, and cancelled holidays. We can only speculate about this, but older people may feel less safe when birding alone and are more at risk of infection [28]. Women, more than men, seem to be birders who need social contact and suffer from social distancing, because they more often complained about cancellations of field trips or organizational meetings. Furthermore, they more often mentioned increased group avoidance during the COVID-19 pandemic. In line with this, when analyzing initial involvement in birding, women more often started birding by participating in an organized walk or a birding club activity [16,22], suggesting that the social aspects of birding may have a stronger influence on women, which matches our results concerning the effects of cancelled trips and club activities. With regard to Hypothesis 5, we found a strong correlation between the HDI and our dependent variables. This indicates that the experienced changes in lockdown are related to economic development. Cancelled trips/holidays were mentioned more often by people from countries with a higher HDI [16,17]. Additionally, in more highly developed countries, spatial travel restrictions were mentioned more often by the respondents.
This may be because travelling to distant birding hotspots is more common there than in countries with a lower HDI, where birding trips are more likely to take place nearby [52]. Concerning temporal changes, people in countries with a high HDI experienced the effect of having more time, probably because their work shifted toward remote working at home. Field trips with birding clubs and group birding seem to be more affected in countries with a high HDI. These activities may be more common in those countries, and therefore their loss may be experienced more strongly [33]. The results concerning the stringency index were similar, with a higher stringency showing a stronger response. One limitation of our work is that it was a snapshot short-questionnaire study, intended to reach a high number of respondents. In general, our study population represents the birding community well: most respondents were middle-aged men. Other variables could have been added, such as current work, living habitat, years of carrying out the birding activity, or some recreation-specialization questions. However, recreational specialization levels of birders and demographic factors might be interrelated. Our results indicated that women reported cancellations of society meetings more often than men; this can also be related to gender differences in specialization level. However, a pandemic is an unpredictable phenomenon. Thus, it is crucial for research to react quickly and start surveys soon, because people do not remember facts correctly afterwards, a phenomenon known as hindsight bias in psychology [53]. We, therefore, believe that our study is valuable in terms of data collection during the pandemic. In addition, we used Excel and SPSS directly for coding the answers, rather than dedicated qualitative analysis software, which could be viewed as another limitation. --- Implications There are many implications of this study.
First, it shows that governmental decisions are always a trade-off between the highest possible protection from infection and keeping life ongoing. Long lockdowns with strict regulations may severely impact leisure activities that are part of human life, influencing, e.g., the physical and mental health of people. Additionally, social distancing and birding alone were mentioned often, indicating a negative influence on human social life and well-being. In practice, birders and other recreationists can change their circadian patterns to avoid crowding temporally. Concerning spatial changes, sites near home may experience a higher load of visitors, which can have management implications. Furthermore, a temporal and spatial shift in birding may influence the data quality in citizen science projects. --- Conclusions To our knowledge, this is the first study in which interactions between a large-scale pandemic and behavioral changes in a nature-based leisure activity were assessed. We conclude that the COVID-19 pandemic has severe impacts on birding content, birders' behavior, and social interactions, as well as on their contributions to citizen science projects. Nature-based recreation will be directed more toward nearby sites in the neighborhood; therefore, environmental management resources and actions need to be directed to sites that are located near the users, e.g., in urban and suburban areas. Author Contributions: C.R. and P.T. designed the general study; all authors were involved in developing the study design; C.R., P.T., J.J., and N.S. collected the data; C.R. performed the analyses; all authors contributed to the writing of the manuscript and to reviewing the final draft. All authors have read and agreed to the published version of the manuscript. --- Data Availability: The data that support the findings of this study are available from the corresponding author upon reasonable request.
The new coronavirus SARS-CoV-2, which causes the disease COVID-19, has led to a pandemic affecting public health. The fear and the constraints imposed to control the pandemic may correspondingly influence leisure activities, such as birding, the practice of observing birds based on visual and acoustic cues. Birders are people who carry out birding observations around the globe and contribute to massive data collection in citizen science projects. In contrast to earlier COVID-19 studies, which have concentrated on clinical, pathological, and virological topics, this study focused on the behavioral changes of birders. A total of 4484 questionnaire survey responses from 97 countries were received. The questionnaire had an open-ended style. About 85% of respondents reported that COVID-19 had changed their birding behavior. The most significant change in birdwatchers' behavior was related to the geographic coverage of birding activities, which became more local; people focused mostly on yard birding. In total, 12% of respondents (n = 542 cases) reported having more time for birding, whereas 8% (n = 356 cases) reported having less time for birding. Social interactions decreased, since respondents, especially older people, changed their birding behavior toward birding alone or with their spouse. Women reported more often than men that they changed to birding alone or with their spouse, and women also reported more often on cancelled field trips or society meetings. Respondents from more highly developed countries reported that they currently spend more time on birding, especially birding alone or with their spouse and birding at local hotspots. Our study suggests that long lockdowns with strict regulations may severely impact leisure activities. In addition, a temporal and spatial shift in birding due to the pandemic may influence data quality in citizen science projects.
As nature-based recreation will be directed more toward nearby sites, environmental management resources and actions need to be directed to sites that are located near the users, e.g., in urban and suburban areas. The results can be applied with caution to other nature-based recreational activities.
Findings: The results revealed non-native productions of English stops by the first-generation migrants, but largely target-like patterns by the remaining sets of participants. The Sylheti stops exhibited incremental changes across successive generations of speakers, with the third-generation children's productions showing the greatest influence from English. Originality: This is one of few studies to examine both the host and heritage language in an ethnic minority setting, and the first to demonstrate substantial differences in heritage language accent between age-matched second- and third-generation children. The study shows that current theories of bilingual speech learning do not go far enough in explaining how speech develops in heritage language settings. --- Implications: These findings have important implications for the maintenance, transmission, and long-term survival of heritage languages, and show that investigations need to go beyond second-generation speakers, in particular in communities that do not see a steady influx of new migrants. --- Introduction A growing body of research has shown that individuals raised in an ethnic minority setting develop different pronunciation patterns from the generation of their foreign-born parents. Accordingly, second-generation heritage speakers commonly exhibit non-native features in the heritage language and therefore tend to be perceived as foreign-accented in it. At the same time, they usually behave much like their monolingual peers in the host language, although this is not always the case, and heritage-language markers, such as retroflex realisations of English /t/, may be retained to fulfil socio-indexical functions. In contrast, first-generation migrants, in particular those who arrived in the host country as adults, tend to have a distinct foreign accent in their L2, whilst retaining a relatively authentic accent in their native language.
Little is known, however, about the speech development of subsequent generations of heritage language users. Do second- and third-generation children in language minority settings differ from each other in their pronunciation of the heritage language and the host language? If so, how do the differences manifest, and can they be explained on the basis of their parents' production patterns? The present study aims to address these questions by investigating the stop consonant productions of two sets of Bangladeshi heritage families: first-generation female migrants from the Sylhet area of Bangladesh and their UK-born children, and second-generation UK-born female Sylheti heritage language users and their children. In so doing, it aims to disentangle the effects of cross-linguistic, developmental, and sociocultural factors. --- Background There is a general consensus that the earlier one starts to learn a second language, the less foreign-accented it will be. Accordingly, individuals acquiring an L2 in adolescence or adulthood virtually always end up with some degree of foreign accent, while this is much less likely in those with early exposure to the language. A number of explanations have been given for these findings. Some have argued for a maturationally-defined critical period. However, contrary to these claims, there is evidence that native-like proficiency, while rare, is not impossible for late L2 learners. Moreover, the correlation between age of onset of learning and degree of foreign accent is linear, without any marked discontinuities. This has led many to abandon maturation-based accounts and instead to explain age effects on the basis of extra-linguistic factors, such as L1 and L2 usage patterns. Generally, heritage language users are at an advantage over L2 learners in terms of the accuracy of their pronunciation patterns. Chang et al.
, for example, showed that Mandarin heritage speakers in the United States consistently produced greater contrastivity in cross-linguistically similar back vowels, stops, and fricatives than native American L2 learners of Mandarin. Similarly, Kupisch et al. demonstrated that heritage language speakers were perceived to be significantly less foreign-accented in their minority language than L2 learners, although their accent in the majority language was more native-like than in the minority language. These patterns have been explained on the basis of differences in linguistic experience. While the L1 sound system of late L2 learners is fully in place when L2 learning starts, heritage language children usually have experience with the minority language from birth, or shortly thereafter, and with the majority language by the time they start compulsory education. They are often initially dominant in the minority language, in particular if the language is also widely used in the community. However, with the onset of mainstream education in the majority language, there is typically a shift in dominance, with the use of the minority language frequently becoming more restricted. McCarthy and McCarthy et al., for example, showed that Bangladeshi heritage children's perception and production of English /p b k g/ was heavily influenced by Sylheti during their first year in an English-speaking nursery, but was much more like that of their monolingual peers a year later. Nevertheless, early language exposure does not guarantee native-like accents in the heritage language. For example, Oh et al. showed that childhood speakers of Korean who had stopped using the heritage language upon school entry were foreign-accented in it. Similarly, Kupisch et al. reported that heritage language users in Germany, France, and Italy with exposure to both languages from birth were rated as foreign-accented in the minority language.
These patterns are also reflected in studies examining speech production. McCarthy et al., for example, showed that second-generation London Bengalis produced non-native VOT patterns in their minority language, and Nagy & Kochetov revealed incremental changes in the heritage language VOT patterns of successive generations of Russian and Ukrainian speakers in Toronto in the direction of English. Interestingly, Italian heritage speakers in the study did not show this pattern. The authors speculate that these differences may be a result of the greater community support offered to ethnic Italians than to Russians and Ukrainians in Toronto, including dedicated language classes. In contrast to the minority language, heritage language users are usually native-like in the host language. For example, Kupisch et al. found no difference in accentedness ratings between their monolingual speakers and the heritage language speakers in the host language. Likewise, the Gujarati heritage speakers in Evans et al. produced their English vowels much like their monolingual English peers, and the second-generation Bengali heritage speakers in McCarthy et al. did not differ from monolingual controls in their production of English vowels and VOT. Nevertheless, the host language is not always immune to non-native patterns. For example, in Darcy & Krüger's study, 10-year-old Turkish heritage children living in Germany whose first exposure to German was between 2 and 4 years of age were less accurate in the perception of some German vowel contrasts than monolingual German-speaking children. Similarly, Stangen et al. found highly variable patterns in their study on global foreign accent in Turkish-German heritage language users from Germany: the majority were perceived to have a foreign accent in either the host language or the heritage language, while some were foreign-accented in neither language, and others in both.
Where non-native forms occur in heritage language speakers, they may be a result of inadvertent cross-linguistic interactions. According to the Speech Learning Model (SLM), this happens when cross-linguistically similar L1 and L2 sounds are perceptually equated with each other, a phenomenon termed equivalence classification. One of the best-known examples of this phenomenon is the difficulty that Japanese learners face with the perception and production of English /l/ and /r/, which they tend to assimilate to their single Japanese category /r/. Alternatively, where bilinguals are able to perceive a difference between L1 and L2 categories, they may strive to increase cross-linguistic distinctiveness. For example, the early Italian-English bilinguals in Flege et al.'s study produced English /eɪ/ with exaggerated vowel-inherent spectral change to keep it maximally distinct from monophthongal Italian /e/. Both mechanisms may lead to patterns that differ from those produced by monolingual speakers. According to the SLM, the likelihood that cross-linguistically similar sounds are distinguished is greater in early than late bilinguals since the L1 sound system is less established in younger learners, and hence more amenable to reorganisation. This may explain why heritage language speakers tend to outperform L2 learners. Similar explanations are offered by other theories of L2 speech learning. The Perceptual Assimilation Model, for instance, predicts difficulties in L2 perception on the basis of the assimilability of non-native contrasts to native categories. In addition to cross-linguistic interactions, non-native forms in heritage language settings may arise from socio-cultural factors and form part of contact varieties.
Kirkham , for example, argued that British Asians from Sheffield used retroflex realisations of English /t/ to signal their Asian identity, rather than inadvertently as a result of cross-linguistic interactions, since the use of these forms could not be predicted on the basis of their language use patterns, with even monolingual English speakers from the community using them. Sharma & Sankaran , in turn, examined the acquisition of a native feature, /t/ glottaling, and a non-native feature, /t/ retroflexion, in British Asians from London. They found that younger second-generation speakers used /t/ retroflexion in English to signal their Asian identity, while older second-generation speakers followed first-generation speakers' non-native use of /t/ retroflexion, but unlike them, used /t/ glottaling natively. The authors argue that these patterns are consistent with a socially oriented model that allows for incremental changes to take place, rather than a cognitively oriented one which claims that non-native forms are either innately blocked by an accent filter , or reused by native speakers to mark their identity. --- The present study This study investigated stop consonant production in Sylheti-English bilingual children and adults from Bangladeshi heritage backgrounds in Cardiff, South Wales, and as such is the first to examine the speech of ethnic minorities in Wales. Compared with the London Bengali communities in Tower Hamlets, where 30% of the population are of Bangladeshi origin, and Camden, where they constitute the largest minority ethnic group , the Bengali community in Cardiff is relatively small. In the 2011 Census, some 0.3% of the population of Wales considered themselves British Bangladeshis, with 5207 individuals indicating Sylheti as their main home language . About half of these live in Cardiff, in particular in the areas of Riverside and Grangetown. 
These communities have a close-knit social structure, including shops, restaurants and community centres, but, unlike those in Tower Hamlets and Camden, do not witness a steady influx of new arrivals from Bangladesh. Of the approximately 500,000 British Bengalis, some 95% originate from the rural area of Sylhet in north-eastern Bangladesh, where Sylheti is spoken. Sylheti is typologically related to Standard Bengali (SB), but the two languages are not mutually intelligible. While native speakers of Sylheti, including first-generation migrants, are largely competent in SB, the language of education, this is not the case for most UK-born heritage speakers. On the whole, Sylheti has a less complex phonological system than SB, with fewer consonant and vowel categories. Hence, while SB contains sixteen stop categories that systematically contrast in voicing and breathiness, Sylheti only contains nine. These include the voiced breathy stops /bʱ/ and /gʱ/, the voiced non-breathy stops /b d̪ ɖ g/ and the voiceless stops /t̪ ʈ k/. A small number of acoustic studies have been carried out on Sylheti stops. Gope and Mahanta examined voiced stop productions with and without underlying breathiness by adult native speakers of Sylheti from India. They found no differences in VOT as a function of breathiness, with all categories realised with a voicing lead. McCarthy et al. revealed similar patterns for first-generation Bangladeshis who arrived in the UK in their late teens or in adulthood, and for their native Sylheti control speakers; in contrast, early arrivals and second-generation heritage speakers produced Sylheti voiced stops with significantly longer VOT values and less prevoicing, and hence in a more English-like manner. Voiceless stops, in turn, were produced within the short-lag range by all speaker groups. This contrasts with other varieties, such as Dhaka Bengali, where /ʈʰ/ and /t̪ʰ/ are realised with long-lag VOT values, with mean values between 50 and 100 ms.
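The VOT regimes referred to here and throughout, lead voice (negative VOT), short-lag, and long-lag, can be illustrated with a small helper that derives a signed VOT value from annotated burst and voicing-onset times. This is an illustrative sketch, not code from the study; the event times and the 30 ms short-/long-lag cut-off are assumptions for demonstration only.

```python
# Illustrative sketch (not the study's measurement script): signed VOT from
# annotated event times, plus a rough three-way classification.

def vot_ms(burst_time, voicing_onset_time):
    """Signed VOT in milliseconds: positive when voicing follows the release
    burst (short-/long-lag), negative when voicing precedes it (lead voice)."""
    return (voicing_onset_time - burst_time) * 1000.0

def classify(vot):
    """Conventional, language-dependent cut-off of 30 ms; illustrative only."""
    if vot < 0:
        return "lead voice"
    return "short-lag" if vot < 30 else "long-lag"

# Hypothetical annotations (in seconds) for three tokens
lag_token = vot_ms(burst_time=0.100, voicing_onset_time=0.165)    # aspirated stop
short_token = vot_ms(burst_time=0.100, voicing_onset_time=0.112)  # unaspirated stop
prevoiced = vot_ms(burst_time=0.100, voicing_onset_time=0.020)    # prevoiced stop
```

On this convention, the Dhaka Bengali values cited above (50 to 100 ms) fall squarely in the long-lag range, while the lead-voice realisations reported for Sylheti voiced stops come out negative.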
No previous work on children's acquisition of Sylheti stops is available. Studies from other languages suggest that monolingual children acquire the stop voicing contrast earlier in languages that distinguish short-lag and long-lag VOT categories, like English, than in languages with a distinction between lead voice and short-lag VOTs, like Sylheti. Indeed, the acquisition of lead voicing seems to be a particularly protracted process, perhaps due to its aerodynamic challenges, with children as old as 7;0 struggling to use it consistently. Studies on bilingual and multilingual children, in turn, have shown cross-linguistic interactions. For example, the Dutch-English bilingual child studied by Simon realised Dutch /p/ and /t/ with long-lag VOT values, instead of target short-lag ones, after extensive exposure to English. Similarly, Heselwood & McChrystal's study revealed greater use of prevoicing in English voiced stops produced by Punjabi-English bilingual children than by their monolingual English peers. The only study to examine stop consonants in Bangladeshi heritage children from a Sylheti-speaking community is McCarthy et al. This study revealed changes during the first year of school in the children's production and perception of English /p b k g/ in the direction of their monolingual peers' patterns. No data on the children's Sylheti stops were collected, however. The purpose of the present study was to extend existing work on the speech of Bangladeshi heritage speakers in the UK by investigating the production of stop consonants in Sylheti and English by second- and third-generation children and their mothers. --- Method --- Materials This study aimed to assess all stop consonant categories that occur word-initially in Sylheti and English. Table 2 depicts the materials used in the study. They include monosyllabic and bisyllabic words starting with a singleton bilabial, coronal or velar stop in the onset.
Words were chosen with which young children and adults were expected to be familiar, and which could be elicited via pictorial representation. The English dataset comprised the categories /p b t d k g/, the Sylheti dataset the categories /p pʰ b bʱ t̪ ʈ d̪ ɖ k kʰ g gʱ/. Note that the latter included three categories that previous research had shown to be realised as fricatives by Sylheti speakers, but that historically constitute stops, i.e. /p/, /pʰ/ and /kʰ/. --- Procedure Data collection took place in a quiet room in the participants' homes. Each participant was recorded twice, once in a Sylheti session, and once in an English one, with the two sessions separated by several days. This procedure was adopted to minimise the likelihood of dual language activation. Recordings were made using a Zoom H2 Handy Recorder with integrated condenser microphone, which was positioned a few centimetres from the participant's mouth. Each session commenced with a brief conversation in the target language with the experimenter, a UK-born Sylheti-English bilingual. This was followed by a picture-naming task that aimed to elicit three instances of each target word produced at a natural pace in a carrier phrase. This procedure yielded 3 × 6 = 18 tokens of the English stops and 3 × 12 = 36 tokens of the Sylheti stops from each participant, giving a total of 1674 tokens. No formal assessment of the children's lexical knowledge was carried out, but almost all items could be elicited spontaneously. In the few instances where this was not possible, semantic prompts were given, and if these were unsuccessful, the target words were modelled by the experimenter. No attempts were made to elicit stop consonants in isolation. --- Analysis Many studies have examined stop consonants acoustically.
While temporal measures, such as voice onset time (VOT), allow for direct comparisons between child and adult participants, this is not the case for spectral measures that aim to assess differences in place of articulation, as they vary with vocal tract size. Moreover, the relation between acoustic properties for place of articulation and breathiness, and their articulatory and perceptual correlates, is complex. For example, differences in spectral shape may be due, not to differences in place of articulation, but to variations in the degree of damping of the active articulator. For these reasons, we opted for a two-way approach in the present study. First, the materials were analysed auditorily. This involved all target words being transcribed in broad phonetic transcription by a phonetically-trained Sylheti-English bilingual, using the symbols of the International Phonetic Association. This analysis focused on establishing the place of articulation of each stop production. This was particularly critical for coronal stops. In addition, it assessed the presence or absence of breathiness in voiced stops. Only tokens that conformed to the reported adult forms in native Sylheti and English stops were classified as target-like. As a measure of reliability, the entire dataset was independently reanalysed by a second phonetically-trained researcher with no prior knowledge of Sylheti or related languages. Cohen's κ was run to determine if there was agreement between the two sets of transcriptions. The results revealed substantial agreement (p < .0005), based on Landis & Koch's classification. Any differences in the two sets of transcriptions were resolved by consensus. Uncertainty remained on one token of English /g/ and one token each of Sylheti /bʱ/, /t̪/, /ɖ/ and /g/. These tokens were removed from further analysis. To assess voicing, we analysed the participants' VOT patterns acoustically, using Praat software.
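The inter-transcriber reliability check described above, Cohen's κ interpreted against Landis & Koch's agreement bands, can be sketched as follows. This is an illustrative computation, not the authors' analysis script, and the transcription labels are invented for demonstration.

```python
# Illustrative sketch (not the study's script): Cohen's kappa for two
# transcribers labelling the same stop tokens. Labels below are invented.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical broad transcriptions of ten coronal-stop tokens
rater1 = ["t̪", "t̪", "ʈ", "t", "d̪", "ɖ", "t̪", "ʈ", "d", "d̪"]
rater2 = ["t̪", "t",  "ʈ", "t", "d̪", "ɖ", "t̪", "t", "d", "d̪"]
kappa = cohens_kappa(rater1, rater2)
```

On Landis & Koch's classification, values between .61 and .80 count as "substantial" agreement, the band the study reports.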
Measurements were taken from the release burst of each token, signalled by a sharp peak in waveform energy, to the onset of voicing of the following vowel, as marked by the zero crossing of the first glottal pulse for modal voicing . Tokens that displayed more than one transient were measured from the first visible release burst. If voicing occurred during the closure period, VOT was measured from the point at which vocal fold vibration could be discerned in the waveform, together with aperiodic wide-band energy in the spectrograms, up to the first release burst (cf. Figure 1c). The onset of lead voicing was established visually. Tokens where this could not be determined clearly were excluded from the VOT analysis, as were tokens without a visible release burst. In total, 28 Sylheti tokens and 22 English tokens were excluded. --- Results In line with previous studies , we found that Sylheti /p/ and /pʰ/ were realised as [f], and Sylheti /kʰ/ as [x] in virtually all instances. As a result, these categories were not analysed further. All other categories in Sylheti and English were realised as stops. In what follows, the results are organised in three parts according to place of articulation . Each part is further divided by language, first presenting intra-linguistic comparisons for Sylheti and English stops and then a crosslinguistic comparison. The auditory and acoustic results are integrated within each section. To determine differences between the groups and stops, linear mixed-effects models were run separately in R for the auditory and acoustic data, and for each place of articulation, using all analysed tokens. In each model, stop and group were entered as fixed factors and speaker as a random factor with random slopes for stop. Note that stop and group were coded around zero. This made it possible to interpret the fixed factors as main effects. 
Using the lmerTest package in R, degrees of freedom were obtained via the Satterthwaite approximation, with which p-values could be generated. --- Bilabial stops --- Sylheti Figure 2a depicts the percentage of bilabial Sylheti stops that were produced accurately in terms of place of articulation and breathiness, as assessed in the auditory analysis. The results show that the GEN 1 MUMS managed to produce /b/ and /bʱ/ entirely accurately in terms of these dimensions, while the GEN 3 CHILDREN had the lowest accuracy score overall, with all non-target-like tokens of /bʱ/ realised as [b], lacking breathiness. To determine whether the between-group differences are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 4, revealed no significant main effect of stop, but a significant main effect of group and a significant group*stop interaction. To examine these results further, we compared each of the groups with each other in separate regression models, run separately for /b/ and /bʱ/, with group as fixed factor and speaker as random factor. The α-level was adjusted to .025, using the Holm-Bonferroni method. Only one of the models revealed a significant effect of group, with the GEN 2 CHILDREN outperforming the GEN 3 CHILDREN on /bʱ/. To determine whether the between-group differences in VOT are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 4, revealed no significant main effect of stop and no significant group*stop interaction. However, it did find a significant main effect of group.
To examine this effect further, we compared each of the groups with each other in separate regression models, run separately for /b/ and /bʱ/, with group as fixed factor and speaker as random factor. The α-level was adjusted to .008. The results revealed significantly longer VOT values for the GEN 3 CHILDREN on both Sylheti stops than the GEN 1 MUMS and the GEN 2 MUMS. No differences were observed between the adult participants and the GEN 2 CHILDREN. --- English The auditory analysis revealed that English /b/ and /p/ were consistently produced at the correct place of articulation. Moreover, there were no breathy tokens of English /b/. However, an analysis of the participants' VOT patterns showed differences in voicing across the groups. Accordingly, the GEN 1 MUMS prevoiced 73% of their English /b/ productions, while the GEN 2 MUMS only prevoiced 35% and the GEN 2 CHILDREN only 18%. As in Sylheti, the GEN 3 CHILDREN produced no prevoiced tokens at all, instead realising English /b/ with short-lag VOT values throughout. Figure 3: VOT distributions for English /p/ and /b/. To determine whether the between-group differences are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 4, revealed significant main effects of group and stop, but no significant group*stop interaction. To examine this effect further, we compared each of the groups with each other in separate regression models, run separately for /p/ and /b/, with group as fixed factor and speaker as random factor. The α-level was adjusted to .01. The results revealed significantly longer VOT values for the GEN 3 CHILDREN on English /b/ than the GEN 1 MUMS. No other between-group differences were significant.
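The α-level adjustments reported for the pairwise comparisons (.025, .01, .008, and so on) follow the Holm-Bonferroni method; the single values quoted correspond to the most stringent first step of the procedure, α divided by the number of comparisons. A minimal sketch of the full step-down logic, with hypothetical p-values rather than the study's own, is:

```python
# Illustrative sketch (not the authors' code): Holm-Bonferroni step-down
# procedure over a family of pairwise comparisons.

def holm_bonferroni(p_values, alpha=0.05):
    """Reject/retain decision per p-value: sorted p-values are compared
    against alpha/m, alpha/(m-1), ..., alpha/1, stopping at the first
    failure; all larger p-values are then retained."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # every remaining (larger) p-value is retained too
    return reject

# Hypothetical p-values from six pairwise group comparisons
pvals = [0.001, 0.030, 0.004, 0.200, 0.012, 0.049]
decisions = holm_bonferroni(pvals)
```

Because later steps use progressively laxer thresholds, Holm's procedure rejects at least as many hypotheses as a plain Bonferroni correction at α/m while still controlling the family-wise error rate.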
--- Cross-linguistic comparison To determine whether the participants produced bilabial stops differently in Sylheti and English, two linear mixed-effects models were run, one on the percent correct scores, and one on VOT. Both models had stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 4, revealed significant main effects of group for both models, as well as a significant effect of stop for the VOT model, and a significant group*stop interaction for the percent correct model. To examine these effects further, we compared each combination of stops across the two languages for each group in separate regression models with stop as fixed factor and speaker as random factor. The α-level was adjusted to .017 for the percent correct scores and .006 for VOT. The results on the percent correct scores revealed significantly greater accuracy on English /b/ than Sylheti /b/ for the GEN 2 CHILDREN, and significantly greater accuracy on English /b/ than Sylheti /bʱ/ for the GEN 3 CHILDREN. Moreover, with respect to VOT, all four groups exhibited significantly longer VOT values for English /p/ than Sylheti /b/ and /bʱ/. There were no significant differences in VOT between Sylheti /b/ and /bʱ/, and English /b/. --- Coronal stops --- Sylheti Figure 4a shows the percentage with which the Sylheti coronal stops /t̪ ʈ d̪ ɖ/ were produced at the correct place of articulation. The GEN 1 MUMS were the most accurate, while performance by the other groups was variable, resulting in lower accuracy rates, in particular for the GEN 3 CHILDREN. The majority of errors involved realising dental and retroflex stops as alveolars. However, confusion between retroflex and dental categories was also common, accounting for 16% of errors overall. See Table 3 for further details.
To determine whether the between-group differences on Sylheti coronal stops are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 6, revealed a significant main effect of group, but no significant main effect of stop and no significant group*stop interaction. To examine these results further, we compared each of the groups with each other in separate regression models, run separately for each of the stops, with group as fixed factor and speaker as random factor. The α-level was adjusted to .01. The results revealed significantly greater accuracy on /t̪/ for the GEN 1 MUMS than the GEN 3 CHILDREN. No other between-group differences reached significance. As noted above, native Sylheti norms suggest consistent prevoicing in voiced stops. The two groups of adults in the present study broadly followed this pattern, with the GEN 1 MUMS prevoicing 66% of their voiced coronal stops, and the GEN 2 MUMS 61%. In contrast, the GEN 2 CHILDREN only prevoiced 16% of their /d̪/ and /ɖ/ productions, and the GEN 3 CHILDREN fewer than 2%. The voiceless coronal stops, in turn, were realised within the long-lag VOT range by all groups. To determine whether the between-group differences in VOT for Sylheti coronal stops are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 6, revealed significant main effects of group and stop, but no significant group*stop interaction. To examine these results further, we compared each of the groups with each other in separate regression models, run separately for each of the stops, with group as fixed factor and speaker as random factor. The α-level was adjusted to .004.
The results showed that the GEN 3 CHILDREN produced /t̪/, /d̪/ and /ɖ/ with significantly longer VOT values than the GEN 1 MUMS, and /d̪/ with significantly longer VOT values than the GEN 2 MUMS. The GEN 2 CHILDREN produced Sylheti /ɖ/ with significantly longer VOT values than the GEN 1 MUMS. --- English Figure 5a shows the percentage of correct productions of English /t/ and /d/. Inspection of the figure shows that the child participants and the GEN 2 MUMS exhibited high degrees of accuracy on these categories. In contrast, the GEN 1 MUMS largely produced them inaccurately. An examination of their error patterns revealed that all non-target-like tokens of /t/ were realised as [t̪] and all non-target-like tokens of /d/ as [d̪]. These differences were tested in a linear mixed-effects model with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 6, revealed a significant main effect of group, but no significant main effect of stop and no significant group*stop interaction. To examine these results further, we compared each of the groups with each other in separate regression models, run separately for English /t/ and /d/, with group as fixed factor and speaker as random factor. The α-level was adjusted to .004. The results revealed that the GEN 1 MUMS were significantly less accurate on English /t/ and /d/ than the GEN 2 CHILDREN, the GEN 2 MUMS and the GEN 3 CHILDREN. Figure 5b depicts the VOT patterns for English /t/ and /d/ across the groups. Inspection of the figure shows similar patterns for English /t/, with realisations in the long-lag VOT range throughout.
In contrast, the VOT patterns for English /d/ show stark differences across the groups: the GEN 1 MUMS mainly realised this category with a voicing lead ; in contrast, the GEN 2 MUMS only exhibited prevoicing in 35% of instances and the GEN 2 CHILDREN in 7% of instances , while the GEN 3 CHILDREN did not prevoice any of their English /d/ tokens. To determine whether these differences are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 6, revealed significant main effects of group and stop, but no significant group*stop interaction. To examine these results further, we compared each of the groups with each other in separate regression models, run separately for each of the stops, with group as fixed factor and speaker as random factor. The α-level was adjusted to .01. The results showed that the GEN 3 CHILDREN produced English /d/ with significantly longer VOT values than the GEN 1 MUMS and the GEN 2 MUMS . The GEN 2 CHILDREN also produced /d/ with significantly longer VOT values than the GEN 1 MUMS . No significant differences were observed for /t/. --- Cross-linguistic comparison To determine whether the participants produced coronal stops differently in Sylheti and English, two linear mixed-effects models were run, one on the percent correct scores, and one on VOT. Both models had stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 6, revealed a significant main effect of stop and a significant group*stop interaction for the percent correct model, as well as significant main effects of group and stop for the VOT model. To examine these effects further, we compared each combination of stops across the two languages in separate regression models, run separately for each group, with stop as fixed factor and speaker as random factor. 
The α-level for the percent correct scores was adjusted to .002, and for VOT to .01. The results showed that the GEN 1 MUMS had significantly higher percent correct scores on Sylheti /t̪/, /ʈ/, /d̪/ and /ɖ/ than English /t/ and /d/, while the GEN 2 CHILDREN, the GEN 2 MUMS and the GEN 3 CHILDREN showed the reverse pattern, with significantly higher percent correct scores for English coronal stops than Sylheti ones. There were only two exceptions to this pattern: the GEN 2 CHILDREN did not differ significantly in their accuracy of Sylheti /ʈ/ and English /d/, and of Sylheti /d̪/ and English /d/. The results for VOT, in turn, showed that the GEN 2 CHILDREN had significantly longer VOT values on English /t/ than Sylheti /t̪/ and /ʈ/. Similarly, the GEN 2 MUMS had significantly longer VOT values on English /t/ than Sylheti /t̪/ and /ʈ/. No other cross-linguistic differences were observed. --- Velar stops --- Sylheti To determine whether the between-group differences in accuracy on Sylheti velar stops are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 7, revealed significant main effects of group and stop and a significant group*stop interaction. However, further regression models, run separately for each stop with group as fixed factor and speaker as random factor, and an adjusted α-level of .025, revealed no significant between-group differences. The VOT data show similar values for /k/ across the groups, but differences in the degree of prevoicing in /g/ and /gʱ/. The GEN 1 MUMS realised 73% of their voiced velar stops with a voicing lead and the GEN 2 MUMS 52%. In contrast, the GEN 2 CHILDREN only prevoiced 18% of their voiced velar stops, and the GEN 3 CHILDREN fewer than 2%, instead realising Sylheti /g/ and /gʱ/ within the short-lag VOT range. To determine whether the between-group differences in VOT for Sylheti velar stops are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop.
The results, displayed in Table 7, revealed significant main effects of group and stop, but no significant group*stop interaction. To examine these results further, we compared each of the groups with each other in separate regression models, run separately for each of the stops, with group as fixed factor and speaker as random factor. The α-level was adjusted to .005. The results showed that the GEN 3 CHILDREN produced Sylheti /g/ and /gʱ/ with significantly longer VOT values than the GEN 1 MUMS , and the GEN 2 with significantly longer VOT values than the GEN 1 MUMS . --- English All tokens of English /k/ were produced at the correct place of articulation, and only four tokens of English /g/ were not target-like. Moreover, there were no breathy tokens of English /g/ . However, an analysis of the participants' VOT patterns showed differences in voicing across the groups . The adult participants exhibited substantial prevoicing of English /g/ , while the child participants produced all their tokens within the short-lag VOT range. English /k/, on the other hand, was consistently produced within the long-lag VOT range by all groups. To determine whether the VOT differences are significant, a linear mixed-effects model was run with stop and group as fixed factors and speaker as a random factor with random slopes for stop. The results, displayed in Table 7, revealed a significant main effect of stop, but no significant main effect of group and no significant group*stop interaction, suggesting that the English velar stops were produced in much the same way by each of the groups. --- Cross-linguistic analysis To determine whether the participants produced velar stops differently in Sylheti and English, two linear mixed-effects models were run, one on the percent correct scores, and one on VOT. Both models had stop and group as fixed factors and speaker as a random factor with random slopes for stop. 
The results, displayed in Table 7, revealed a significant main effect of stop for the percent correct and VOT models, and a significant main effect of group for the VOT model. To examine these effects further, we compared each combination of stops across the two languages in separate regression models, run separately for each group, with stop as fixed factor and speaker as random factor. The α-level was adjusted to .025 for the percent correct scores and .007 for VOT. The results for the percent correct model revealed significantly greater accuracy on English /g/ than Sylheti /gʱ/ for the GEN 2 MUMS and the GEN 3 CHILDREN. With respect to VOT, all four groups produced English /k/ with significantly longer VOT values than Sylheti /k/. Moreover, the GEN 2 CHILDREN produced English /g/ with significantly longer VOT values than Sylheti /g/. --- Discussion The purpose of this study was to gain a better understanding of cross-generational transmission in heritage language settings. To this end, we examined the Sylheti and English stop consonant productions of two sets of Bangladeshi heritage families: first-generation migrants from the Sylhet area of Bangladesh who arrived in the UK in adulthood and their UK-born children, and second-generation UK-born adults and their children. The results revealed significant differences in both the host language and the heritage language across the generations, and between the child and adult participants. In what follows, the adult and child participants' acquisition patterns will be discussed, followed by an examination of socio-cultural factors. Finally, we will consider the implications of our findings for the maintenance and transmission of heritage languages. --- Acquisition patterns: adults To begin with, an investigation of the GEN 1 MUMS' L2 English stops revealed a number of non-native patterns. For example, English /d/ was commonly realised as [d̪] and English /t/ as [t̪].
The GEN 1 MUMS also predominantly produced /b d g/ with a voicing lead, rather than with short-lag VOT values. While the use of lead voicing in English is not non-native per se, its occurrence tends to be marginal. In Docherty, for example, it accounted for 7% of voiced stops. In contrast, the GEN 1 MUMS produced 27/40 tokens, i.e. 68%, with a voicing lead. These patterns conform to those found in McCarthy et al.'s study and suggest an influence of the participants' native language, a finding that is expected in L2 learners. Note, however, that the GEN 1 MUMS also showed evidence of successful acquisition. For example, they realised English /p t k/ within the long-lag VOT range, and made a clear cross-linguistic distinction between Sylheti and English /k/. The GEN 1 MUMS' Sylheti stop productions, in turn, largely conformed to those of native Sylheti speakers in Asia and recent arrivals in the UK. Accordingly, their /bʱ/ and /gʱ/ were consistently realised as voiced breathy stops, and their coronal stops appropriately as dentals and retroflexes. Moreover, they produced the majority of their voiced stops with a target-like voicing lead. Interestingly, however, there are some indications that their Sylheti stop productions may not be entirely native-like. For example, unlike in McCarthy et al.'s study, in which first-generation Sylheti speakers and native controls produced Sylheti voiceless stops with short-lag VOT values, the GEN 1 MUMS' productions were much longer. To some extent, methodological differences between the two studies can explain these patterns: in McCarthy et al. the target words were embedded in the middle of a carrier sentence, while they occurred at the beginning of a carrier sentence in the present study. This may have made it more likely for participants to treat them like items in citation form, resulting in longer VOT values. However, their mean values for Sylheti /t̪/ and /ʈ/ are much too long for this to be the only credible explanation.
Moreover, despite predominantly prevoiced realisations of Sylheti voiced stops, the GEN 1 MUMS produced a fair number of tokens within the short-lag VOT range, and hence patterned differently from native control speakers. These patterns indicate that not only their L2 stops but also some of their L1 categories were non-native, suggesting bi-directional interactions. While current theories of bilingual speech learning, such as the SLM, take account of these effects, they cannot explain why L2-to-L1 transfer only affected the first-generation migrants in the present study, but not those in McCarthy et al. A possible explanation might be differences in social structure across the two communities studied: first-generation migrants in Tower Hamlets are regularly exposed to native Sylheti speech by new arrivals, reinforcing homeland norms, while there is virtually no influx of new arrivals from Bangladesh in the Cardiff community. However, social variables of this kind do not currently form part of formal bilingual speech learning models. The GEN 2 MUMS, in turn, produced all English stops at the correct place of articulation, including the coronals, and were hence more accurate than the GEN 1 MUMS. Note, however, that they realised 40% of their voiced stops with a voicing lead, which suggests a subtle influence of their heritage language. As such, the findings obtained here add to the few existing studies that have shown non-native patterns in the host language of second-generation speakers. In these studies, non-native speech was associated with continued high use of the heritage language. In contrast, the GEN 2 MUMS had relatively low use of Sylheti in the home and community, and clearly considered themselves dominant in English.
This suggests that high L1 use may not be an absolute prerequisite for non-native patterns in the host language, in particular in subtle areas of pronunciation, such as prevoicing, which has limited perceptual salience and can occur in native English speech, albeit in smaller proportions. The GEN 2 MUMS' Sylheti stop productions also showed evidence of successful acquisition. For example, they exhibited mean prevoicing values for their voiced stops similar to those of the GEN 1 MUMS. In contrast to the latter, however, they commonly realised Sylheti coronals as alveolars, and produced some tokens of /gʱ/ as [g]. Differences of this kind between first-generation migrants and second-generation heritage language users are well attested, and have been explained in a number of ways. Chambers claimed that second-generation speakers have an innate accent filter that blocks non-native features in the host language. However, this claim is undermined by evidence that non-native features do occur in the host language, and has been largely discredited by Sharma & Sankaran's work. A more plausible explanation is a socially oriented approach according to which an individual's speech patterns are the result of "network, demographic and intergroup forces". While these factors have not been investigated in a detailed ethnographic study here, there are clear differences in linguistic experience and language use across the generations. Accordingly, the GEN 1 MUMS spent their formative years in Bangladesh, live in close-knit communities with many other Sylheti speakers, and use Sylheti as the main language in the home on a daily basis. In contrast, the GEN 2 MUMS have either never been to Bangladesh, or only spent short periods of time there to visit family members. They live in areas of Cardiff that are ethnically heterogeneous with few opportunities to use Sylheti, and they predominantly use English in the home.
--- Acquisition patterns: children
Both sets of children produced the English stops accurately, with /t/ and /d/ consistently realised at the alveolar place of articulation, and voiceless stops with long-lag VOT values. Interestingly, the GEN 2 CHILDREN prevoiced some of their English /b/ and /d/ productions, while the GEN 3 CHILDREN produced all their English voiced stops within the short-lag VOT range. Since the extent of prevoicing conforms closely to that reported in previous work on English monolinguals, the GEN 2 CHILDREN's patterns do not suggest cross-linguistic interactions. Their Sylheti stop productions, in contrast, showed substantial differences. For example, the GEN 2 CHILDREN were entirely accurate in their production of /bʱ/ and /gʱ/, while the GEN 3 CHILDREN commonly produced these categories without breathiness. The GEN 2 CHILDREN also produced more target-like coronal stops than the GEN 3 CHILDREN. Finally, the two sets of children differed in their voicing patterns. Specifically, although both groups predominantly realised Sylheti voiced stops with short-lag VOT values, the GEN 2 CHILDREN exhibited a moderate level of prevoicing, while it was virtually absent in the GEN 3 CHILDREN. How can these patterns be explained? To begin with, developmental factors could be at work. Indeed, it has been shown that prevoicing is acquired late in monolingual and bilingual development since it has limited perceptual salience and is articulatorily complex. Hence the children's lesser degree of prevoicing compared with that of the adults points to a developmental explanation. However, the prevoicing patterns observed cannot solely be explained in this way. After all, if that were the case, the two sets of children should have exhibited similar patterns, considering they were matched in age.
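The three voicing categories discussed throughout this section can be operationalised as regions on the VOT continuum. The following is a hypothetical sketch only: the 30 ms boundary between short-lag and long-lag realisations is a common convention in the VOT literature, not a cut-off taken from this study.

```python
# Hypothetical classifier for the three VOT categories discussed in
# the text. The 30 ms short-lag/long-lag boundary is an assumed
# conventional value, not one reported by the study itself.

def vot_category(vot_ms, boundary_ms=30.0):
    """Map a VOT value in milliseconds to a voicing category."""
    if vot_ms < 0:
        return "voicing lead"   # prevoiced: voicing starts before the release
    if vot_ms <= boundary_ms:
        return "short lag"      # voicing starts shortly after the release
    return "long lag"           # aspirated: long delay before voicing onset

print(vot_category(-85.0))  # voicing lead
print(vot_category(15.0))   # short lag
print(vot_category(70.0))   # long lag
```

Under this scheme, a negative VOT corresponds to the prevoiced (voicing lead) realisations described for the adults, while the children's productions fall mostly in the short-lag and long-lag regions.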
The GEN 3 CHILDREN's virtual absence of prevoicing coupled with a number of other English-like patterns in their Sylheti stops suggests that other factors are at work, as well. The most likely factor is linguistic experience. Specifically, the GEN 2 CHILDREN are growing up in a home where Sylheti is the dominant language and they live in an area that is densely populated with other Sylheti speakers. In contrast, the GEN 3 CHILDREN mainly hear English in their homes and there is substantially less Sylheti spoken in their immediate environment as they live in an ethnically more heterogeneous area. Finally, the differences between the two sets of children may be related to the input they receive. While input was not assessed directly in this study, based on an analysis of the children's mothers' productions, the GEN 2 CHILDREN may largely hear target-like productions from their mothers, while the input that the GEN 3 CHILDREN receive is likely to include a number of non-native features. The latter may not be significant in contexts where there is sufficient native-like input from other speakers. However, in contexts of reduced input, as in the present case, non-native patterns may be influential for the next generation of speakers. Mayr & Montanari, for instance, showed that multilingual children who only receive input in one of their languages from a single speaker are highly responsive to that speaker's patterns and home in on speaker-specific phonetic information. Hence, since the GEN 3 CHILDREN have restricted exposure to Sylheti, their mothers' non-native productions may be partly responsible for their own non-native realisations in the heritage language. It is important to note that the differences between the two sets of children cannot be explained on the basis of current models of bilingual speech learning. The PAM-L2 does not take any social variables into account, and the only one that is formalised in the SLM is age of learning (Note 4).
However, the GEN 2 CHILDREN and the GEN 3 CHILDREN did not differ on this variable as they had both been exposed to Sylheti from birth.
--- Socio-cultural factors
As reviewed in the introduction, a growing body of research has shown that heritage language features may occur in the host language to fulfil socio-indexical functions. For example, Kirkham showed that British Asians used retroflex realisations of /t/ in English to signal their Asian identity. In the present study, only the GEN 1 MUMS used clearly non-native forms in their English. It is uncertain whether these forms have only arisen from inadvertent interaction between the L1 and L2 sound systems, in line with previous research on L2 learners, or whether they have also been mediated by social factors. But what about the other participant groups? How can the absence of heritage language features in their productions be explained? To begin with, such features have been shown to emerge for the first time in adolescence and young adulthood, the most critical developmental periods for identity formation. The absence of heritage language features in the speech of the 3- to 5-year-old children is hence not surprising. It is less obvious, however, why the GEN 2 MUMS showed no evidence of heritage language forms in their English. One possibility is that they do use them, but only in informal contexts. Since the present study only assessed their productions in a formal experimental setting, this possibility cannot be addressed by the data gathered here. Alternatively, the use of these forms may be related to socio-economic status (SES). Indeed, anecdotal evidence from members of the community suggests that heritage language forms, in particular retroflex realisations of /t/ and /d/, may be associated with low levels of education and SES. The GEN 2 MUMS, however, were all well-educated with the majority holding university degrees and employed in professional posts.
Nevertheless, in the absence of detailed ethnographic data, this explanation remains speculative, and requires systematic investigation in future research.
--- Language maintenance and transmission
In the present study, only first-generation migrants were identified as clearly non-native in the host language. In contrast, the heritage language showed incremental changes across successive generations: the GEN 1 MUMS' Sylheti stop productions, while not identical, were close to those of Sylheti speakers in Asia, those of the second-generation participants showed an increase in non-native forms, while the GEN 3 CHILDREN's productions were the least target-like. These patterns are in line with those observed in the Russian and Ukrainian heritage speakers described by Nagy and her associates. As in the present study, they found a cross-generational trend away from homeland norms and towards those of the host language. These patterns could suggest the emergence of a new contact variety, as has been argued for other British Asian communities. However, considering the changes observed across successive generations, Cardiff Sylheti would be a highly unstable variety with unclear norms. Moreover, its long-term survival is uncertain. Accordingly, English is the GEN 3 CHILDREN's predominant home language and there are few opportunities to use Sylheti in the neighbourhood. In addition, unlike the close-knit communities described elsewhere, there are virtually no new first-generation migrants joining the community who could help maintain the heritage language, and introduce homeland norms. On the other hand, the GEN 3 CHILDREN were able to converse in Sylheti and carry out a picture-naming task, suggesting reasonable linguistic abilities in the language overall, although this would need to be confirmed in a systematic study of their lexical and grammatical proficiency. They also showed clear evidence of speech learning despite converging patterns.
For example, some of their /bʱ/ and /gʱ/ tokens were produced with target-like breathiness, and some of their coronal stop tokens were produced at the correct place of articulation. It remains to be seen whether these factors are sufficient to ensure the long-term survival of Sylheti in the community.
--- Conclusion
This study is one of the few to investigate the speech patterns of heritage language speakers in both the heritage language and the host language. It showed differences in the production of Sylheti and English stops across generations, and between child and adult participants. As such, it constitutes an extension of previous work on the speech of Bangladeshi heritage children and adults in the UK, and is the first to examine the Sylheti stop productions of UK-born children. It also demonstrates that non-target-like stop productions are not only manifest in voicing patterns, but also in other areas of pronunciation, most notably the place of articulation of coronal stops, and breathiness in voiced categories. In the present study, these areas were assessed auditorily. Future work could complement the findings obtained here with additional acoustic measures, e.g. spectral analyses of stop bursts, and measures of intensity. These will require normalisation procedures to adjust for differences in vocal tract size, and need to include native control groups. This study is also the first to reveal substantial differences between second- and third-generation children in heritage language settings, with the latter exhibiting an increasing drift towards the patterns of the host language. These findings have important implications for the maintenance, transmission and long-term survival of heritage languages, and show that investigations need to go beyond second-generation speakers, in particular in communities that do not see a steady influx of new migrants.
Future work is needed that builds on this research and examines systematically what factors contribute to successful transmission and maintenance of speech patterns in heritage language settings. Finally, this study has important implications for theory and demonstrates that current models of bilingual speech learning cannot fully account for the speech patterns found in heritage language settings. Future work will need to extend these models from their current focus on psycholinguistic processes and incorporate the social variables that mediate them.
--- Notes
1. In some of the literature on Sylheti and similar Indo-Aryan languages, the distinction in voiced stops is referred to as one of aspiration. In the present paper, we use the term breathiness, however, to distinguish it from the aspiration found in long-lag voiceless categories.
2. Note that dental realisations of English /t/ and /d/ were not classified as target-like since they do not occur in Cardiff English.
3. In the confusion matrices in Tables 3 and 5, the stops in slanted brackets on the left denote the intended categories, while those in square brackets denote target-like and non-target-like realisations.
4. Note that although Flege and his associates have demonstrated the importance of language use for L2 speech learning, this variable has never been formalised in the SLM.
The purpose of this study was to gain a better understanding of speech development across successive generations of heritage language users, examining how cross-linguistic, developmental and socio-cultural factors affect stop consonant production. To this end, we recorded Sylheti and English stop productions of two sets of Bangladeshi heritage families: (1) first-generation adult migrants from Bangladesh and their (second-generation) UK-born children, and (2) second-generation UK-born adult heritage language users and their (third-generation) UK-born children. The data were analysed auditorily, using whole-word transcription, and acoustically, examining voice onset time. Comparisons were then made in both languages across the four groups of participants, and cross-linguistically.
--- Background
Demographic, socio-economic and political trends throughout high-income countries have resulted in the care of older people becoming an issue of utmost policy importance. Older people have greater healthcare needs than do the general population and are at a higher risk of adverse health outcomes [1]. Furthermore, the association between migrant status and frailty has been found to be strong, particularly among those from low- or middle-income countries [2]. These groups may also underuse public care services [3][4][5], and thus, the well-being of older immigrants can be largely dependent on their family carers. Informal/family carers are unpaid individuals, such as family members, friends and neighbours, who provide as much as 90% of the in-home long-term care required. Many family carers spend 4 to 7 years, and as much as 15 to 20 years, doing a job that is stress-filled, overwhelming and isolating [6]. Norway, like the rest of Europe, is witnessing significant demographic changes in its immigrant population. While only 5% of immigrants were 70 years or older in 2018, this share is expected to increase to 25% by 2060 [7]. The Pakistani population in Norway constitutes one of the largest, and longest residing, groups among non-European immigrants [8]. Ageing within this group therefore raises the concern of an increased need for care and help from relatives and the question of how future formal and informal care and healthcare accessibility for older immigrants could, or should, look in Norway.
--- The benefits and costs of family caregiving
Research suggests that the experience of providing care may differ for immigrants and their descendants. Family caregiving can have its rewards and benefits. For example, it can lead to the appreciation of life, personal growth, enhanced self-efficacy, competence or mastery, self-esteem, and closer relationships [9][10][11].
Furthermore, positive psychological effects due to caregiving may mitigate some of the challenges of caregiving, as positive effects are associated with lower levels of burden and depression and better overall mental health [12]. However, family caregiving can also have invisible costs. Although most children feel responsible and are motivated to care for their parents or in-laws [13], there is uncertainty about their ability and willingness to assume full responsibility for such care. These doubts are hardly discussed within families, and most often, older immigrants do not want to be a burden to their adult children [3]. Thus, informal caregiving, including the circumstances of carers, the challenges they experience and their need for support from formal care services as well as other informal networks, is often hidden or even invisible. Along with the physical challenges of caregiving, informal caregiving can also have a negative psychological, social and emotional impact on carers, such as feelings of guilt, embarrassment, stress and anxiety [14][15][16]. Moreover, informal caregiving continues to be a highly gendered activity, with women performing the bulk of the family care in most cases [17]. Furthermore, chronic stress due to informal caregiving has been linked to poor health outcomes, morbidity and mortality [18]. Carers born in the receiving country may face particular challenges in caregiving, as their values and/or expectations about family relationships can differ from those of the older immigrant relatives for whom they provide care, whose expectations more closely mirror experiences they had in their countries of origin [19]. While there is an increasing amount of research on the health of immigrant populations in Norway [20], issues regarding ageing and care for older immigrants have received very little attention.
A few studies [21,22] and reports [23][24][25] have started to provide insights into the care needs of older immigrants in Norway and the availability of care services for them, but hardly any research has so far been undertaken on the views of family carers of older immigrants. Our study aims to fill some of this gap.
--- Pakistani immigrant women in Norway and caregiving
The first Pakistani immigrants arrived in Norway in the late 1960s and early 1970s as labour immigrants, followed by women who migrated mainly through family reunification [26]. The issue of older Pakistani women's care is of particular concern because of their social disadvantages in the areas of education [27], employment [27] and self-reported health [20], as compared to Pakistani men. In fact, a study on older male Pakistani immigrants in Norway reported that a few participants perceived that residential care homes might be more suitable for Pakistani men than they are for women; because the latter typically spend most of their time within the confines of the household, their social world has been limited to their family [28]. Studies have shown that the descendants of immigrants often identify as both Norwegian and Pakistani, but when it comes to caring for older people, they tend to uphold traditional values from Pakistan, as filial obligations are morally compounded in culturally implicit 'generational contracts' [21,29,30]. Moreover, upon marriage, it is common for a woman to move in with her husband's family, where she might face certain expectations when it comes to the wellbeing of her parents-in-law [21]. In this context, it becomes imperative to examine the perspectives and experiences of female carers, so that better care can not only be provided to older immigrants in future, but the well-being of their carers can also be ensured.
Thus, the aim of this article is to explore female Pakistani carers' views on the future formal and informal care and healthcare accessibility of their older relatives in Norway.
--- Methods
This article is based on qualitative interviews with family carers of older Pakistani women living in Oslo municipality, Norway. The participants were recruited through snowball sampling, starting with meeting women at a local mosque, at an activity centre and through key informants. They were followed up via phone to arrange face-to-face interviews at a later date. Our recruitment criteria for family carers were that they perceived themselves to be the primary provider of care for an older female relative or that they were primarily involved in facilitating access to formal health care by accompanying an older female relative to appointments. It should be noted that of those who identified themselves as primary carers, we only found women, despite being open to the recruitment of male carers. All participants who were approached and fulfilled the recruitment criteria agreed to participate in the study. The family carers were between the ages of 23 and 40 years. We recruited 10 family carers, out of which 8 were daughters, all born in Norway, and 2 were daughters-in-law, of which one was born in Norway. Seven carer daughters had full-time jobs, and one was a student. Of the two carers who were daughters-in-law, one worked part-time and the other was a student. All our participants either had, or were pursuing, higher education. We developed a semi-structured interview guide to explore the following: 1) family carers' perceptions and experiences of caring for their older mother/mother-in-law, and 2) family carers' perceptions of residential care homes and homecare services for their relatives.
The interviews were conducted by the first author, a PhD student who has experience of conducting research on topics related to healthcare among immigrant women in Norway and is an immigrant woman of South-Asian origin. A somewhat shared background with the participants helped to minimise the distance, and the interviews were thus informal and interactional. Interview appointments were followed up via phone, which also helped to build rapport prior to conducting the interviews. Nine interviews were conducted in Urdu and one interview was conducted in a mix of Urdu and English, based on the participants' preferences. Interviews lasted 45 min to 1 h and were recorded and transcribed verbatim. They were conducted at participants' homes or in public at cafes or parks with only the first author and participant present. Field notes were made by the first author after each interview, summarizing important findings and noting reflections and suggestions for themes and patterns that were identifiable at that stage. Participants were thus probed for relevant issues that emerged. The data were analysed by thematic analysis using Braun and Clarke's six-phase guide [31]. Specifically, we performed the following steps: familiarising ourselves with the data, generating initial codes, searching for patterns or themes across data and identifying relevant themes. In the next stage, we reviewed and named the themes. NVivo was used to aid in coding. This study is part of a larger project on older Pakistani women's access to healthcare services in Norway [32] and was approved by the official data protection body at the Norwegian Centre for Research Data. All participants received written and verbal information about the study and its purpose, and provided written informed consent. Participants were also informed about the rationale for doing research on Pakistani carers, i.e.
the demographic significance of Pakistanis as a group in Norway and the importance of exploring access to healthcare based on previous research. The participants were also informed about the possibility of withdrawing their consent at any point in the study, without any consequences. All the participant names used below are pseudonyms. We use the term 'residential care homes' to refer to nursing homes, old age homes or communal living spaces for older people in need of care. The term 'professional homecare services' includes both home nursing care and practical assistance.
--- Findings
Our analyses revealed several factors influencing family carers' perceptions of formal and informal care support, which are presented below in the following five themes: 1) caring for family in Norway as in Pakistan, 2) worries about being 'dropped off' at a care home, 3) concerns about being cared for by outsiders, 4) questions about what other people might say, and 5) adhering to society's expectations of a 'good' carer.
--- Caring for family in Norway as in Pakistan
All carers reported a sense of responsibility and a desire to care for their parents/in-laws, but they had to consider what was realistic, given their busy lives, educational careers and/or professional jobs. Many felt that their mothers/mothers-in-law still had the same preferences and expectations about living with and being cared for by their children as they formerly had in Pakistan, irrespective of having migrated to Norway, as noted by a carer daughter: They have brought this [mindset] from behind [Pakistan]. They have kept their thinking exactly the same: if it is an elderly or ill person, children should first care for them. Maybe this will change with time. Thus, the carers believed that the concept of living together with one's parents/in-laws largely remains amongst Pakistanis, be it locals in Pakistan or immigrants in Norway.
Some carers also pointed out how conditions in Pakistan are more favourable for caring for older people at home, as private help can be cheaply and easily hired and extended family is often nearby to help and support. They contrasted this with the situation in Norway, where there are different gender roles and where both men and women family members are employed outside the home and thus have less time for caregiving tasks. However, the lack of availability of extended support networks in Norway also made them feel more responsibility towards their parents, as described by a carer daughter: In Pakistan, it's like your husband is working and you are at home or if you are not at home, then your maternal or paternal aunts, or someone or other, is there who can provide care if you are going out or something. Here, it's not like that; here, you are the only one closest, which means that if you aren't there, nobody is there [for them].
--- Table 2 Illustration of thematic analysis
Meaning unit: "[W]hen mother was in hospital, we used to go there every day to see her and bring her food, etc. So the white people who were staying there long-term started telling their own stories, saying nobody comes to see them, only at Christmas or on other [holidays]... Then we understood that they long for this... for someone close to them to come and meet with them. Even doctors say that germulki [immigrants] are very good at such things; they care for their elderly a lot."
Condensed text/Codes: Perception that Norwegian children do not care as much for their parents as Pakistani children do; perception that Norwegians long for this type of care and appreciate immigrant children for providing it.
Sub-theme: Feeling appreciated by Norwegians, as an immigrant, for caring for an older parent.
Theme: Adhering to society's expectations of a 'good' caregiver.
Thus, living in Norway without extended family in the same home also evoked in carers the sense of being the sole carer for their older parents. The challenges of caregiving were further exacerbated for those who also had children. Some felt restricted in doing things or going places with their children out of consideration for their older relative. For example, a carer daughter-in-law stated: I have children and I have to take care of them as well. If their [children's] friends come and make a lot of noise, she would say, 'my blood pressure is getting high'. Of course I don't tell the children to invite their friends. I tell them that grandma can't bear it, her head aches. My mother-in-law stays here so everyone has to come to our house. It's obvious that one has all responsibilities then, at least I have. It's not like everything can be done according to my wishes. If I have to go somewhere, I have to ask at home first. And if... we want to go for a holiday with children, we take her along. But she says that now she can't travel much.
--- Worries about being 'dropped off' at a residential care home
For the carers interviewed, thoughts about transitioning their parent to a residential care home brought up feelings of guilt, as they reflected on their parents' care expectations and worries of being simply 'dropped off' at a care home in future. A carer daughter noted her parents' unpreparedness and said, 'I don't think my parents will ever be prepared for a situation like that; in fact, they keep saying "do not do this to us"'. Another carer daughter expressed how her parents perceived residential care homes as the last resort, stating the following: [T]hey have a big fear of, 'don't send us there at all!' [They say,] 'It's better that we pass away before that'. They have a problem with this here [in Norway], too. So I can imagine that, with time, uncles and aunts would have problems with this.
I mean, to live somewhere away from home, they will have problems, so just thinking of residential care homes scares us. We will do whatever we can ourselves. Thus, parents' fears of being sent to residential care homes also made carers feel concerned about this transition and strengthened their resolve to take on the responsibility of providing informal care themselves as much as possible. Although a few participants reflected on the possibility that a better quality of care might be provided in professional care homes than could be provided at home, they still felt uncomfortable about the idea. For example, a carer stated her preference for having her parents live in her own home, 'to have them in front of [her] eyes'. She believed that providing care at home would allow her to spend more time with her parents, instead of visiting them for a few hours each day or week at a care home. While positive perceptions about the quality of care at care homes did not seem to influence carers' preferences for care, carers also did not discount the challenges of caregiving. For instance, of this tension between parents' wishes and carers' abilities, one carer stated, 'One or the other would have to make the sacrifice. I don't think we will send them to a residential care home'. Thus, carers expressed mixed feelings of responsibility, as well as ideas of having to make a sacrifice, pointing to their own discomfort with the idea of residential care homes. A few carers, however, took into consideration the practical challenges of caregiving and reflected on when a formal care option might be considered. One carer daughter stated the following: [I]t also depends on how much time you can give them; if you can't give them any time and they are close to 80 to 90 years of age, then it's obvious that one will have to think of something for them. But then I think parents can't decide this; children should step in.
It can be seen in this quote that the carer asserted the involvement of children in deciding the course of future care for parents. However, the same view was not shared by carers who were daughters-in-law, despite them sharing similar concerns about older family members being sent to residential care homes. Carer daughters-in-law, unlike daughters, expressed that they would not be the ones to decide whether to transition their mothers-in-law to residential care homes, given the relationship they shared with their older family members. For example, a carer daughter-in-law noted the following: I think then her children would decide. Obviously, I would share my opinion, but I can't say that this is what you should do. .. because, since I live with her, obviously it's my responsibility. Sometimes I think about it, but then I tell myself that, when the time comes, we will see. They [the government] give a lot of help to people here. Thus, while concerns about older mothers/mothers-in-law were shared by both daughters and daughters-in-law, the latter did not think they could be very involved in decision making regarding future care, despite perceiving the responsibility of caregiving to a greater degree than did daughters. --- Concerns of being cared for by others at home Professional homecare services were not an optimal alternative to informal care for most of the carers either, despite the fact that such services could eliminate older parents' concerns about being simply 'dropped off' at a care home. Carers mentioned their responsibility of reciprocal caregiving as integral to their preference for informal care, and a few expressed that they did not want anyone else to take care of their parent in future. For example, a carer daughter stated, 'We do not want anyone else to take care of mother in future. When we were children, they took care of us; now it is our turn'.
Carers also reflected on their parents' discomfort with the idea of being taken care of by 'outsiders', even in their own homes. For example, a daughter-in-law described how her mother-in-law might perceive this, noting the following: I do think that in future maybe if something happens, she might need home care. .. because it has happened with many of my friends.. .. Many [mothers-in-law] would not even let outsiders touch them.. .. She may agree to it because in her heart she knows how much I can take care of her. But I think it would be difficult [for her to agree]. A carer daughter also described her mother's similar discomfort with using professional homecare services: My mother refused it, saying that 'my heart doesn't want someone else to come to my house for this'. Now, we get it for my father too, but initially, they found it very difficult to adjust, as different people used to come. When they feel that someone is staring at them, they can't really be themselves. She further described that she even tried to convince her mother to use a homecare service by telling her about the possibility of having a Pakistani homecare professional, but her mother still did not feel comfortable with it. However, as the situation worsened, they had to accept the use of homecare services, despite her parents' reservations. --- Questions about what other people will say While the preference for informal caregiving mainly stemmed from feelings and beliefs about reciprocal care and children's responsibility to their parents in Norway, there also existed feelings of being judged by the Pakistani community if formal care services were to be used. A carer daughter described the larger community's perspective and how, while growing up, she used to hear about children 'abandoning' their parents: Then, people used to say 'Haiii! See what they [the children] have done!' 
They might be thinking that they [the children] are getting rid of them [the parents], that they don't want to care; but they don't understand how much trouble they [the children] are facing; they have a job and a responsibility to manage their home. Another carer daughter told of her dilemma when considering whether to transition her older parent to a residential care home, being torn between the 'talk' of the community and her own preferences and caregiving constraints. She noted, 'that is there. .. people will talk. Among our ger-mulkis [immigrants], this issue is there.. .. We don't want to do this either, but when constraints come. ..' It should be noted that the fear of judgement from the community was not only associated with how the children might be perceived by the community but also how the parents might be perceived, as a daughter described: They [the parents] think, 'What will people say?' I mean acquaintances, family, friends; it can be anyone. This also happens a lot. For parents and for children as well. Then they feel a lot of disappointment, thinking, 'That person thought this about my children? That we haven't given proper values to them?' . .. That's why they are not accepting it right now. In light of judgement from the community, professional homecare services were also considered to be problematic, as noted by a carer: 'Maybe they will say things like, "They are not taking care of their parents. They are [forced] to take on help".. .. Maybe because of these things, we may not take on help'. Thus, some carers felt that they may decline the use of professional homecare services due to fear of social condemnation. This, however, did not mean that they could accept compensation in lieu of professional homecare services, as regulated by law. A carer daughter explained the following: Then also people will talk, like, 'See? They are doing this to get money!' Then it's obvious: we will have to take on professional home care.
Because people would think that we are doing this for money, it is better that we accept professional home care if we want to escape people's gossip. 'What will people say?' This thing defeats us on every issue. A few also perceived taking compensation for care as contradictory to what a Pakistani would do. For example, a carer daughter pointed out that she would never take such compensation but that her brother might, 'because he doesn't think like us; he is more like a Norwegian'. Despite the potential to be gossiped about by others, most carer daughters still felt that it was acceptable to take compensation for informal care in lieu of using professional homecare services. However, daughters-in-law perceived it as much less socially acceptable, as explained by one: [M]any of our people think this way. .. but it's like the husband's family members would say, 'She has kept her for money'. I have heard from many people that. .. they would not get professional home care for money and [would instead] take care of them themselves.. .. [S]ometimes it's also difficult to take care.. .. It's one thing to take care of a child, but it's another thing to take care of an older person. It's stressful for us. Another daughter-in-law echoed similar perceptions and reiterated that decisions related to accepting compensation for care would also depend on the husband, despite the daughter-in-law being the primary carer: I think my husband would say, 'Leave your work and take care of her. Leave the money, etc.'. Otherwise, people might say, 'She takes money to take care of her. [That's why she] has kept her at home', and others might say, 'Yes, her relative has become ill and she has become rich'. . .. You know, the type of women who stay at home and don't work. .. all those people say it's right that she gets money now and she takes care of her. .. as it's a mentally difficult task to take care of someone.
And if it's a mother-in-law, you can't say everything directly … This is what happens in a husband's family. --- Adhering to society's expectations of a 'good' carer Some carers believed that the practice and value of taking care of older parents is highly regarded by ethnic Norwegians. A carer daughter noted the following: These Norwegians, they appreciate this thing a lot that we ger-mulkis [immigrants] take a lot of care of our parents. .. that, when they get old, we don't send them to a residential care home; we care for them. When asked how they felt this was appreciated by ethnic Norwegians, a carer daughter explained the following: [W]hen mother was in hospital, we used to go there every day to see her and bring her food, etc. So the white people who were staying there long-term started telling their own stories, saying nobody comes to see them, only at Christmas or on other [holidays].. .. Then we understood that they long for this. .. for someone close to them to come and meet with them. Even doctors say that ger-mulki [immigrants] are very good at such things; they care for their elderly a lot. Some also pointed out that Norwegians are often surprised that they do so much for their parents, while the Pakistani carers simply see this as fulfilling their responsibility. --- Discussion The increase in the older population of the Pakistani community in Norway raises the question of their need for formal and informal care. The aim of this article was to explore female carers' views on future care for their older relatives. One of the most striking findings is that nearly all the participants held the view that they would be responsible for their older family members' care, even though formal care options may be available. Their older relatives' wishes of living with their children and receiving care from them seem to be an expectation that the participants anticipated and felt obliged to accommodate.
Traditional expectations of filial piety tend to be strongly rooted in many immigrant communities [33][34][35]. A study of Asian Americans showed, for example, that providing care for family members when they become old is often seen as a 'natural' obligation [34]. Filial responsibility is also considered, as suggested in a study about Asian family carers living in Canada, to be a cultural obligation that provides psychological rewards and personal growth [36]. Although participants in the current study generally saw providing care for their older relatives to be a duty, participants shared the view that such expectations were hard to reconcile with their life in Norway. One of the reasons for this seems to be that older relatives' values and norms relating to family relationships mirror those that are common in their countries of origin [19] but often differ from those of their younger carers, who are more accustomed to the norms of the countries to which they migrated [37,38]. Therefore, the older generation's expectations of care tend to define the norms within the immigrant communities, but these norms are often outdated even in their countries of origin [33]. Yet these norms seem to strongly inform how family carers view their ability to manage their responsibility to provide future care to older relatives. One of the difficulties participants in our study anticipated was the concern about what other people might say if they were to choose care options for their older relatives other than family care. Their fear of breaking their communities' norms of filial piety seemed to be a manifestation of the fear of social exclusion. Sanctioning members of a community for not living up to shared ideals of filial piety is not uncommon. In a study of Turkish immigrants in Belgium, Tavernier and Draulans contended that social pressure to conform to traditional family caregiving ideals occurred.
Resisting or failing to fulfil the expectations of the community would bring about social exclusion and feelings of shame and guilt on the older immigrants' children [33]. The pressure to care for older relatives may therefore lead to postponing or declining to seek professional care [39], as carers themselves may perceive the act of seeking external support to be 'relinquishing their caregiving responsibility' [40]. Social pressure that stems from idealised notions of care, which are more or less detached from the new reality in the host country, makes the use of professional care services less attractive to family carers. Still, traditional views of filial piety, according to the family carers' perceptions, do not entirely determine the use of or access to healthcare services of their older relatives. This seems to be in line with the assertion of Levesque et al. [41] that access to healthcare is contingent on the cultural expectations of clients as well as the characteristics of healthcare providers. Participants in our study seemed to worry about whether their older relatives would be comfortable with the available healthcare options. They were concerned that their older relatives would feel as if they were being 'dumped' at a care home. They worried that their relatives would feel lonely in a residential care home and that they would not be able to visit their relatives as often as they would like to. With regard to professional homecare services, participants were worried about their parents' inhibitions and discomfort with being cared for by 'outsiders', even if the homecare professionals they hired were Pakistani. The literature indicates that many older people with a migration background are sceptical about care institutions [42]. 
Studies of older Pakistani immigrants in Norway found that they are often afraid that inadequate support is provided in homecare services and that they will become detached from their family if they choose to live in such an institution [28,43]. Meanwhile, professional carers, as Berdai Chaouni and De Donder [39] pointed out, are often unaware that immigrants often avoid formal care because of their perception or anticipation that the care provided will be culturally or religiously insensitive. The family carers in the present study also gave accounts of Norwegians praising how well immigrants care for their parents when they get old and praising them for not sending their parents to a residential care home. Although caring for older relatives is admirable, it would be naïve to explain such differences between the majority population and immigrants solely based on culture. It is therefore worth looking at the structural underpinnings that make such filial support, to say the least, impractical. As is the case in many high-cost countries, Norwegian society is based on families having two household incomes, which makes full-time childrearing while also caring for older relatives simply not feasible. Due to the traditional norms of caregiving, women from South Asian communities can face great role conflict in their attempts to juggle occupational demands and the demands of caregiving [44]. Nergård's [45] study on the topic of expectations about old age in three immigrant groups, including Norwegian-Pakistani women aged 26-40, found that women were willing to stretch themselves far in order to ensure proper caregiving within the family. While they expressed negative attitudes towards nursing homes, most of the women did not have any reservations about using services aimed at relieving the household of some tasks.
The norm of family members acting as carers is increasingly becoming difficult to uphold due to changes in the family structure, as well as conflicting responsibilities and roles [46]. Although idealised perceptions of caring for older people can be found within immigrant communities [33], it is important not to yield to assumptions that children from an immigrant background will always want to 'take care of their own', as this can lead to a lack of attention being given to the adaptation of healthcare services for older immigrants and their families [42]. A worldwide demographic tendency is that women, on average, have a longer lifespan than do men [47]. Since the 1960s, immigrant men in Europe have been assigned jobs that involve hard and manual labour and have therefore been more exposed to health hazards [48]. Pakistani men in Norway have similar work trajectories [26]. From a gender perspective, this means that immigrant men are most likely to be in need of informal and professional care early in old age. At the same time, due to gender norms about caregiving in immigrant communities [33,34,36], women end up caring for both men and women, often conflicting with childrearing responsibilities. This especially affects daughters and daughters-in-law, who often, regardless of competing responsibilities, are expected to provide care to their relatives [34]. This is exacerbated by norms that dictate, for example, that being compensated for the care provided is not consistent with genuinely caring, as pointed out by participants in our study. The participants in our study, all being women, also had the perception that they would have to make sacrifices to care for their relatives and that it would not be a realistic option for them to send their relatives to a care home. According to Levesque et al.
[41], the ability to seek out healthcare services relates to people's autonomy and capacity to choose, based on their knowledge of the care options available and their eligibility rights. In the study, they refer to an example of immigrant women being discouraged from seeking professional help due to, among other things, discrimination. This is especially true for daughters-in-law, as they, unlike a recipient's own children, traditionally would not be in a position to make decisions about using professional care homes. We found that daughters-in-law did not seem to play a major role in decision making regarding care options, including the decision to receive compensation for the care they provide; however, because they still carry the responsibility to fulfil the choices that are made, they often face a moral dilemma. This dilemma becomes clear when their care for older relatives coincides with their need to care for their children, a notion known as 'sandwich caregiving' [49]; they often have to choose between financial security and social inclusion [33]. Such dilemmas illustrate the need for culturally sensitive healthcare provision for older immigrants and support for family carers. It is therefore important to consider whether healthcare services currently meet the needs of immigrant families and whether the way they are organised makes them accessible [41]. Practical challenges such as language barriers and insensitive approaches have been mentioned in previous studies of immigrant populations, including in Norway [36,50]. Healthcare professionals may even be aware of immigrants' underutilisation of these services, but they often assume that immigrants will not choose formal care and thus do not target these groups systematically. Such attitudes from healthcare providers may lead immigrant families to consider alternative care options, such as the use of unofficial domestic helpers and care marriages [39].
Although carers in our study expressed their relatives' concerns of being cared for by 'outsiders', they nevertheless anticipated their own challenges of informal caregiving. Thus, meeting older immigrants' need for culturally sensitive care, both through political action and actual organisational changes in healthcare structures, may improve immigrants' access to care and make the choice of formal care more realistic. Our findings also revealed carers' own desire to care for their older relatives even though they believed that the Pakistani norms of family caregiving could not be reconciled with life in Norway. Therefore, considerations about access to formal care also need to take into account traditional values of caregiving, not in order to reify them but to understand and even challenge them. This is even more important when older immigrants wish to 'age in place' without alienating their relatives, as taking traditional values for granted results in insufficient access to formal care, which undoubtedly increases families' care burden. --- Limitations of the study This study sheds light on the perceptions and experiences of female Pakistani carers, a topic which has so far been insufficiently explored. While it provides rich data, there are some limitations to this study. Firstly, this study is based on data collected through interviews with carers. Therefore, we only had access to information on their perceptions and experiences. Use of other methods, such as observation or focus group discussion, would have potentially given data on their behaviour as well as their responses in a group setting, which would have further enriched the findings. Secondly, since we had only one participant who was not born in Norway, we could not explore the difference in perceptions and experiences between carers who were born in Norway and those who were born in Pakistan.
Similarly, all our participants had higher education and were either employed part time or full time or pursuing higher education. Therefore, our findings are specific to carers with a similar educational background. Another limitation of this study relates to the interviewer's insider position with regard to a somewhat shared culture, language and gender. While having an insider position helped in building rapport with participants, it also presented challenges of assumptions of shared understanding [49]. However, frequent discussions with co-authors from different backgrounds, locally and internationally, and writing field notes helped us to reflect on the understanding behind certain interpretations. Finally, we only spoke with female carers, which reflects that caregiving duties are disproportionately distributed between men and women and that the burden of caregiving often falls on women, as previous research has indicated [17]. However, it is possible that our recruitment strategy failed to reach out to male caregivers or men who identified themselves as primary caregivers, or that women were more likely to agree to participate since the interviewer was also a woman. The snowball method we employed may also have meant that, having interviewed women at the beginning, we were more likely to be referred to other women carers. --- Conclusion This study gives a voice to female Pakistani carers and shows that traditional expectations of filial piety make informal care more appealing for immigrants. Due to gender norms, this usually means that children, and especially daughters and daughters-in-law, assume carer responsibilities. When healthcare systems lack cultural sensitivity, immigrant families and their carers have limited care options in old age, which in turn exacerbates families' care burden.
Healthcare professionals and policymakers should not assume that immigrant families will 'take care of their own', but instead should secure adequate support for older immigrants and their family carers. Such measures would help to tackle their support needs while they are still minimal, reduce the burden on public care services in the long term and improve the quality of life and ability to cope among family carers of older immigrants in Norway. There is, however, a need for additional research to understand various immigrant populations' experiences and unique challenges in accessing and using health care. Attention should also be paid to issues such as the carers' social and financial background, their relationship with the older relative, and their place of birth. Most crucially, healthcare systems should be scrutinised to identify the obstacles that may exist for older immigrant populations in accessing the care to which they are entitled. --- Authors' contributions SA and JD did the conceptualization, analysis, writing of the original draft, reviewing and editing of the manuscript. AB, BR and MS were involved in writing, reviewing and editing of the manuscript. All authors have read and approved the final manuscript. --- Competing interests The authors declare that they have no competing interests. ---
Background: The aging of Pakistani immigrants in Norway raises questions related to their increased need for care and help from relatives, as well as those concerning what future formal and informal care and healthcare accessibility for older immigrants may look like. The hidden nature of family caregiving means that the circumstances of carers, their views and their dilemmas related to future care are largely invisible. In this study, we explored female Pakistani carers' views of future care and healthcare accessibility for their older relatives in Norway. Methods: Our data included interviews with family carers between the ages of 23 and 40 years old, living in Oslo, Norway. We recruited ten family carers, of whom eight were daughters and two were daughters-in-law. Interviews were conducted by the first author in Urdu or English and were recorded and transcribed verbatim. Results: Our findings revealed several factors that influenced participants' perceptions about formal and informal caregiving, which can be organised into the following themes: 1) caring for family in Norway as in Pakistan, 2) worries about being 'dropped off' at a care home, 3) concerns about being cared for by outsiders, 4) questions about what other people might say and 5) adhering to society's expectations of a 'good' carer. Conclusion: Family carers' traditional views of filial piety do not entirely determine the use of or access to healthcare services of their older relatives. There is a need to develop culturally sensitive healthcare systems so that immigrant families and their carers have more options in choosing care in old age, which in turn will ease their families' care burden. Healthcare professionals and policymakers should not assume that immigrant families will take care of their own older members but should instead secure adequate support for older immigrants and their family carers.
Introduction Higher education is considered significantly important for the young generation, and it is also regarded as an important strategy to ensure the participation of students with disabilities in the education system. Residential life in an inclusive campus facility can be an excellent opportunity for disabled students for social engagement, and it will enhance a sense of belonging for them in campus life. Disability is a physical or mental impairment of a human being that limits his/her normal functioning in life. It may be described as a complex form of deprivation. Disability involves dysfunctioning at one or more levels of physical function, individual activity or social participation. According to the recent law concerning the rights and protection of persons with disability, the Rights and Protection for the Persons with Disabilities Act, which ratified the United Nations Convention on the Rights of Persons with Disabilities, characterizes the disability types as follows: Autism or autism spectrum disorders, Physical disability, Mental illness leading to disability, Visual disability, Speech impairment, Intellectual disability, Hearing impairment, Deaf-blindness, Cerebral palsy, Down syndrome, Multidimensional disability, Other disability. In higher education, individualized accommodation of students with disabilities is the norm. Though individualized accommodation may seem like the best practice, it is often costly and time-consuming. One of the important arrangements for university students is an appropriate residential facility. The accommodation of students with disabilities at university is a complex process because it involves many concerns and regulations. A lack of accommodation can impact the learning and attainment of students who depend on supports and services to pursue their studies. Moreover, personalized accommodation relies on students disclosing their individual accommodation needs.
Residential living arrangements are needed that enable independent living for the disabled without disabling them further. The diversity and integration of accommodations for students with disabilities are almost enormous because of the variety in the nature and severity of the students' disability, the physical topography of the institutions, and the specific physical facilities the students will use. Students with disabilities have additional needs of living on their own and dealing with their disability in an educational environment. The daily living tasks of individuals with disabilities are more complicated than those of students without disabilities. Limited choice of facilities may result in separation, isolation and exclusion of students with disabilities. Most of the facilities for students with disabilities are focused on their academic needs, and their other needs are not adequately addressed. Students with disabilities face barriers in university-provided accommodation. A residential student's experience on campus is mostly related to their experience living with a roommate. Aside from infrastructural issues, students with disabilities may face problems with the assignment of roommates. Since a space is shared amongst multiple students, there may be some challenges that come from using the shared space together. Understanding how to share a space with a roommate can bring comfort of living, and it allows for peaceful and friendly living together with roommates. The students may need assistance from a roommate or other students depending on their disabilities. Although disability is one of the known major social and economic phenomena in Bangladesh, there is very little reliable data available on this issue, especially in the absence of a comprehensive national survey on persons with disabilities.
At the national level, the Government of Bangladesh has also initiated a number of policies and legislative frameworks to ensure the education of all students, including those with disabilities. Due to the lack of current scholarly studies, policymakers and other responsible actors cannot understand the experiences of students with disabilities (SWD) who are included in the higher education system. Their experiences at institutions of higher education need to be understood so that educators, policymakers, and researchers can make informed decisions that increase degree attainment. There is also very little research on the effectiveness of postsecondary accommodation supports. Without understanding disability and the needs of disabled people, the situation cannot be improved; to solve a problem, it first needs to be understood. Contemporary research has failed to understand the lived experiences of students with diverse disabilities in higher education using the students' own voices. Residential experience is a significant element of their overall university experience, and these students often have negative experiences. Expanding on the research within this specific area would provide residence life professionals insight into what more they can do to prepare the staff and support students living with roommates. Residential staff need to be trained so that they can understand the resilience and abilities of the SWD. In this case, further research is needed on the issue of administrative response and on the difficulty in the residential living environment. Additionally, more research is needed to provide a better understanding of higher education institutions' barriers and facilitators to success, as the current literature is limited.
For example, several studies examined what supports are offered to students, but little research has been conducted regarding the effectiveness of services and the impact of those supports on academic success. There is very limited research exploring the experiences of the residential lives of the SWD in higher education in Bangladesh. The study is an opportunity to explore the campus residential lives of SWD and make recommendations to improve the situation. --- Objectives The specific objectives of this study are to: 1. examine the residential facilities in the halls for the students with disability in Dhaka University 2. identify the challenges of residential lives faced by students with disability in Dhaka University --- Methods To explore the residential facilities and understand the residential living experiences of the SWD, the researcher adopted a qualitative approach in this study. The participants of the study were current residential students with disabilities and the house tutors of the residential halls of Dhaka University. In this study, purposive and convenient sampling techniques were used to select the sample. The researcher selected three types of samples in this regard. 12 residential students with disabilities were selected conveniently and purposively. These 12 participants were from different residential halls, different courses of study, different levels of study and different types of disability. Moreover, among the 12 participants, 9 were male and 3 were female. Besides, 2 house tutors were selected with the purpose of addressing both male and female residential halls. On the other hand, 8 residential halls were selected as the sample with the purpose of addressing an equal number of male and female residential halls. In this study, the researcher used two types of instruments to collect the data: in-depth interviews and an observation checklist.
The aim of this study was to explore the residential facilities and experiences, which required hearing the students' voices. The house tutors' accounts help in understanding the authority's perspective, and observing the existing facilities in the residential halls provides the real scenario. Given this, a semi-structured questionnaire guided the in-depth interviews, and a structured observation checklist was developed to identify the existing facilities in the residential halls for students with disabilities. In-depth interviews were conducted in person over several days. Informed consent was obtained from each respondent before data collection, and each respondent was briefed about the study's purpose and how the data would be used. The observation checklist was developed following the ADA checklist for existing facilities and was used to observe the selected residential halls of Dhaka University. The interview data were analyzed through thematic analysis, and the checklist data were analyzed descriptively. As the study is qualitative, the checklist data were treated as secondary, supplementary data to cover the gaps in the primary data. --- Result and Analysis Analysis of the in-depth interviews with the SWD and house tutors, together with the observation data, revealed several themes related to the experiences the SWD face in their residential lives. 
The major themes identified, some containing sub-themes, are: accessibility in the hall, room facilities, washroom facilities, acceptance in the room, attitude of the staff and hall authority, canteen facilities, academic facilities, and impact on daily and academic life. Accessibility in the hall: Many respondents commented that most of the residential halls do not allot seats to students with disabilities, except a very few. The hall authorities shared a similar opinion, acknowledging that most of the halls are not ready yet, though they disputed the respondents' complaint about not being able to switch halls. Many respondents told the researcher that they could not enter the hall because it was totally inaccessible for them. On this issue, Respondent-9 said: There was no ramp, no toilet accessible to enter with wheelchair. I had to go to the hall office for many days and after 6 months later they finally built the ramp and the accessible toilet. Only male respondents reported having to wait a long time to get a seat in the hall; the female respondents said they did not. The house tutors stated that they always provide seats to the SWD as quickly as they can. The male respondents with visual impairment said they faced difficulties during orientation to the building. The house tutors did not fully agree with this complaint, noting that the SWD mostly get orientation from their peers and never requested it from the hall staff. Observation found that half of the halls have at least one route that does not require the use of stairs, and half have barrier-free access for students with visual impairment. --- Mobility barriers: Most of the male and female respondents shared that they face barriers moving around the hall. 
The hall corridors and walkways are not obstacle-free, which creates problems while moving, and there are no special indicators to help them find their way. Respondent-10 said: In the corridor many students keep their cycles and bikes. That's why I've to face problem while moving through the corridor. And there is no tactile sign to indicate the ways of canteen and toilet, in the stairs to go upper floor. The house tutors also acknowledged this as a great barrier for the SWD and commented that most of the hall infrastructure was built without considering them. They added that the other students are largely responsible for creating obstacles in the corridors and hall pathways. The observation likewise revealed that the hall pathways and corridors were not obstacle-free. --- Room facilities: The living rooms of the halls are not fully accessible for students with disabilities. Most of the male and female respondents shared a common pattern of experiences on this issue: the rooms are very small and uncomfortable to live in with many roommates, and this limits both their movement and their privacy. Respondent-5 explained: The room is designed for only 4 students. But 8 students live in here, so the size of the students comparing to the room size is double. The room is very noisy and it makes difficulty to study in the room properly. The house tutors admitted that room facilities are the same for every student in the hall and that there is no special facility for the SWD, though they added that the hall authority is always concerned about the room environment. Observation found that most of the rooms are congested and overcrowded. Washroom facilities: Washroom facilities for the SWD are very limited in the halls. Only a few male and female respondents said they get special facilities in their washroom, and there are no tactile signs in the washrooms for students with vision impairment. 
Some of the toilets have high commodes, but the doors are not wide enough to enter with a wheelchair. The male respondents also commented that washroom management is very poor and the toilets often get dirty. Respondent-9 said: Only one toilet and one bathroom are special in this entire hall which is accessible for wheelchair users. So every day I have to wait to enter the toilet or bathroom because those are occupied by the general students. Both house tutors said that there is no special washroom in their hall; they explained that the SWD can use the general washrooms like everyone else and that they receive no objections from them. They added that the washrooms are often rendered unusable by other students' misuse. Observation found that only two of the eight halls have wheelchair-accessible washrooms, the washroom door is easily reachable in six of the eight observed halls, and only three halls have a high commode in the toilet. Acceptance in the room: Most of the respondents shared bitter experiences with roommates who did not want them in their room. Some male respondents said their halls have separate rooms designated only for students with disabilities, whereas the female respondents reported living with their non-disabled peers. Respondent-6 said: At first the girls in my room didn't take me positively. They thought they have to do all of my works by themselves. That's why they didn't like me at first. But when they realized that I can do my works by myself, they started to be normal with me. But we have a gap between us. The house tutors disagreed with the students' complaints on this issue, saying that the SWD mostly live in rooms with familiar faces. A few female respondents also reported good experiences living with their roommates. --- Canteen facilities: There are no special facilities in the canteen for either male or female SWD. 
Some respondents nonetheless shared that the canteen staff are considerate toward them and give them priority when serving meals. Respondent-6 explained: There is no mentionable special facility in the canteen, but it is accessible for me. The staffs of the canteen give me priority. I don't have to wait for the seat in the canteen; they manage it within a short time. The house tutors added that the hall authority directs the canteen to serve the SWD with care. Observation found that six halls have no special seating arrangement in their canteens for the SWD. Academic facilities: Many male and female respondents commented that access to the reading room is very limited. Because students must be silent in the reading room, students with visual impairment (VI) cannot do their work there. Respondent-3 explained: We have just one reading room for all of the students. We all have to be silent in that room so that everyone can concentrate in their study. But I can't be as I have to record the audio from the books and listen to them. The respondents with physical disabilities shared that they face no problem accessing the reading room. All of the respondents with VI said there are no braille books, no audio books, and no special software in the cyber center of the hall library. Both house tutors commented that the hall libraries' collections are not rich and that the SWD do not request special resources from them. Observation likewise identified that the hall libraries are not disability-oriented and their resources are limited. --- Attitudes of the staff and authority: Almost none of the respondents were satisfied with the staff's behavior. All of the respondents said that the staff and other authoritative persons of the hall are not disability-oriented. Respondent-10 explained: When I first entered the hall, I got a huge shock to see that there are separated rooms for us! 
All of my life I never had to live separate from the others. But in hall the authority just excluded us from the others. Both house tutors disagreed with the students' complaints on this issue, though they acknowledged that the hall administration cannot always take immediate steps because of administrative complexity. House tutor-1 said: We give the direction to the hall staffs to manage the SWD in any helping needs. The staffs are very friendly. We never get any complain about them from any students. On the other hand, the staffs complain to us about the students misbehaves. The house tutors added that the hall authority cannot always fulfill the students' demands because of funding issues. --- Impact on academic & daily life: Most of the respondents identified issues in their residential lives that affect their academic performance. Some male respondents with VI told the researcher that they cannot study well in their room because there is no one there to help them. Respondent-6 said: All of my roommates are like me. They can't help me with the recording. So, I have to find the student without disability to record the audio for me all the time. The respondents also described mobility as a major barrier in hall life. Respondent-4 stated: Sometimes I have to move with the fear of falling down as I don't use wheelchair. Because of this problem I don't want to go outside to meet with my friends. The female respondents said they do not feel comfortable sleeping with others in the room, and some respondents said they simply try to get used to the situation. Some contradictions were found between the narratives of the students and those of the house tutors. The students generalized that the hall authority does not concern itself with their issues. 
On the other hand, the house tutors expressed that they are concerned and always want to listen to the students' demands; they countered that the students are not aware of the existing facilities in the halls and do not voice any requests. From this, it can be understood that there is a communication gap between the students and the hall authority in addressing needs. Because the students do not know what facilities the halls can offer, they simply get used to living without them and rarely demand any modification of the existing facilities. Therefore, the students' objection that the hall authority does not consider their needs is not entirely valid. In addition, the students' lack of knowledge and reluctance to voice their needs hampers further improvement of the residential facilities in the halls. Ultimately, however, the SWD face challenges in their residential lives in the halls because of these issues. --- Discussion The guideline produced by the National Educational Association of Disabled Students provided the conceptual framework of the study, indicating the standard level of facilities for students with disabilities at university. The findings show that facilities in the residential halls for students with disabilities are very limited. Neither the university nor the residential halls provide any information on their websites concerning such services. Measured against the accessibility model, there is no disability student service to support the SWD in their campus life. The study found that the residential arrangements in the halls are insufficient for the number of students. The model directs that a hall should have enough accommodation and, if it does not, should inform students of this as well as of the residential facilities it does have. 
The findings, however, show that no hall provides such informational services. Tinklin, Riddell & Wilson acknowledged in their study that some higher education institutions provide special facilities for the SWD, but these are insufficient. Another study, by Wolanin and Steele, showed that accommodations are sometimes not well suited to the SWD; it also identified that institutions are often unwilling to modify accommodations at a student's request because they think differently about implementing modifications. On this point, the present study identified that some halls have ramps, but the ramps were not well constructed and are neither comfortable nor smooth for wheelchair users. Some halls have lifts, but the lifts are not always in operation. This reflects an inadequate administrative response to the students' demands. In addition, the SWD have to live in rooms separated from students without disabilities, which clearly indicates the communication gap between the two stakeholders in meeting individual demands. The study also identified that the room facilities for the SWD in the residential halls are insufficient. Johnson found that providing an accessible campus environment is restricted by architectural and budgetary constraints, and that architectural barriers are sometimes unavoidable because the buildings and accommodation services were not designed to provide accessibility for students with disabilities. Another study, by Marshak, Wieren, Ferrell, Swiss & Dugan, found that adjusting to an unfamiliar institutional environment is a challenge for students with disabilities when they receive no orientation to the campus environment. 
The present study showed that in many cases the other students in the room do not accept the SWD as roommates. The SWD also lack privacy, as they have to share their bed and other furniture in the room. Ahmad found that students with disabilities feel excluded in the campus environment because they face inaccessibility throughout the institution: students continue to face many physical barriers to educational services, such as a lack of ramps or elevators, heavy doors, inaccessible washrooms, and inaccessible transportation in the campus surroundings. Some detailed factors identified in the present study are not found in that previous work: the washrooms are insufficient for the students living in the hall because the halls are overloaded with students, and it can be interpreted that the hall authority is weak at managing the facilities it provides. Muzemil found that inaccessibility in the physical campus environment adversely affects the success and full participation of students with disabilities in their education. The present study found that most of the residential halls do not have an environment appropriate for academic success, a situation that differs from past research findings. These barriers limit the students' daily living activities: the mobility barrier restricts daily movement around the hall as well as other activities, including getting meals from the canteen, using the washrooms, and living comfortably in the room. Gilson & Dymond reported similar findings, showing that students face challenges in their daily life activities because of residential limitations in the campus environment. The present study found that the dormitories do not have a sufficient number of lifts, enough space to move, or enough ramps. 
Wright & Meyer likewise found that mobility barriers affect students' daily living activities in their academic lives. Comparing the findings of the present study with the previous ones makes the challenges students face in their daily activities clear. Because the present study differs in context from previous studies, it identified some new factors that create challenges in the residential lives of the SWD in the residential halls of Dhaka University. The major limitation of this study is that it may not reflect the challenges faced by the SWD in all higher education institutions in Bangladesh: it focused only on the residential students of Dhaka University, so it is not rational to generalize about the residential challenges of the SWD across higher education institutions in Bangladesh, and it could not capture the residential living challenges of non-resident SWD of Dhaka University or of other institutions. --- Recommendations & Conclusion The study yields some recommendations for the University of Dhaka aimed at improving the residential lives of the SWD. First, the residential buildings should be modified to become disability-oriented, addressing the infrastructural needs of the SWD; new residential buildings are also needed. Second, there should be a policy on seat distribution for the SWD, and a responsible officer should be appointed to look after seating arrangements and other facilities for the SWD and to liaise between the hall authority and the students so that needs and demands are met. Third, there should be disability orientation and awareness programs for hall staff and resident students. 
Fourth, the academic facilities in the halls should be enriched with a variety of resources and technologies. Although the study focused only on the residential lives of the SWD at the University of Dhaka, it also proposes some recommendations for higher education institutions across Bangladesh. Infrastructural facilities should be designed or modified with the needs of the SWD in mind, and proper maintenance and monitoring are needed to look after the needs of residential SWD. Universities should also adopt policies on residential accommodation for the SWD and establish a responsive wing to look after their needs and demands. Moreover, universities should arrange disability awareness campaigns to sensitize personnel, which may allow the SWD to enjoy an inclusive environment in their university residential life. This study has attempted to describe and identify the challenges of the residential lives of SWD at Dhaka University. From the interpretation of the interviews and observations, it can be concluded that students with disabilities face many challenges in their residential lives and that there is a clear gap between the SWD and the hall authorities in meeting their needs. As the context of the study differs from that of previously conducted research, it provides some unique insights into the challenges residential SWD face in their campus lives. Finally, the challenges identified in this research should help stakeholders take proper steps to eliminate them and ensure inclusiveness in higher education institutions in Bangladesh.
This qualitative study aims to identify the existing residential facilities and explore the living experiences of students with disabilities at Dhaka University. The students with disabilities and the house tutors of the residential halls of Dhaka University were the respondents. Findings from the in-depth interviews were extracted through thematic analysis and indicate various issues, limitations, and areas for improvement. The findings are presented through several themes, such as accessibility and mobility in the hall; facilities within the room, canteen, and washroom; acceptance in the room and hall library; and attitudes of the staff and authority. The study also investigates how these experiences affect the personal and academic lives of the students with disabilities. Based on the findings, the university authority can modify its ongoing reformation plan to create a more learning-friendly, equitable, and inclusive campus environment for students with disabilities.
Introduction In recent years, with the rapid development of society, social capital and health and well-being have become increasingly intertwined in the developing world, closely related and influencing each other. Consequently, many studies have begun to examine the relationship between social capital and health and well-being in developing countries, investigating a wide range of its aspects. As a representative developing country, China has lately been selected as the research context and target population of many relevant studies. Owing to rapid demographic and social growth, parental and community expectations of children have intensified competitive pressure among adolescents, which often leads vision health to be neglected, contributing in turn to a massive increase in the number of adolescents suffering from myopia and a falling age of onset among children. --- Target population A review of several studies shows that research on the relationship between social capital and another dependent variable commonly uses purposive sampling, in which the researcher defines the target population and purposefully identifies appropriate respondents. Three of these papers focused on China: one sought a sample as diverse as possible in age, while the other two, quantitative studies, focused on particular age groups, namely teenagers and older adults. In this paper, however, the target population is approached differently: the researcher focuses on primary school students in rural China, a younger age group in which variables are particularly difficult to control, in order to find a suitable method for obtaining reliable data. 
--- Social capital Beyond the choice of study population and sampling strategy, the methods and theories used in these studies are also worth learning from. First, their definitions and conceptualizations of social capital broadly divide into two categories, structural and cognitive, with some individual studies separating the two and measuring them simultaneously. For the structural dimension, many studies have focused on the interpersonal network between participants and their neighbourhood, the links between the two, and the social groups or systems to which participants belong. The cognitive dimension looks at the trust that others have in the research participant. The other perspective on conceptualizing social capital is to consider both the neighbourhood and individual levels. Both theoretical approaches have unique benefits. The structural and cognitive framing shows how the community and the outside world influence the individual and how the links between them change the individual's social capital. The individual-and-neighbourhood type of instrument assesses social capital at the individual level, while neighbourhood social capital is a good way of seeing the individual's impact on the outside world or on the social capital of others. --- Perception of vision health According to a review of the paper by Gong in 2022, research on parents' perceptions of myopia can be broadly classified into three categories. The first is parental attitudes towards outdoor activities, which directly affect the likelihood of children going outside to relax and thus further affect their vision. The second is the importance parents attribute to their children's studies: parents who place a high value on studies tend to neglect their children's eye health. 
Finally, parents' education level also shapes their perception of myopia: parents with higher levels of education are more likely to be informed about or aware of the importance of vision health and thus more attentive to their children's vision. Existing research investigating the effect of NS on individual health has focused on mortality among urban residents. Studies focused on rural neighbourhood contexts in China, however, remain underdeveloped, and even fewer examine the effects of NS on vision health among adolescents. While most studies find that NS is positively correlated with individual health, the roles of other factors, such as parents' perception of vision health and intervention and the family's socioeconomic status, remain unclear. Ziersch and Aminzadeh have emphasised the important role of socioeconomic status, and Zhou reported that both F.E. and N.E. are important factors. --- Vision health Besides social capital and perception of myopia, the dependent variable, the measurement of well-being or, more specifically, of vision health, is also of considerable note. Many of these studies have used self-report vision health scale instruments to define a person's level of well-being, which is a reliable indicator of vision health. --- Hypothesis Based on the literature review, the researcher proposed several hypotheses. As previous studies have regularly examined social capital in the aggregate but have rarely considered neighbourhood social capital and individual social capital separately, the researcher decided to examine the implications of each for vision health separately. The first and second hypotheses are therefore: H1 -In general, participants with higher social capital have better visual health H2 -Personal social capital has a greater impact on visual health than neighbourhood social capital. 
Furthermore, according to the findings of Gong, the role and influence of parents are crucial to maintaining vision health in school children: both their general eye health habits and their values regarding eye health have downstream consequences for their children's eye condition. The third hypothesis, set from this perspective, is therefore: H3 -Parents' perceptions of vision health issues are related to their children's vision health In this context, the third hypothesis refers to how family values may influence or subconsciously shape children's behavioural habits, which relate more or less to the family's economic status and the parents' education level. To further substantiate the relationship between these two aspects, the fourth and fifth hypotheses are: H4 -Higher family socio-economic status is associated with better visual health. H5 -Children with more educated parents have better visual health. Finally, the reviewed literature does not mention an association between gender and vision health, so no connection between the two can be drawn on the surface. A sixth hypothesis is therefore set to test this and to determine, through the data, whether it is an area of concern: H6 -There is no significant difference between the sexes in terms of children's visual health. --- Method The research site chosen for this study was a remote area of Qionghai, Hainan Province, China, where awareness of and social concern about vision health are relatively scarce. Qionghai ranked sixth in GDP among the cities of Hainan province in 2022, with agriculture, forestry, livestock, and aquaculture as its main industries. A total of four participants contributed survey data; they were selected through purposive sampling based on previously set conditions. This paper uses a questionnaire as its main instrument. 
To construct the questionnaire, the researcher applied a thematic classification to deconstruct the key variables in the research questions into five sections. The first section, on demographics, was designed as a set of multiple-choice and short-answer questions. The social capital section is divided into neighbourhood social capital and individual social capital, measured separately: neighbourhood social capital is measured with eight questions and individual social capital with six, both scored on a five-point Likert scale. The measurement of family economic status draws on Yip's research and consists of four questions. Parental perceptions of visual health are captured through ten related questions, using short answers and Likert scales, designed to surface both explicit views and subconscious attitudes and awareness towards vision health. As mentioned above, the survey adopted self-report vision health scale instruments to define a person's vision health. The questionnaires were delivered through the social media application WeChat: participants were gathered in a single WeChat group for distribution, and the administration and collection of questionnaire data were carried out through the platform Questionnaire New. --- Results Table 1. Descriptive statistics of demographic variables --- The general pattern in the demographic variables of the participants Among the participants surveyed, female parents in the middle age group of 31 to 40 years old accounted for the entirety of the respondents, while parents in the other age groups, and male parents, were practically absent. 
This reflects the prevailing age of childbearing and the low level of male engagement in matters relating to child care; together with the finding that all participants were educated to middle school or technical secondary school level, it points to the economic backwardness that leads local women to leave school and enter society early. The data also show the type of building or house the participant households reside in, with half of the households living on a flat floor and one quarter each living in a private house or a single room. No participant lives in a high-rise building, which indicates the area's current level of development and reflects that agriculture, forestry, and fishing are the main sources of income. Statistics on the children are also available. The children are evenly split by gender, and 75 per cent are in fourth grade, with the remaining 25 per cent in fifth grade. As measured by the survey, the classes fall into two categories by the total number of rows of seats: a quarter of the classrooms had four rows, while the remaining three-quarters had twice as many, eight rows in total. Of the children in the eight-row classrooms, approximately 33.3 per cent sit in the third row; all of the children in the four-row classrooms sat in the third row. Regarding whether direct family members had any past vision problems or myopia, seventy-five per cent of the families reported that this was not the case, while twenty-five per cent replied in the affirmative. 
In terms of the economic status of the family, scoring each family's economic status out of 20 points showed that the highest-scoring family scored 15 and the lowest-scoring family 10, with an average score of 12.5 points per family. --- ANOVA p-value (between groups): 0.323 --- Correlation analysis The correlation between the family's economic status and its values regarding visual health, and the importance it places on visual health prevention, can be seen in the correlation coefficient calculated above. The coefficient is 0.94, demonstrating a positive relationship between the two. The overall trend is that the higher the economic level of the household, the more positive its values concerning visual health, and the more importance it places on the prevention and protection of visual health. Moreover, 0.94 is close to 1, so the two variables are strongly correlated and can influence each other. All of this supports Hypothesis 1, which states that the better the family's financial situation, the more positive the perception of vision health. The absolute value of the correlation coefficient for individual social capital is greater than that for neighbourhood social capital; therefore, H2 is supported. However, the two correlations run in different directions: one trend is positive, the other negative. This is probably due to the limited variance in the data. The correlation between family economic status (FES) and perception of vision health prevention is significant, supporting the hypothesis that families with higher economic status place more emphasis on the prevention of vision health problems. --- Gender difference in terms of perception of vision health prevention It is not clear whether gender plays a significant role in influencing the parent's perception.
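One way to test this is the single-factor (one-way) ANOVA used in this study. A minimal sketch of the F statistic it is based on, in plain Python with purely illustrative scores rather than the survey's actual data, might look like this; the reported p-value would then be read from the F distribution with (k − 1, N − k) degrees of freedom, e.g. via scipy.stats.f.sf:

```python
def f_oneway_stat(*groups):
    """F statistic for a single-factor ANOVA: the ratio of the
    between-group mean square to the within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative perception scores for parents of boys vs. parents of girls
# (hypothetical values, not the survey data).
parents_of_boys = [32, 35, 30, 38]
parents_of_girls = [34, 31, 37, 34]
f_stat = f_oneway_stat(parents_of_boys, parents_of_girls)
```

A small F statistic, and hence a large p-value such as the 0.323 reported here, means the between-group variation is small relative to the within-group variation, so no gender difference can be claimed.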
After performing a single-factor ANOVA, no significant difference was found between parents of male students and those of female students (p = 0.323). This means that children's gender does not play a significant role in shaping their parents' perception of vision health prevention. --- Digital device usage, length and frequency of studying, and length of outdoor activities The main outdoor activities of the participants' children were diverse, the most common being basketball, rope skipping, running, shopping, badminton, and hide-and-seek. Nevertheless, the data suggest that, in general, the frequency and length of outdoor activities for these children remain low and restricted. This may further affect the children's vision health through a lack of eye relaxation. There are further reasons for the limited outdoor activity. Considering the use of electronic devices and the amount of time children spend studying, it is clear that children's eyesight health in this area is strongly shaped by the high priority parents place on studying. Most parents state that they compromise eye-relaxation time to give their children additional study time, and often prefer to let their children use electronic devices to relax rather than letting them go out and play. This suggests that children's eyesight is influenced by the importance parents place on relaxation activities relative to learning. --- Discussion In general, the results of the observations and analyses above imply that parents' individual social capital and family economic status, as well as the degree to which attention to vision health is neglected, have a direct impact on children's vision health. Despite these findings, the study has limitations.
Due to the limited coverage and small number of participants in the study, the results may not be as reliable as expected. Future research should therefore cover a larger number of respondents in order to improve the reliability of the results. --- Conclusion In conclusion, it is clear from this investigation that the extent to which rural areas are unexplored or underdeveloped, the economic status of families, and the individual social capital of parents have a greater impact on a child's eye health than the social capital of the community. Parents' perceptions of matters such as schooling, shaped by their education, exposure, and past experiences, also have a prolonged impact on their children's eye health. However, neither the gender of the parent nor that of the child shows any correlation with the child's vision health. Overall, this study has helped to shed light on the problems associated with the lack of educational resources and lagging access to information in rural areas, raising awareness of these problems and the reasons behind them, and helping to inform further research to address their root causes.
In a digital society, students' visual health has been widely debated by academia and the public in recent decades, as the age at which people first use electronic devices keeps falling. This study was conducted to examine the implications of social capital, economic status, and parental perception for the vision health of elementary-school-aged children in Qionghai, Hainan, one of the rural areas of China. The researcher used purposive sampling to select the target participants. A questionnaire consisting of Likert scales, multiple-choice questions, and short-answer questions was then designed to support data collection.
Background Diet healthiness is socially patterned such that the most deprived in the population tend to eat less healthy diets with fewer fruit and vegetables [1][2][3][4]. This contributes to the substantial socioeconomic inequalities in life expectancy and years lived in good health [5]. Population approaches that tend to rely less on conscious behavioural responses than individual-level interventions have been suggested to be less likely to increase health inequalities [6][7][8]. (Open Access. *Correspondence: rachel.pechey@phc.ox.ac.uk. 1 Nuffield Department of Primary Care Health Sciences, Radcliffe Observatory Quarter, University of Oxford, Oxford OX2 6GG, UK. Full list of author information is available at the end of the article.) These include micro-environmental interventions, which are often characterised as relying largely on non-conscious processes [7,9]. The extent to which this may hold for particular interventions is unclear. Availability interventions involve altering the number of instances of a product within the physical micro-environment. These interventions represent a paradigmatic example of micro-environmental interventions that have shown promising evidence of effectiveness [10,11]. The mechanism by which these interventions operate is not fully known. However, if such interventions work through increased visual attention and/or salience being given to products with increased availability, this could lead to equal effectiveness by socioeconomic position [12]. Alternatively, if availability acts through individuals identifying and selecting their most-preferred option, targeting availability could widen health inequalities, given evidence of pre-existing social patterning in food preferences [13].
The intervention has the potential to further exacerbate the differences between groups, given behavioural experiments suggesting that poverty can deplete cognitive resources [14], with cognitive depletion making it harder to resist less-healthy options [15]. To date there is relatively little empirical evidence on the relative effectiveness of availability interventions by socioeconomic position (SEP), although studies suggest an impact in all SEP groups [16][17][18]. Some primary research is consistent with responses to availability interventions potentially being stronger for those of higher SEP [17], which could lead to increased inequalities, but other studies find no evidence for a moderating effect of SEP on availability or labelling interventions [19]. This may, however, reflect a lack of statistical power, given the larger sample size required to detect interaction effects than main effects [20,21]. Systematic reviews in this area have been limited in their ability to assess the impact of these moderating factors in meta-analyses, largely due to the small number of studies available that report such information. In two recent systematic reviews focused more widely on dietary nudges, one suggested some, but not all, interventions had the potential to increase health inequality [22], while the other found weak evidence that these may be more effective in those with lower SEP [23]. However, conclusions could be influenced by the type of interventions that predominate in these reviews, such as nutrition labels or logos, given the hypothesis that cognitively-oriented interventions may lead to less equitable outcomes than non-information-based interventions. Indeed, in both reviews of dietary nudges over half of the interventions were cognitively-oriented.
In contrast, a review of the inequalities arising from different types of healthy eating interventions concluded that none of the identified studies targeting environmental changes in specific settings were likely to lead to a differential impact by SEP, in contrast to information-based interventions, which tended to differentially improve the diets of individuals with higher SEP [24]. We have completed a series of studies examining the effect of altering the availability of healthier vs. less-healthy options [16][17][18][25]. These used similar methods, allowing the results to be combined in an individual-participant mega-analysis. This provides a more powerful test of these potential moderators than allowed by single studies, while access to individual-level data allows the use of control variables in a manner not possible with aggregated meta-analyses [26]. A better understanding of whether this non-information-based intervention could lead to intervention-generated inequalities may also provide insights relevant to understanding the likely impacts of other micro-environmental interventions. Accordingly, the current study aimed to evaluate whether the impact of altering the availability of healthier vs. less-healthy options on healthier option selection differed by socioeconomic position. Given that different measures tapping into the construct of socioeconomic position may encompass distinct elements underlying its relational nature [27], this was investigated separately for different indicators. --- Methods --- Data Six relevant studies conducted by our research team, four conducted online and two in laboratory settings, were included [16][17][18][25]. To identify other studies that could contribute data to these analyses, we screened 30 studies identified as potentially eligible as part of the search strategy that was run to update a Cochrane review of availability interventions in June 2021 [10].
No other studies were identified with experimental designs allowing assessment of availability as a single component, with selection of a healthier option as a dichotomous outcome variable, and that collected data on the SEP of participants. Characteristics of the six included studies are shown in Table 1. In each case the included studies altered the availability of healthier vs. less-healthy foods and looked at effects on the selection of a healthier option. Seven of the nine comparisons showed a significant main effect of availability, and all were in the expected direction. Of the four studies that assessed impact by SEP, a statistically significant interaction term was found for just one [25]. [Table 1 footnotes omitted: food targets (lower- vs. higher-energy snacks and meals; lower- vs. higher-sugar drinks); quotas by age and gender designed to be representative of the UK population, with equal numbers of higher- vs. lower-SEP participants; availability types (absolute, relative, or both) as defined in a conceptual review of availability interventions [12]; "emptier" trials were excluded from the mega-analysis.] Information on education was collected in all six included studies, and income in all but one study. Other potential indicators of SEP (Index of Multiple Deprivation and occupational status) were each collected in only two of the six studies. Analyses therefore focused on the variables of education and income.
All the studies involved selection rather than purchasing. In one set of studies the number and type of options were matched to those faced by customers in cafeteria settings, offering a limited number of main meal options [18]. Another involved larger numbers of products, showing drinks and snacks displayed on shelves in a canteen. The remainder, including snack selections made in laboratory studies, were not designed to mimic a purchasing context. The raw data were pooled across the six studies, given their similar methodology. Inclusion in the mega-analysis dataset was limited to trials of full shelves or trays. In total, 21,360 observations from 7,375 participants were analysed. Table 2 shows participant characteristics. Three availability conditions were investigated: predominantly healthier, predominantly less-healthy, and equal healthier and less-healthy. Within the predominantly healthier availability condition the available options overwhelmingly comprised 75% healthier options. Similarly, nearly all the observations for predominantly less-healthy trials comprised 25% healthier options. There were fewer observations for ranges with equal numbers of healthier and less-healthy options available, which were only included in three studies. Of the observations, only 485 were laboratory-based; the remainder came from online studies. There were no field trials. The number of products in the range offered varied between 4 and 64, with 50% of observations offering 4 options. The product range was kept the same following the availability intervention for 43.9% of observations. In terms of food type, 60.5% of observations were for snacks, 28.6% for meals, and 10.9% for drinks. --- Analysis Multilevel logistic regressions were used to analyse the impact of altering the availability of healthier options on item selection across SEP.
Models included three levels, with observations nested within individuals, which were nested within studies, to adjust for the repeated-measures designs used in three of the studies and for the potential influence of aspects of individual study design on the behavioural outcome. The primary outcome was a dichotomous variable indicating the selection of a healthier option. The key analyses investigated interactions between the availability variables and socioeconomic position. Inference criteria were set at p < 0.01. --- Availability For the primary analysis, availability conditions were modelled using dummy variables, with less-healthy as the reference group. Primary analyses only included 'healthier' vs. 'less-healthy' availability conditions, as these appeared in all studies. The 'equal' vs. 'less-healthy' and 'healthier' trials were compared in secondary analyses, with a dummy variable indicating the type of availability manipulation. --- Socioeconomic position Two analyses were run, looking at different indicators of socioeconomic position: highest educational qualification and annual household income. Covariates included in models were: whether the study took place in a laboratory setting; whether the product range was kept the same following the availability intervention; the number of products available; the type of food available; and participant age and gender. --- Alterations to planned analysis The analysis plan was pre-registered on the Open Science Framework. Analyses looking at the impact of interventions by BMI will be reported elsewhere. Due to issues with model convergence, observations for participants who reported their gender as 'Other' were removed from models, given the very small numbers of such observations. In addition, food type groupings were re-categorised as "Meals" vs. "Snacks/Drinks" to avoid multicollinearity between multiple study-level variables.
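The dummy coding just described ('less-healthy' as the reference availability condition, education dummies with the most educated group as the reference, and an availability × education interaction) can be sketched in a few lines; the function and level names below are illustrative, not taken from the authors' analysis code:

```python
def encode_terms(condition, education,
                 reference_condition="less-healthy", reference_education="degree"):
    """Regression terms for one observation: an availability dummy,
    an education dummy, and their product (the interaction term).
    Reference levels are coded 0, so they drop out of the model."""
    healthier = 0 if condition == reference_condition else 1
    lower_edu = 0 if education == reference_education else 1
    return {
        "healthier": healthier,
        "lower_edu": lower_edu,
        "healthier_x_lower_edu": healthier * lower_edu,
    }

# An observation from the 'healthier' condition for a participant
# with 2+ A-levels but no degree (illustrative values).
row = encode_terms("healthier", "2+ A-levels")
```

The interaction coefficient then captures how much the availability effect differs for lower-education participants relative to the degree-level reference group.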
The covariate 'hunger' was included in models in several of the original studies, but was not included in the current primary analyses as it was not collected in one of the studies. For the secondary analyses looking at the 'Equal' condition, the variables for whether the product range was kept the same or changed, and for whether the study was laboratory-based vs. online, were removed, again to avoid multicollinearity. --- Results --- Main effects of availability condition and socioeconomic position on selection of a healthier option Availability condition Compared to selections when the range offered was predominantly less-healthy, participants had over threefold higher odds of selecting a healthier option when the available range was predominantly healthier (odds ratio (OR): 3.82; 95% CIs: 3.54, 4.12). --- Education Compared to the most educated participants, less educated groups had lower odds of selecting healthier options. --- Income In the model examining main effects only, there was no evidence of differences in the likelihood of selecting healthier options between participants with the highest household incomes and those with lower incomes. --- Moderation of the impact of availability condition on selection of a healthier option by socioeconomic position Education Figure 1 shows little difference by education when predominantly less-healthy options were available, but suggests that participants with degree-level education may be more likely to select healthier options when predominantly healthier options are available. When a greater proportion of less-healthy options were available, analyses suggested no evidence of differences at p < 0.01 in the likelihood of selecting a healthier option between education levels; 5 + GCSEs up to 1 A-level or equivalent; or 2 + A-levels but no degree or equivalent.
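Effects like the one above are reported as odds ratios with 95% confidence intervals, obtained by exponentiating a logit coefficient and its Wald interval. A minimal sketch, where the coefficient and standard error are chosen to reproduce the reported OR of 3.82 (95% CIs 3.54, 4.12) rather than taken from the fitted models:

```python
from math import exp, log

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its
    Wald interval to obtain an odds ratio with a 95% CI."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# Illustrative inputs reverse-engineered from the reported effect,
# not the actual model output.
beta, se = log(3.82), 0.0386
or_point, or_lower, or_upper = odds_ratio_ci(beta, se)
```

Because exponentiation is monotonic, a symmetric interval on the log-odds scale becomes an asymmetric interval around the odds ratio, as seen in the reported CIs.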
The interaction terms suggest that when availability changes to include a greater proportion of healthier options, those with 2 + A-levels but no degree or equivalent are affected less than those with degree-level education. While the other education groups showed a similar direction of effect, these were not significant at p < 0.01 (5 + GCSEs up to 1 A-level or equivalent: OR: 0.85; 95% CIs: 0.70, 1.02; p = 0.088). --- Income Interaction analyses suggested no evidence of any interaction effects between income and availability. --- Secondary analyses Analyses were conducted that also included trials where an equal number of healthier and less-healthy options were offered, with the 'equal' condition as the reference group. These showed a pattern of results consistent with the primary analyses. Notably, however, they indicate that the difference between the highest educated group and less educated groups was evident only when healthier options were predominant. --- Discussion The results from this mega-analysis of online and laboratory studies show that over 50% of selections involved a healthier option when the available range was predominantly healthier, compared to around a quarter of selections when the range offered was predominantly less-healthy. Moreover, they suggest that differences related to SEP are limited, with minor differences observed only in relation to education, in conditions where healthier options were dominant. For income, there was no evidence of any difference in the likelihood of healthier option selection, nor of any differential response to availability interventions. This study benefitted from a large sample size due to combining studies, providing more power to test subtle interaction effects than a single study may be able to achieve.
Relatedly, another strength was the consistency of both the sets of variables collected and the core elements of study design, allowing a more nuanced investigation of moderating variables, with four levels included for each of the socioeconomic indicators. This consistency is in part due to the studies all being conducted by one research group; however, this could also introduce bias, and replicating these effects using data from other research groups would increase confidence in the findings. In addition, limited variation means some elements of study design could not be explored (e.g. whether relative or absolute availability was altered). [Fig. 1: Marginal means for the proportion predicted to select a healthier option, by availability condition and highest educational qualification.] Indeed, there were a relatively small number of products available in these studies; if absolute availability has a differential impact by SEP compared to relative availability, then these results may differ in contexts where a greater number of products are available. Future studies exploring the effects of availability interventions and how these vary with the number of products available would be beneficial, particularly as increasing the number of options may increase cognitive load, which has the potential to reduce effects in lower SEP groups. The key limitation of this mega-analysis is that the included studies comprised online and laboratory studies, with no field studies, which might better reflect 'real-world' responses. Only two studies included real product selections that participants could immediately consume, both predictably with much smaller sample sizes, so most observations came from online studies with images of products being selected. Social desirability bias could be exacerbated in these contexts, where the consequences of selecting a non-preferred option are minimal.
Even in the laboratory studies, these products were offered for free, so selections may not reflect those that would be made in a food-purchasing context. Moreover, given that diets are made up of a considerable number of such choices, effects are likely to be substantially smaller in experimental studies than in studies of dietary patterns. This is possibly reflected in the results for income, where the lack of patterning in healthier food selections may seem surprising given that previous studies have suggested a relationship between income and diet [3,4], but is consistent with studies of one-off food choices, which have often shown no or little evidence of socioeconomic patterning in selections [16,25]. The increases in healthier option availability led to increased healthier option selection in all socioeconomic groups, matching the results across each of the online and laboratory studies that contributed data to the mega-analysis. There was, however, some evidence suggesting a minor increase in responsiveness among the most educated, in particular when the majority of options were healthier. This equated to a 31 percentage point increase in selecting healthier options for degree-level participants, compared to a 27 percentage point change for the lowest educated group, i.e. a 4 percentage point difference in the context of a 50 percentage point change in relative availability. This is in line with previous suggestions that predominantly healthier options being available may lead to more disparity by education, although those analyses lacked power and were not conclusive [17]. Given initial evidence that both preferences and social norms may act as mechanisms underlying the impact of interventions targeting healthier food availability [12,25], such an effect could be due to those with higher SEP being more likely to prefer healthier options [13], which may also play into, or act alongside, existing social norms within groups. [Fig. 2: Marginal means for the proportion predicted to select a healthier option, by availability condition and annual household income.] As yet, however, there is relatively limited evidence to support the presence of differences by socioeconomic position in relative preferences for healthier options, or in social norms regarding their consumption. A different pattern of results was found for income. This reflects the results of the review by McGill and colleagues [24], in which the two environment-targeting studies that used income as a measure of SEP found no evidence of differential impact by SEP. Studies of dietary surveys have found that different measures of socioeconomic position may have independent effects, e.g. showing stronger associations with different food groups or nutritional outcomes, suggesting their additive impacts contribute to lower SEP groups having less-healthy dietary patterns [2][3][4]. While income could indicate the material resources available to purchase foods, which is less relevant to the studies in this mega-analysis where no payments were made, education may be indicative of the skills and knowledge needed to avoid harmful behaviours [3,4]. Moreover, behavioural experiments suggest that poverty can deplete cognitive resources [14], which may underpin interactions between education and income. Further studies examining moderation by income in contexts where payments are required would be beneficial. These different facets of SEP can also impact an individual's health-related behaviour and subsequent health outcomes in somewhat distinct ways [28]. For example, it has been proposed that lower education may relate more strongly to an individual's increased likelihood of developing a health issue, while lower income may relate more strongly to the subsequent harmful progression of illness [29].
As such, the differential patterning between the income and education variables in these analyses could reflect their separate contributions to socioeconomic position. Further exploration of the possible mechanisms that could drive any moderation by socioeconomic position would help to determine how best to utilise availability interventions. If factors such as preferences and social norms play a substantial role, one approach might be to change availability stepwise in contexts where these factors are expected to favour less-healthy options, making smaller changes and allowing time to see whether preferences and social norms change in response. Indeed, if changing availability changes social norms, then these interventions may have a wider influence on both diets and minimising inequalities, beyond their direct impact. The potential for intervention-generated inequalities needs to be considered in the wider context of existing inequalities in food environments, keeping in mind that this intervention benefitted all SEP groups. Comparing effects by SEP identified in this study assumes that exposure to such scenarios would be equally distributed by SEP, which may not be the case, for example, given that those who live in the least affluent areas are most exposed to fast food outlets [30]. Moreover, in retail settings where less-healthy options predominate [31,32], switching to a more equal distribution of healthier to less-healthy options would not be expected to have any impact on inequalities in food selection by education, based on the findings of the current study. --- Conclusion These analyses suggest that availability interventions can be implemented with minor or no likely adverse impact on health inequalities, particularly when people are selecting food from ranges that are predominantly less-healthy.
These interventions show substantial impact on healthier option selection across socioeconomic position, so offer a promising route to increasing diet healthiness across the population. --- Availability of data and materials No datasets were generated during the current study. Data analysed is available from the Open Science Framework: Pechey & Marteau : https:// --- Abbreviations A-level: Advanced-level qualification; GCSE: General Certificate of Secondary Education; OR: Odds ratio; SEP: Socioeconomic position. --- Competing interests The authors declare that they have no competing interests. ---
Background: Availability interventions have been hypothesised to make limited demands on conscious processes and, as a result, to be less likely to generate health inequalities than cognitively-oriented interventions. Here we synthesise existing evidence to examine whether the impact of altering the availability of healthier vs. less-healthy options differs by socioeconomic position. Methods: Individual-level data (21,360 observations from 7,375 participants) from six studies (conducted online (n = 4) and in laboratories (n = 2)) were pooled for mega-analysis. Multilevel logistic regressions analysed the impact of altering the availability of healthier options on selection of a healthier (rather than a less-healthy) option by socioeconomic position, assessed by (a) education and (b) income. Results: Participants had over threefold higher odds of selecting a healthier option when the available range was predominantly healthier compared to selections when the range offered was predominantly less-healthy (odds ratio (OR): 3.8; 95% CIs: 3.5, 4.1). Less educated participants were less likely to select healthier options in each availability condition (ORs: 0.75-0.85; all p < 0.005), but there was no evidence of differences in healthier option selection by income. Compared to selections when the range offered was predominantly less-healthy, when predominantly healthier options were available there was a 31 percentage point increase in selecting healthier options for the most educated group vs. 27 for the least educated. This modest degree of increased responsiveness in the most educated group appeared only to occur when healthier options were predominant. There was no evidence of any differential response to the intervention by income. Conclusions: Increasing the proportion of healthier options available increases the selection of healthier options across socioeconomic positions.
Availability interventions may have a slightly larger beneficial effect on those with the highest levels of education in settings where healthier options predominate.
Introduction Smoking behavior is a phenomenon that is often encountered in everyday life, as shown by the increasing number of smokers every year, especially among teenagers. Data show that in 2022 the prevalence of teenage smokers under 18 years old was 3.44%. During the period of identity search and development, teenagers are highly vulnerable to environmental influences. The increasing trend of teenage smoking is a concern because it has long-term consequences, namely the negative effects of smoking on health that harm the smoker. Early studies of vocational high school students in Gowa Regency found that teenagers gave various reasons for smoking: smoking makes them more confident; their parents smoke, so they do the same; they initially intended only to try it and eventually became addicted; and they were influenced by their social environment. Teachers also reported that the school has made strict rules prohibiting smoking and imposes severe punishments, yet smoking behavior is still often found among students. Smoking is a growing problem that has not yet been solved in Indonesia. The implementation of strict policies and regulations regarding cigarettes should reduce smoking behavior among teenagers, but in reality this is not the case and the trend tends to be the opposite. Adolescent attitudes are greatly influenced by their self-concept, and one form of positive self-concept is self-control. Someone who has self-control will be able to direct their behavior towards positive consequences, while individuals with low self-control tend to pursue pleasure in any way possible without considering the long-term effects. On the other hand, the family is the main environment that influences many aspects of adolescent development.
The family, through parenting style, plays a role in monitoring children's behavior, including any smoking behavior that may occur . Four parenting styles relate to different aspects of adolescent behavior, one of which is the authoritative style . Parents who apply this style train their children to be responsible and to regulate their own behavior in order to be disciplined . It gives children freedom while also providing control, so that they become independent and responsible for themselves . Smoking is one observable form of human behavior. It is harmful to the health of smokers themselves and of others who inhale the smoke, yet many people still engage in the habit. Self-control, as one form of positive self-concept, should therefore be a concern in anyone's behavior, including that of teenagers. Self-control can be defined as the activity of controlling behavior , that is, considering matters before deciding to act. Individuals with high self-control can restrain themselves from dangerous things and weigh long-term consequences; in other words, they respond well to received stimuli and make decisions that will not harm them in the future . Individuals with low self-control, by contrast, do not base their actions on goals and are more oriented toward pleasurable things that are negative . Another factor to consider in adolescent behavior is authoritative parenting: a style in which children are given freedom but parents still set limits . 
Parents provide opportunities for dialogue, and warmth is the main characteristic of this parenting style. Children raised this way develop responsibility and independence. Authoritative parents try to involve their teenage children in everything that concerns them . They emphasize the importance of rules and norms, but are willing to listen, explain, and negotiate with their children. --- Method This research uses a survey method with a cross-sectional design, in line with the research objectives: to reveal and understand the relationship between self-control, authoritative parenting, and smoking behavior among vocational high school students in Gowa Regency. The research design is presented in the accompanying figure (Research Design). The research was conducted at a state vocational high school in Gowa Regency over 6 months . The population is all male students at state vocational high schools in Gowa Regency, totaling 3,567 students. To obtain a representative sample, the sample size was determined using the Slovin formula with e = 10%: n = N / (1 + N·e²) = 3,567 / (1 + 3,567 × 0.1²) = 97.27, rounded to 97. Sampling used a purposive random sampling technique. The variables measured are: a) the independent variable self-control, measured through the indicators 1) behavior control, 2) cognitive control, and 3) decision control; b) the independent variable authoritative parenting, measured through the indicators 1) family trust, 2) attention to the child, and 3) discussion for decision-making; c) the dependent variable smoking behavior, measured through the indicators 1) environment, 2) perception, and 3) parental factors. 
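The Slovin sample-size computation above can be sketched in a few lines (a minimal illustration; the function name is ours, not the authors'):

```python
def slovin_sample_size(population: int, margin_of_error: float) -> float:
    """Slovin's formula: n = N / (1 + N * e^2)."""
    return population / (1 + population * margin_of_error ** 2)

# N = 3,567 male students, e = 10%
n = slovin_sample_size(3567, 0.10)
print(round(n, 2), "->", round(n))  # 97.27 -> 97 respondents
```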
Then, for data collection, a questionnaire is used to obtain quantitative data. The questionnaire uses a Likert scale, with scores categorized as good/high if the respondent's score ≥ the mean and less good/low if the score < the mean. To obtain valid instruments, the developed instruments were processed through: a) content validity testing by 3 experts using the Aiken V concept, with an item judged content-valid if its content validity coefficient is > 0.60; b) empirical validity testing using confirmatory factor analysis ; c) reliability testing using the Cronbach's alpha formula, where an alpha value > 0.7 indicates sufficient reliability. For statistical analysis, non-parametric chi-square analysis was used to determine the relationships between the variables studied; the hypothesis is accepted if the calculated chi-square > the chi-square table value, or if the sig-p value < alpha at a significance level of 0.05. --- Results and Discussion --- Results The data collection technique is a Likert-scale questionnaire with the answer alternatives very appropriate , appropriate , not appropriate , and very inappropriate , an even number of categories being used to ensure the scale is properly developed. In developing the instrument for each variable, content validity was tested using the Gregory formula, with the criterion an internal consistency coefficient > 0.75. 
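The Cronbach's alpha criterion (> 0.7) can be computed directly from the respondent-by-item score matrix; a minimal sketch with NumPy (the data here are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of Likert responses (1-4).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly consistent respondents give alpha = 1.0
demo = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
print(cronbach_alpha(demo))  # 1.0
```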
Based on the summary of the test results in the table above, it can be stated that, according to the expert assessment, the constructed instruments meet content validity. Construct validity was then tested using confirmatory factor analysis with the maximum likelihood method. The criteria are a Kaiser-Meyer-Olkin measure > 0.50 and a Measure of Sampling Adequacy > 0.50; Bartlett's test must yield a significance value of 0.00 for further analysis, and items with an anti-image correlation value > 0.50 are included in the factor analysis. Item validity is judged from the factor loading, which must be ≥ 0.40. Construct validity testing for each instrument was conducted using SPSS for Windows. The results are presented below. --- Validity test of the self-control instrument The self-control instrument is measured through 3 indicators as observed variables, for a total of 9 items: 3 items each for indicators 1, 2, and 3. The summary of the test results is presented in Table 2 (Summary of the KMO Test and Bartlett Test). The construct validity results for the self-control instrument show that Bartlett's test yielded a significance value of 0.00, which is smaller than 0.05. The KMO and MSA coefficients were 0.85, greater than 0.50, indicating that the sample size is sufficient for factor analysis. Furthermore, the anti-image correlation values for all nine items had an MSA > 0.50 and could be included in determining the factors. Further analysis using the maximum likelihood method revealed that all nine items showed factor loadings ≥ 0.4 on their indicators. 
These 9 items were then extracted and rotated to assess the goodness of fit of the factor model, yielding a chi-square value of 355.325 with a significance value of 0.00 < α . It is therefore concluded that the self-control instrument consists of 9 statements forming one factor. The analysis indicates that all valid items fall under the 3 indicators as observed variables and contribute significantly to measuring the latent variable. --- Validity test of the authoritative parenting instrument The authoritative parenting instrument is measured through three indicators as observed variables, for a total of nine items: 3 items each for indicators 1, 2, and 3. The summary of the test results is presented in Table 3 (Summary of the KMO Test and Bartlett Test). The construct validity results indicate that Bartlett's test yielded a significance value of 0.00, smaller than 0.05. The KMO and MSA coefficients are 0.87, greater than 0.50, indicating that the sample size is sufficient for factor analysis. The anti-image correlation values for all 9 items have an MSA > 0.50 and could be included in determining the factors. Using the maximum likelihood method, all 9 items have factor loadings ≥ 0.4 on their indicators. The 9 items were then extracted and rotated to assess model fit; the obtained chi-square value is 295.590, with a significance value of 0.00 < α . All valid items fall under the 3 indicators as observed variables and contribute significantly to measuring the latent variable. --- Validity test of the smoking behavior instrument The smoking behavior instrument is measured through 3 indicators as observed variables, with a total of 8 items. 
Indicator 1 consists of 3 items, indicator 2 of 3 items, and indicator 3 of 2 items. The summary of the test results is presented in Table 4. --- Reliability test The reliability of each instrument was tested using the Cronbach's alpha formula with SPSS 20. An instrument is considered reliable if the computed coefficient is > 0.70 [27]. The self-control instrument obtained a reliability coefficient of 0.96, authoritative parenting 0.96, and smoking behavior 0.66; on this basis the instruments were judged to qualify for reliability. --- The relationship between self-control and smoking behaviour The relationship between self-control and smoking behavior was examined with chi-square analysis using SPSS for Windows (Table 6, Chi-Square Analysis Results). Based on Table 6, the calculation showed 3 respondents with low self-control and low smoking behavior, 90 respondents with low self-control and high smoking behavior, 3 respondents with high self-control and low smoking behavior, and 1 respondent with high self-control and high smoking behavior. These results suggest a tendency for smoking behavior to be higher when self-control is lower, and conversely for smoking behavior to be lower when self-control is higher. 
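The 2×2 table described above can be tested directly with SciPy's chi-square routine (a sketch; the counts are taken from the text, and SciPy applies Yates' continuity correction to 2×2 tables by default):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: self-control (low, high); columns: smoking behavior (low, high)
observed = np.array([[3, 90],
                     [3, 1]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
```

Note that one expected cell in this table falls below 5, so Fisher's exact test (`scipy.stats.fisher_exact`) would be the more defensible choice for a table this sparse.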
To test the significance of this relationship, the chi-square test was used with the hypotheses: H1: there is a relationship between self-control and the smoking behavior of vocational high school adolescents in Gowa Regency; Ho: there is no such relationship. The calculation gave sig-p < sig α , so Ho was rejected and H1 accepted: there is a relationship between self-control and smoking behavior among vocational high school adolescents in Gowa Regency. --- The relationship between authoritative parenting and smoking behavior The relationship between authoritative parenting and smoking behavior was examined with chi-square analysis using SPSS for Windows (Table 7, Chi-Square Analysis Results). Based on Table 7, the results showed 0 respondents with poor authoritative parenting and low smoking behavior, 89 respondents with poor authoritative parenting and high smoking behavior, 6 respondents with good authoritative parenting and low smoking behavior, and 2 respondents with good authoritative parenting and high smoking behavior. These results suggest a tendency for smoking behavior to be high when authoritative parenting is inadequate, and low when authoritative parenting is good. To test the significance of this relationship, the chi-square test was used with the hypotheses: H1: there is a relationship between authoritative parenting and the smoking behavior of vocational high school adolescents in Gowa Regency. 
Ho: there is no relationship between authoritative parenting and the smoking behavior of vocational high school adolescents in Gowa Regency. Based on the table, the calculation gave sig-p < sig α , so Ho was rejected and H1 accepted: there is a relationship between authoritative parenting and smoking behavior among vocational high school adolescents in Gowa Regency. --- Discussion One strategy to prevent and reduce adolescent smoking behavior is to enhance self-control and authoritative parenting. This research shows that self-control and authoritative parenting each have a significant relationship with smoking behavior among vocational high school adolescents in Gowa Regency; the hypothesized links between self-control, authoritative parenting, and adolescent smoking behavior are supported. It can also be argued that the better the self-control and the authoritative parenting, the less smoking behavior will occur in adolescents. The relationship between self-control and adolescent smoking behavior can play an important role in understanding why some adolescents take up smoking while others do not. Strong self-control helps teenagers delay the immediate gratification associated with smoking: they can consider its long-term consequences, such as health risks and addiction, and prioritize long-run benefits like good health and freedom from the habit. Adolescence is a vulnerable period for deviant behavior such as smoking. Mental immaturity is closely tied to a teenager's decision to smoke: teenagers are no longer children but not yet adults, so they frequently do not think about the consequences of their own actions . 
In addition, adolescence is a transitional stage during which youths are impulsive and susceptible to influence; this instability and impressionability make adolescents' behaviour malleable and leave them open to environmental pressure. In such unsettled times, teenagers whose emotional regulation has not fully developed are especially susceptible to problems and undesirable behaviour, including smoking . Teenagers' smoking behaviour can be explained by a variety of factors, including peer pressure and parental modelling . Parenting is the total process of interaction between parents and children, including caring for, protecting, and teaching them . The parenting approach taken by parents has a significant impact on how a child behaves later in life and on his or her ability to act in accordance with societal norms without hurting themselves or others. This occurs because children emulate their parents during the parenting process and also learn about the restrictions their parents place on them . Authoritative parenting can have a significant impact on adolescent smoking behavior. It combines close supervision with warm and understanding emotional support. Parents with an authoritative style tend to supervise their children's behavior closely, including smoking: they set clear boundaries and rules about smoking and actively monitor their children's activities. This supervision can reduce the likelihood of adolescents taking up smoking. Authoritative parents also maintain open communication with their children, listening to their teens' concerns and problems with empathy and providing the necessary support. 
This pattern of caring allows adolescents to feel supported and makes them more likely to seek support from parents when facing stress or the temptation to smoke. --- Conclusion Based on the analysis and discussion, it can be concluded that there is a significant relationship between self-control and the smoking behaviour of vocational high school adolescents in Gowa Regency, and a significant relationship between authoritative parenting and their smoking behaviour; smoking behavior was most prevalent among respondents who lacked a democratic (authoritative) upbringing. Smoking in teenagers triggers diseases such as cardiovascular disease, respiratory tract neoplasms , increased blood pressure, and a shortened lifespan. Smoking among students therefore needs to be prevented, and parenting patterns with respect to smoking behavior need to be discussed. Information about the dangers of smoking is necessary in the parenting of students who smoke, and discussions with parents regarding the risks of smoking behavior need to be carried out.
A hostile sociocultural environment is one of the risk factors for adolescents engaging in unhealthy behaviors, one of which is smoking. Smoking behavior is a growing problem that has not yet been solved in Indonesia. The implementation of various strict policies and regulations regarding cigarettes should reduce smoking among adolescents, but in reality the trend is the opposite. Adolescents' attitudes are greatly influenced by their self-concept, and one form of positive self-concept is self-control. On the other hand, families with an authoritative parenting style are the primary environment influencing various aspects of a person's development, including that of teenagers. The purpose of this study is to determine the relationship between self-control and smoking behavior, and between authoritative parenting style and smoking behavior, among vocational high school students in Gowa Regency. The research design used a cross-sectional approach with a purposive random sampling method. The results show that there is a significant relationship between self-control and smoking behavior, and a significant relationship between authoritative parenting style and smoking behavior, among vocational high school students in Gowa Regency.
Introduction Severe mental illness is broadly defined as a group of mental disorders characterised by their persistence and their extensive impact on a person's life. The group includes schizophrenia spectrum disorders, bipolar disorder, and severe depression with psychotic features. People with SMI die on average 10-20 years earlier than the general population. Obesity and its comorbidities are common in people with SMI and are estimated to contribute to one third of the excess mortality. People with SMI tend to consume a diet low in fruits and vegetables and high in calorie-dense convenience foods and sugar-sweetened beverages, partly because of the increased hunger caused by antipsychotics acting on various receptors. People with SMI also tend to be more sedentary due to the negative symptoms and lack of motivation associated with their mental condition. The World Health Organisation defines overweight and obesity as 'abnormal or excessive fat accumulation that presents a risk to health'. Obesity is associated with a range of physical health problems including dyslipidaemia, type 2 diabetes, hypertension, cardiovascular disease, and some cancers. There are also psychosocial sequelae that lead to further disadvantage, including lack of self-esteem and motivation, discrimination in settings such as education and employment, and a reduced quality of life. The global epidemic of obesity particularly affects people in low- and middle-income countries, where there have been rapid changes in diet and lifestyle. There has been a nutrition transition away from traditional diets of non-processed foods and pulses towards more energy-dense foods with added sugars and fats. Furthermore, physical activity has declined with more sedentary jobs and the increasing use of motorised transport. In addition to the high prevalence of obesity, more than 80 % of people with mental illness live in LMICs. 
The overlap between obesity and mental illness is therefore likely to be significant in South Asia, where mental health service provision continues to be scarce. There is also an increased prevalence of diabetes and other cardiovascular risk factors at lower BMI values in the Asian population, as well as higher body fat at lower BMI values. WHO guidance has therefore suggested lower cut-off values for the Asian population based on such risks and comorbidities. Despite the increasing prevalence of SMI and obesity in LMICs, there is scarce evidence examining the scale of this comorbidity in the most affected areas; only 20 % of studies on obesity in people with SMI have been conducted in LMICs. The disproportionate representation of higher-income settings means that evidence-based strategies may not take into account the societal and cultural contexts specific to LMICs. There is an urgent need to understand the prevalence of overweight and obesity in people with SMI in LMICs to guide practice and policy and to aid the development and adaptation of targeted interventions. Determining the associations of specific health problems and health-risk behaviours with obesity and overweight will identify those most at risk. This study aims to determine the prevalence of obesity and overweight in adults with SMI in Bangladesh, India, and Pakistan and to investigate the association of obesity and overweight with sociodemographic variables, physical health conditions, and health-risk behaviours. --- Methods --- Study design This study is based on a cross-sectional survey of the physical health of people with SMI, conducted across mental health institutes in Bangladesh, India, and Pakistan as part of the IMPACT programme. 
--- Setting The cross-sectional survey took place across three national specialist mental health institutions: the National Institute of Mental Health and Hospital in Dhaka, Bangladesh; the National Institute of Mental Health and Neurosciences in Bengaluru, India; and the Institute of Psychiatry, Rawalpindi Medical University, Pakistan. Although they are tertiary care units, the general lack of mental health care provision for SMI at the primary and secondary care level means they serve the general population of people with SMI from across each country. --- Data collection Face-to-face interviews were carried out to collect information about mental and physical health, risk factors, and health-risk behaviours. The survey was translated into the most common local languages spoken in each country . --- Dependent variables and measurement BMI categories. To calculate BMI, we measured the height and weight of all participants in accordance with the WHO guidelines. Height was measured to a precision of 0•1 cm using a portable height measuring board, with participants removing footwear or headgear. Weight was measured in kilograms using a portable weighing scale, with participants in light clothing and no footwear. Both height and weight were measured twice, and the average was used for analysis. BMI was calculated as weight (kg) divided by height squared (m²), and BMI categories were assigned using both international and Asian thresholds, the latter having lower cut-offs . Abdominal obesity. To determine abdominal obesity, waist circumference was measured in duplicate to a precision of 0•1 cm using flexible fibreglass tape at the end of normal expiration, between the lower margin of the last palpable rib and the top of the iliac crest. Ethnicity-specific cut-off values for waist circumference have been recommended for the Asian population by the International Diabetes Federation : ≥90 cm for men and ≥80 cm for women. 
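The two sets of BMI cut-offs, and the IDF waist thresholds, can be made concrete in a short sketch (function names are ours; the Asian thresholds of 23 and 25 kg/m² follow the WHO guidance discussed above):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

# (upper bound, label) pairs; anything at or above the last bound is "obese"
INTERNATIONAL = [(18.5, "underweight"), (25.0, "normal"), (30.0, "overweight")]
ASIAN = [(18.5, "underweight"), (23.0, "normal"), (25.0, "overweight")]

def classify(bmi_value: float, cutoffs) -> str:
    for upper, label in cutoffs:
        if bmi_value < upper:
            return label
    return "obese"

def abdominal_obesity(waist_cm: float, sex: str) -> bool:
    """IDF ethnicity-specific waist thresholds for Asian populations."""
    return waist_cm >= (90.0 if sex == "male" else 80.0)

b = bmi(70, 1.70)                  # ~24.2 kg/m^2
print(classify(b, INTERNATIONAL))  # normal
print(classify(b, ASIAN))          # overweight
```

The same measurement lands in different categories under the two schemes, which is exactly why the paper reports prevalence under both.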
--- Independent variables and measurement Several factors associated with obesity were investigated using the WHO STEPS instrument version 3.2, including physical comorbidities, health-risk behaviours, and sociodemographic variables. Physical comorbidities. Hypertension was defined by blood pressure exceeding the cut-off when measured during the survey, or by a reported diagnosis from a healthcare professional. BP was measured according to the WHO guidelines, using an automated BP monitor . Type 2 diabetes was defined by an HbA1c measurement ≥6•5 % or by self-report. We also defined pre-diabetes according to the American Diabetes Association, as HbA1c between 5•7 and 6•4 % . High cholesterol was defined as a low-density lipoprotein concentration ≥1 g/l on the serum test during the survey, or by self-report. High triglycerides were defined solely on the basis of serum blood tests . All blood collection was carried out in accordance with the WHO STEPS surveillance manual. Health-risk behaviours. Variables were based on whether participants followed WHO recommendations for physical activity and fruit and vegetable intake. Self-reported current smoking status was also recorded. Sociodemographic and clinical variables. The variables included SMI diagnosis, SMI duration , antipsychotic use, clinical setting , and the sociodemographic variables age, sex, highest level of education, work status, and income tertiles. --- Sample size A sample size of 865 was originally calculated to estimate the prevalence of diabetes with a precision of 2 % as an example of survey precision. However, this sample size is also sufficient to estimate the prevalence of obesity with a precision <2 %, assuming a prevalence estimate of 10 %. --- Statistical analysis This study was reported according to STROBE guidelines. All statistical analyses were carried out using Stata v.17. Statistical significance was assessed at the 5 % level. 
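The comorbidity definitions in the Methods above translate directly into threshold checks; a hedged sketch (the 140/90 mmHg hypertension cut-off is our assumption, since the survey's exact value is not stated here):

```python
def diabetes_status(hba1c_pct: float) -> str:
    """ADA categories: >= 6.5% diabetes, 5.7-6.4% pre-diabetes."""
    if hba1c_pct >= 6.5:
        return "type 2 diabetes"
    if hba1c_pct >= 5.7:
        return "pre-diabetes"
    return "normal"

def high_cholesterol(ldl_g_per_l: float) -> bool:
    """Study definition: LDL concentration >= 1 g/l."""
    return ldl_g_per_l >= 1.0

def hypertensive(systolic: float, diastolic: float) -> bool:
    # Assumed cut-off of 140/90 mmHg; the text does not give the exact value
    return systolic >= 140 or diastolic >= 90

print(diabetes_status(6.0))   # pre-diabetes
print(hypertensive(150, 85))  # True
```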
Participant characteristics were summarised descriptively for each country and overall. Continuous variables were reported as means and standard deviations , and categorical variables were reported as frequencies and percentages. For each country separately and overall, the prevalence of underweight, normal weight, overweight, and obesity was reported using both the WHO international and the Asian BMI cut-off values. The prevalence of normal weight, overweight and obesity, using Asian cut-off values, was stratified by key characteristics , reported by country and overall. To investigate the associations between BMI categories and other comorbidities, multinomial logistic regression models were fitted with BMI group as the dependent variable. In the interests of accurately quantifying the disease burden of obesity and ensuring it is clinically relevant for the target population, Asian cut-off values were used. Individuals classified as underweight, according to BMI category, were excluded from this analysis as the examination of the association of being underweight with associated risk factors was not an objective of this study. Physical health comorbidities , health-risk behaviours , and sociodemographic variables were included as independent variables. Interactions between the independent variables and country were assessed using a likelihood ratio test to compare these to the model with no interaction terms. Relative risk ratios were reported along with corresponding 95 % confidence intervals and P-values. Unadjusted estimates are reported in the appendix. The associations between abdominal obesity and its determinants were investigated. Logistic regression models were fitted with abdominal obesity as the dichotomous dependent variable. The same variables as in the analysis of BMI categories were included as independent variables. Odds ratios were reported along with corresponding 95 % confidence intervals and P-values. 
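The relative risk ratios reported from the multinomial models are the exponentiated coefficients; a minimal sketch of that back-transformation with a Wald-type 95 % interval (the coefficient and standard error below are purely illustrative, not values from this study):

```python
import math

def rrr_with_ci(beta: float, se: float, z: float = 1.96):
    """Turn a multinomial-logit coefficient into a relative risk ratio
    with a Wald-type 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for female vs male on the obesity outcome
rrr, lo, hi = rrr_with_ci(0.693, 0.15)
print(f"RRR = {rrr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RRR ~ 2.00
```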
Unadjusted estimates are reported in the appendix. Analysis models included complete cases only; however, multiple imputation was performed as a sensitivity analysis, using chained equations to impute missing data. The results were not changed by the MI analysis. Results of the MI analyses are presented in the appendix alongside the complete-case analysis for comparison. --- Results From the 3989 participants included in the study, 3126 participants were included in the multinomial logistic regression , and 3389 participants were included in the binomial logistic regression . The excluded participants for each analysis, and the reasons for this, are detailed in the flowchart found in the appendix. --- Participant characteristics The characteristics of study participants are presented in Table 1. A total of 59•1 % of the participants were male. The average age was 35•8 years . Schizophrenia-type disorder was the most common type of SMI with 44•7 % of participants having this diagnosis. The majority of the sample were outpatients , and the lowest income group was the most common . --- Prevalence of obesity and overweight The overall prevalence of obesity across the three countries was 16•0 % according to the WHO international BMI cut-offs and 46•2 % according to the WHO Asian cut-offs. The overall prevalence of overweight across the three countries was 30•2 % according to WHO international BMI cut-offs and 17•3 % according to Asian cut-offs. The overall prevalence of abdominal obesity was 53•8 %; however, differences were observed between sexes and countries. Abdominal obesity was less prevalent in men from Bangladesh compared to India and Pakistan despite relatively similar prevalence found amongst the female participants in each country. 
--- Overall and stratified prevalence of BMI categories According to the Asian cut-offs, the largest proportion of participants were classified as having obesity, compared to the international cut-offs, where the largest proportion were classed as having normal weight . --- Association of predictors of overweight and obesity The multinomial logistic regression analyses for the predictors of overweight and obesity are shown in Table 4a and Table 4b . The relative risk of having obesity is double in women compared with men . The percentage of participants with overweight was lower in females than in males, whilst the percentage with obesity was higher in females than in males. The percentage of patients with overweight or obesity was higher in females (917/1266) than in males. Compared to 18-24-year-olds, the 40-54-year age group has the greatest relative risk of having obesity . The relative risk of having obesity in current smokers was lower than in non-smokers, but the relative risk of being overweight did not differ by smoking status . Participants meeting WHO recommendations for physical activity had a 21 % lower risk of having obesity than the less physically active group . In contrast, participants who met the WHO recommendations for fruit and vegetable intake had 2•53 times greater risk of having obesity compared to those not meeting the recommendations. Participants with pre-diabetes, type 2 diabetes, hypertension, high cholesterol, and high triglycerides all had an increased relative risk of having obesity compared to normal weight . The largest relative risk ratio for obesity was seen in participants with hypertriglyceridaemia , and this was the only comorbidity for which the relative risk ratio of being overweight was also significant. In separate models, interaction terms between country and each variable were included. 
Only age group and high triglycerides were identified as having significant interactions with country. --- Association of determinants with abdominal obesity As seen in Table 5, the odds of having abdominal obesity were 3•79 times higher in women compared with men, and the odds increased with increasing age. The odds of having abdominal obesity were also greater with longer SMI duration and in those on antipsychotic medication. Additional models were fitted including an interaction term between country and each variable. When compared to a model with no interactions, the likelihood ratio test identified sex, age group, level of education, work status, income, and high triglycerides as having a significant interaction effect with country. --- Discussion Obesity is a major public health problem in people with SMI regardless of whether international or Asian-specific thresholds for obesity are used, especially considering its association with other chronic conditions. The prevalence of obesity varied according to SMI diagnosis and the sociodemographic characteristics of participants. The prevalence of obesity was considerably lower in Bangladesh than in India and Pakistan, which mirrors the pattern in the general population. This may be related to lower income and education in the population in Bangladesh, as suggested by other studies. It illustrates the complexity of the interplay between socioeconomic and physical determinants of obesity, and how other factors such as age may be more influential, as the mean age of people with SMI was lowest in Bangladesh. Although the psychiatric inpatient setting is considered obesogenic, participants in the outpatient setting were more likely to have obesity. Inpatients are more likely to have a refractory degree of SMI, which leads to more severe symptoms such as catatonia and, in turn, malnutrition. The resulting absence of physical activity can lead to a reduction in bone and muscle density, which has been associated with underweight.
The higher risk of obesity in women is in line with global trends and is likely driven by socio-cultural factors such as urbanisation, where there has been a clear shift in LMICs from agricultural labour to wage labour, which is usually more sedentary. The persistent disparity in male and female employment rates, however, shows that women are still more likely to be unemployed and to occupy household roles, which negatively affects the physical activity of women more than men. Clinical studies show that females on antipsychotic medication gain more weight than males. Furthermore, on a physiological level, women are more susceptible to weight gain due to their fat distribution, and their neural responses to food-related stimuli are more positively correlated with BMI. Contrary to high-income countries, where the poorest are at higher risk of obesity due to poor diet and unhealthy lifestyle, we found that the more affluent had an increased risk of obesity. Poorer people in LMICs tend to be engaged in more manual and physically demanding labour, which increases energy expenditure. Our study also found that those with a higher level of education were more at risk of obesity than those with no formal education, which may indicate that food literacy does not track educational attainment. Research has shown that in more developed countries education can offset the obesogenic effects of increased wealth; in LMICs, however, no interaction was seen between these factors, and both were independently and positively correlated with BMI. This suggests that in HICs the more affluent are more likely to purchase better quality, healthier foods, whereas in LMICs the wealthiest have greater access to all foods and may gain more weight. This is supported by the finding that participants who met WHO recommendations for fruit and vegetable intake had more than double the risk of obesity compared to those not following this guidance.
It is likely that those who can afford to buy and eat more than five fruits or vegetables a day are also those who can afford more food overall, which increases the relative risk of obesity. The three countries are highly dependent on cereal-based diets; in the public distribution systems, cereals are available below the market price, which in turn increases the consumption of carbohydrates, especially among people of low socioeconomic status. So, although greater vegetable consumption is generally associated with better health outcomes, it should be considered in the context of the whole diet rather than its individual components. Further research should investigate all dietary components. The results show that people with bipolar disorder are at greatest risk of obesity, which is mirrored in the global literature, where the SMI subgroup with the highest prevalence of obesity is bipolar disorder. This is possibly because people with bipolar disorder can experience periods of severe depression, which are associated with weight gain, similar to those with major depressive disorder; however, they are also likely to experience obesogenic side effects from antipsychotic medication and mood stabilisers. Similar to the general population, obesity was associated with higher relative risk of diabetes, hypertension, high cholesterol, and hypertriglyceridaemia. These are all considered key features of metabolic syndrome, which is associated with a three times greater risk of cardiovascular disease and a five times greater risk of developing type 2 diabetes, helping to explain why people with SMI have a 53 % higher risk of developing cardiovascular disease. Obesity was associated with hypertriglyceridaemia, which is considered the hallmark of dyslipidaemia and possibly the major cause of all other lipid abnormalities seen in this BMI range. Better screening of lipid abnormalities in people with SMI is required to identify those at risk of dyslipidaemia in this population.
Also, obesity was associated with pre-diabetes, which supports the theory of obesity being a strong determinant of pre-diabetes due to the vital role of adipose tissue in systemic insulin resistance. As in the general population, smoking is likely to decrease the risk of obesity due to the appetite-suppressing effects of nicotine. However, chronic smoking is still considered an important modifiable risk factor with regard to the excess mortality of people with SMI, due to its impact on the cardiovascular system through atherosclerosis and the increased risk of lung cancer and chronic obstructive pulmonary disease. Smoking cessation interventions used in HICs have been less successful in LMICs, hence the urgent need for culturally relevant interventions to be developed. --- Strengths and limitations There are limitations that deserve further attention. First, due to the cross-sectional nature of the study, it is not possible to determine the causality of the associations. Second, we found a lack of standardisation in the HbA1c laboratory analysis across the sites, although in each country we used a laboratory employed in routine clinical practice. Further research is needed to investigate the complexities of HbA1c measurement and how this affects the prevalence of diabetes across these countries. Third, there was considerable variability in the classification of SMI diagnosis across the different countries, and we found that far more people were diagnosed with major depression with psychotic features using the MINI v6.0. We cross-checked this against the self-reported diagnoses and found that the vast majority matched, which suggests that assessor error was unlikely. Fourth, the sample was drawn exclusively from a tertiary centre cohort rather than from the community, which may have implications for the interpretation of the results.
However, a community survey would be prohibitively resource-intensive; moreover, patients at these centres are likely to be similar to those in community, primary, or secondary care, as the centres serve as 'walk-in' and first-point-of-access services in the absence of any community mental healthcare. Despite these limitations, the cross-sectional survey spanned three countries and recruited nearly 4000 participants, providing good levels of precision for a population that is often neglected in this area of research. The study also included participants with all forms of SMI, providing evidence about the prevalence of obesity in each type of severe mental disorder. By including analyses using the Asian cut-offs for BMI, we have made our results easily comparable to other literature from South Asia. --- Conclusion There is a high prevalence of obesity in the SMI population in Bangladesh, India, and Pakistan. Obesity was associated with chronic disease in this population and, contrary to HICs, people with higher income and higher levels of educational attainment were at greater risk. Food literacy may not correlate with healthier dietary choices, and so better dietary education should be prioritised for people with SMI at all levels of educational attainment. People with SMI and obesity could benefit from screening programmes for non-communicable diseases and context-appropriate lifestyle interventions to prevent and treat obesity. We have identified the population at higher risk of obesity, which provides useful information for intervention development; however, more research is required to identify the key barriers to a healthy lifestyle in this population. --- Supplementary material The supplementary material for this article can be found at https://doi.org/10.1017/jns.2023.100.
Obesity is one of the major contributors to the excess mortality seen in people with severe mental illness (SMI), and in low- and middle-income countries people with SMI may be at an even greater risk. In this study, we aimed to determine the prevalence of obesity and overweight in people with SMI and investigate the association of obesity and overweight with sociodemographic variables, other physical comorbidities, and health-risk behaviours. This was a multi-country cross-sectional survey study where data were collected from 3989 adults with SMI from three specialist mental health institutions in Bangladesh, India, and Pakistan. The prevalence of overweight and obesity was estimated using Asian BMI thresholds. Multinomial regression models were then used to explore associations between overweight and obesity with various potential determinants. There was a high prevalence of overweight (17•3 %) and obesity (46•2 %). The relative risk of having obesity (compared to normal weight) was double in women (RRR = 2•04) compared with men. Participants who met the WHO recommendations for fruit and vegetable intake had 2•53 (95 % CI: 1•65-3•88) times greater risk of having obesity compared to those not meeting them. Also, the relative risk of having obesity in people with hypertension was 69 % higher than in people without hypertension (RRR = 1•69). In conclusion, obesity is highly prevalent in SMI and associated with chronic disease. The complex relationship between diet and risk of obesity in this population warrants further investigation.
Introduction The link between school failure and life course failure is well established in the research literature. The risk not only for outcomes such as poverty, social exclusion, and ill health but also for crime and delinquency is dramatically higher among youth who exit education before having reached an upper secondary/high school diploma. Sweeten, Bushway, and Paternoster reported that in the U.S. high school dropouts are more than 70 percent more likely to be unemployed than high school graduates and that their annual income is on average substantially lower than that of graduates. Their health is worse and, not least, they commit more crime. These observations from the U.S. are in all important respects repeated in Europe. For example, in Sweden high school dropouts are much less likely to be able to support themselves from market income, they have a mortality risk three times that of graduates, and are five times as likely to have been sentenced to prison by the age of thirty. Although the claim that there is a link between school dropout and criminal behavior is uncontested, the causal direction of this relationship is less evident. Crime and delinquency are known to increase the risk for school dropout, which in turn may promote further delinquent behavior. Sweeten, Bushway, and Paternoster have made the most serious attempt to date at isolating the effect of high school dropout on subsequent criminality in the U.S. The general conclusion from that study was that there is no such effect, except a small crime-inducing effect in some very specific cases. The purpose of this paper is to revisit the issue of an independent effect of high school dropout on criminal behavior. In a sense, the paper takes off where Sweeten and colleagues stopped and follows their recommendation to use matching methods to reanalyze this relationship.
However, the paper also expands the analysis to what happens after dropout in terms of resource attainment and how that is linked to the continuation and disruption of criminal careers. Thus, by means of propensity score matching, the paper tries to answer the question, "Is high school dropout independently linked to subsequent criminal behavior, or are both just part and parcel of an already unfavorable life career?" On the basis of the results from the matching analysis, we turn to the question, "What role does resource attainment after dropout play for the dropout-crime link?" This is analyzed by means of event history analysis. --- Education, school attendance, and delinquency: previous research The bulk of previous research on the link between educational achievement and crime is-explicitly or implicitly-concerned with the causal direction from education and/or school attendance to crime. Most of the more ambitious studies in this regard are by economists exploiting experiment-like situations. For example, Machin, Marie, and Vujić analyzed the effect of a school reform which raised the school-leaving age in England and Wales and found a significant reduction in property crime as a result of the reform. Hjalmarsson, Holmlund, and Lindquist evaluated the effect of the increase from seven to nine years of compulsory schooling in Sweden on the risks for convictions and incarceration, both of which were substantially reduced. Meghir, Palme, and Schnabel analyzed the same reform and also found a spillover, crime-reducing effect on the offspring of those actually targeted by the reform. The authors attributed these results to increased household resources and better parenting. In the U.S., Jacob and Lefgren and Luallen estimated the incapacitation effect of schools using information on teacher in-service days and teacher strikes, respectively, and found reductions in property crime rates but increases in violent crime on school days.
Similar results were obtained by Berthelon and Kruger, who analyzed the effect of increasing the length of school days in Chile. Åslund et al. found an incapacitation effect on property crime from a trial period in which vocational tracks in upper secondary school were increased in length from two to three years in some Swedish municipalities; however, they found no effect on violent crime and no long-term effects. Other, more variable-oriented approaches include Kim and Clark, who used propensity score matching techniques to isolate the effect of educational attainment on criminal behavior in data from New York State. These authors did not, however, analyze high school education but rather the effect that achieving an in-prison college education has on recidivism. They found a small but significant reduction in the risk of recidivism from achieving a college education while incarcerated. Analyzing four waves of the U.S. National Longitudinal Survey of Youth, Mowen and Brent showed how school suspension significantly increases the odds of arrest. Aaltonen, Kivivuori, and Martikainen analyzed the hazard of criminal conviction in a large sample of Finns and found an independent crime-reducing effect of graduating from high school, controlling for a range of potential confounders. Bäckman and Nilsson found, in a structural equation model on Swedish birth cohort data, that poor educational achievement in adolescence increases the risk for "deviant behavior," which in turn increases the risk for educational failure in early adulthood. Savolainen et al., using Finnish data, identified adolescent educational marginalization as a key factor linking childhood socioeconomic status to the risk of criminal offending in early adulthood.
Of studies engaged with the opposite causal direction, one of the most prominent analyses is that by Kirk and Sampson, in which PSM was employed to isolate the effect of juvenile arrests on the risk of dropping out of high school and on the chances of college enrollment among Chicago students. They found a large and robust effect of arrests on the high school dropout risk and a significant effect on four-year college enrollment. Sweeten and Hjalmarsson obtained similar results on nationally representative U.S. data. However, Hjalmarsson's results suggest a more robust effect of incarceration than of arrests. Sweeten, Bushway, and Paternoster listed a number of studies explicitly focused on the effect of school dropout on criminal behavior. The results from these are mixed. Some found that criminal offending declines after dropout, while others, with longer follow-ups, found increasing crime rates after dropout. Some studies emphasize that the reason for dropping out of school may influence subsequent delinquency. In one of these, Jarjoura distinguished between various reasons for dropping out of high school and between various types of offences. The analyses included controls for a wide range of potential confounders, including previous arrests. He found that the reason for dropping out matters for the link between dropout and crime. For example, dropping out because of pregnancy, getting married, or dislike of school was linked to increased risks for violent offending, whereas dropping out because of expulsion or for "other" reasons increased risks for theft and selling drugs. In a later article, Jarjoura specified his analyses by examining whether the relationship between dropout and crime was conditioned by socioeconomic origin. In some instances, it was.
Dropping out for school reasons or for personal reasons increased the risk for violent offending among upper-status youth, while dropping out for economic reasons reduced the risk for theft offences among lower-status youth. Sweeten, Bushway, and Paternoster suggested that the identity associated with expected destinations after dropout also has implications for the dropout-crime link. For example, dropouts who expect to move on to positive identities such as permanent employment or marriage would not be at a greater risk for criminal activity. However, when fitting quasi fixed-effects models to control away selection factors for dropout and crime, they found virtually no support for the hypothesized relationships. The only instance in which some support was found was an indication that the small group of males who drop out of school for economic reasons decrease their delinquency, albeit only for a short period of time. Thus, to summarize, it seems evident that in a broad sense the link between education and crime is strong, whereas the evidence of a causal effect of high school dropout on crime is mixed. With regard to education in general, a causal effect on crime can be disputed. The evidence of an incapacitation effect of school on property crime seems fairly robust, whereas the longer-term "human capital effect" is more difficult to establish when sophisticated methods for handling selection are used. The fact that the most serious attempt hitherto to isolate the effect of dropout on crime failed to do so could suggest that the case should thereby be closed. However, not even these authors were ready to do so. They pointed to the importance of future research focusing on the actual post-dropout experience. With the data available for the analyses in this article, we are able to follow up on dropouts a couple of years after the dropout event, making it possible to identify actual destinations and resource attainment during the post-dropout period.
The analysis focuses on the effect of finding a foothold in either the labor market or education. Thus, we are also able to account for the fact that many dropouts re-enter high school. --- Theoretical considerations: social control, strain, and resource attainment The theoretical starting point of most studies by criminologists on the dropout-crime link has been strain theory and/or social control theory. Briefly put, social control theory suggests an increase in delinquent behavior as a consequence of dropping out. This is because individuals' natural inclination to criminality is inhibited by social bonds, and for a teenager school would be among the most important providers of such bonds. Thus, all else being equal, dropping out would result in reduced social bonds and, hence, criminal behavior would increase. Sampson and Laub put much less emphasis on the inclination to commit crime in their "age-graded theory of informal social control". Nevertheless, this is in many senses a development of Hirschi's original, emphasizing how providers of social bonds change across the life course, thus claiming that if the social bonds in school are replaced by bonds in another setting-such as the workplace-the probability of committing crime is again inhibited. In a similar fashion, Sweeten, Bushway, and Paternoster hypothesized that if an individual after dropping out moves on to a new positive social identity, such as "worker" or "parent," new social bonds are likely to emerge that again will reduce the risk for delinquent behavior. However, if an individual after dropping out becomes, for example, long-term unemployed, no new positive social identity or social bonds will be formed and, hence, the risk for delinquent behavior will increase. As already noted, Sweeten, Bushway, and Paternoster found only limited support for this position.
However, the crime-reducing effect of school attendance found in several of the studies referred to above points in the direction of social control theory. Nevertheless, some findings in this vein of research also speak against the theory, such as the finding that it is primarily property crime which is reduced by school attendance, whereas violent crime either increases or is not affected at all. Strain theory, too, suggests an increase in criminality after dropping out of school. However, here the mechanisms are different. Basically, Merton's original work suggested that it is the frustration produced by a mismatch between available means and aspirations that induces criminal behavior. Thus, the theory claims that the fewer the means, the higher the inclination to commit crime, provided aspirations are fairly similar across groups, and in fact they seem to be. Obviously, this fits well with one of criminology's most stable facts: the negative correlation between criminal behavior and socioeconomic status. Another school of thought which also fits well with this stable fact is the resource perspective and the life course theory of cumulative advantage and disadvantage. In many respects, this perspective is similar to strain theory, primarily in the sense that it emphasizes shortage of means/resources as the prime mover of the relationship between, in this case, school dropout and subsequent criminal acts. Life course theory posits that individuals construct their own life course through their choices and actions, but within the constraints of historical and social circumstances. People are regarded as active agents whose access to resources, and capacity to make use of them, determine their levels of opportunity and chances in life. For instance, poverty during childhood affects educational achievement, health outcomes, and delinquency in adolescence, which in turn affect the risk for low-paid jobs, unemployment, and, ultimately, social exclusion in adulthood.
Available resources determine the level of opportunity at different stages during the life course. Thus, the mechanism suggested here is the mere number of alternative pathways: the narrower the range of opportunities people are exposed to, the more likely they are to end up in criminal activity. Such a process, in which each case of resource deficiency or disadvantage leads to additional negative consequences, has been labelled "cumulative continuity" or "cumulative disadvantage". Obviously, this school of thought is easy to combine with strain theory, by suggesting that the reason why fewer opportunities lead to an increased inclination to commit crime is the frustration created by comparing one's life chances to others with a greater number of possible pathways. --- Dropout, Resources, and Delinquency Sweeten, Bushway, and Paternoster implicitly point in the direction of the cumulative disadvantage and resource perspectives when they comment on the finding that the observed first-order correlation between delinquency and dropout is driven by time-stable differences between those who drop out and those who do not, by suggesting that "concern about the event of dropout may be misplaced. Instead attention must be focused on the process that leads to dropout and criminal involvement . . .". However, the present article takes a step further by directing particular attention to the importance of actual resource attainment after dropout. Thus, it is argued in accordance with Sweeten, Bushway, and Paternoster that the post-dropout destination is indeed important for the dropout-crime link, but not primarily because of the new identity and the new social bonds reached, but rather because of the resources linked to these destinations. We concentrate on the resources created by labor market attachment, participation in education, and graduation from high school. We claim that when a young person drops out of school, a valuable resource is withdrawn.
In Sweden, as in most other Western countries, educational failure is a strong predictor of precariousness-for example, social exclusion-across the life course. In a recent government report, school failure was pointed out as the single most important predictor of inactivity among Swedish young adults during the 2000s. In a life course perspective, the withdrawal of a resource has long-term implications unless the loss is compensated for by other forms of resources or the lost resource is regained. However, if a resource is lost at one point in time, it will put that person in a worse position than those who have gained the same resource but otherwise are at the same resource level, simply because the available opportunities will be fewer. For example, if someone drops out of school but manages to find a job, his or her prospects will still be poorer relative to those of the workmates who did not drop out, but compared to fellow dropouts who did not find a job, he or she will be better off. Thus, the prediction we make from these theoretical perspectives is that high school dropout increases the risk of committing crimes. However, regarding what happens after dropout, the predictions differ slightly. Thus, we firstly hypothesize that high school dropout increases the risk of offending. Secondly, with reference to the example in the previous paragraph, from both strain theory and the resource perspective we would expect that being occupied reduces the risk of offending but has no implications for the effect of dropout on crime. On the other hand, a competing hypothesis would claim that if new social settings, such as a workplace, create new social bonds which replace those lost from dropping out of high school, we would instead expect, according to social control theory, that being occupied reduces not only the risk of offending, but also the effect of dropout on crime.
Finally, put in the context of life courses, the resource perspective and-indirectly-strain theory claim that resources lost along the way can be regained. Thus, we expect that if the lost educational resource among dropouts is regained, the initial dropout effect disappears or is at least heavily reduced. --- The Swedish context This section briefly describes three societal areas in Sweden of importance for the topic of the present article: youth crime and the institutional setup for societal reactions; the educational system, with special focus on the upper secondary level; and the social structure in terms of inequality, the welfare state, and the youth labor market. --- Youth Crime and Societal Reactions There is no separate juvenile court system in Sweden. All youths whose cases go to court are thus dealt with by the same district courts that deal with adult offenders. There are a number of different sanctions that may be awarded to youths, ranging from prosecution waivers and summary sanction orders to prison. However, prison sentences for youths aged 15-20 are extremely rare. The most frequent reactions to crime at these ages are prosecution waiver and summary sanction order, awarded for minor crimes. Although prosecution waivers and summary sanction orders do not pass through court, they are still registered as convictions, since the question of guilt is regarded as settled. --- The Educational System Swedish compulsory school is nine years long. The standard age for finishing is 16. High school is voluntary, but after compulsory school nearly all students move on to the upper secondary level. High school education is a legislated right. However, access to the regular national programs is restricted.
The regulations which applied to the cohorts in focus here implied that those who did not meet the requirements for access to a national program were offered a slot in the "individual program," which aimed to prepare students for a transfer to one of the national programs. In each birth cohort, 5-10 percent were enrolled in the individual program. Since this program picked up less motivated students and students with lower cognitive ability, dropout rates were particularly high. The national programs contain both vocational and academic tracks. For the cohorts in focus, dropout rates in vocational tracks were about 25 percent and in academic tracks around 13 percent. Approximately half of all dropouts manage to achieve a high school diploma by re-entering school. The national programs are three years long. --- Inequality, Welfare State, and Labor Market Comparatively low degrees of income inequality and relative poverty have long been a salient feature of Sweden and the other Nordic countries. In the late 2000s, the Swedish Gini coefficient of equivalized disposable household income was 0.26. The corresponding figure for the U.S. was 0.38 and the OECD average was 0.31. The economic downturn of the early/mid 1990s struck Sweden particularly hard, with a rapid increase in unemployment. Youths were a group particularly hard hit by the unemployment crisis: the youth unemployment rate increased by a factor of six over the course of a few years, from 3.0 percent in 1990 to 18.1 percent in 1993. Starting during the latter half of the 1990s, the Swedish economy recovered surprisingly quickly, but unemployment figures have never returned to pre-crisis levels. Thus, Sweden still struggles with comparatively high unemployment rates, among youths in particular. Moreover, those who left school early constitute a group identified as experiencing particularly serious difficulties in finding a foothold in the labor market during the 2000s.
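The Gini coefficients quoted here are straightforward to compute from an income vector; a minimal sketch using the standard sorted-vector identity, with hypothetical incomes (not the actual Swedish, U.S., or OECD microdata):

```python
def gini(incomes):
    """Gini coefficient of non-negative incomes, via the identity
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)) over the sorted vector."""
    xs = sorted(incomes)
    n = len(xs)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * sum(xs))

# Perfect equality -> 0; all income held by one person in a group of four -> 0.75
print(gini([20, 20, 20, 20]), gini([0, 0, 0, 80]))  # 0.0 0.75
```

Official statistics apply this to equivalized disposable household income, i.e. income adjusted for household size before the coefficient is taken.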
Despite these difficulties, Sweden is still among the countries with the lowest levels of inequality. Inequality of opportunity, as measured by intergenerational correlations of income, seems to be low as well, at least as compared to the U.S. . So, if cumulative disadvantage theory is correct and if it is true that welfare state arrangements may counteract its proposed mechanisms, for instance, by redistributing resources and thereby reducing inequality of both outcomes and opportunities, we could claim that Sweden provides a conservative test case. --- Methodological Considerations: data, measurements, and analytical strategy --- Data and Measurements Available data comprise all persons born in 1980 and 1985 who were resident in Sweden at the age of 16 . The data extend through the year 2010 and have been compiled by combining information from Statistics Sweden's LISA database, the In-Patient Discharge Register at the National Board of Health and Welfare, the Convictions Register at the National Council for Crime Prevention, and student registers from the National Agency of Education . The data set includes information on criminal convictions, incomes, school results, educational level, hospital care , and demographic variables. Much of this information is also available for the cohort members' parents. Thus, registered convictions are used for measuring criminal activity, which means that to be included in the data a criminal act must become known to the judicial system and the question of guilt must have been settled, either by a court or, in the case of prosecution waiver and summary sanction orders, by a prosecutor. Compared to self-reported crime, registered convictions will on average include a greater share of more serious crime, but petty crime still dominates . 
On the other hand, the fact that the information comes from public records virtually eradicates the issue of non-response, a problem which can be particularly challenging when analyzing marginalized groups using survey data. Swedish police and prosecutors are bound by the legality principle, meaning that they must arrest and/or prosecute whenever they suspect a crime has been committed, but conviction rates across cohorts may still depend on, e.g., police activity and the inclination of the public to report crime to the police. However, this is unlikely to change much between two cohorts who only differ by five years of age, as in our analyses. Nevertheless, cohort membership is included as a variable in the initial logit regressions. As mentioned above, nearly all students who graduate from compulsory school move on to high school at the age of 16, that is, in 1996 and 2001 for the two cohorts, respectively. For reasons of privacy, the data do not include actual birth dates, which means we do not know the day individuals turned 15 and became criminally responsible. For this reason, we start counting convictions from January 1 of the year they turn 16. In most cases, conviction data include the date of the crime; in those cases for which this information is missing, the crime date has been set equal to the conviction date. 4 For the sake of simplicity, the date of crime has been recoded to the month of crime. School data include information about whether a person is registered as a high school student, which program, and which educational year he or she is in, in October each year. This information is used to identify dropouts. Someone who is registered as a student one year but not the year after is coded as a dropout, provided no diploma has been gained. To enable matching, a "dropout window" corresponding to October year 2 through October year 3 was established. This means that in the analyses dropouts in this period will be compared to non-dropouts during the same period.
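The dropout coding rule just described can be stated compactly. The sketch below is purely illustrative of the rule (the variable names are hypothetical, not taken from the actual register extraction):

```python
def coded_as_dropout(registered_oct_y2, registered_oct_y3, diploma_gained):
    """A cohort member is coded as a dropout if he or she was registered
    as a high school student in October of year 2 but not in October of
    year 3, provided no diploma has been gained in the meantime."""
    return registered_oct_y2 and not registered_oct_y3 and not diploma_gained
```

Students who graduate between the two October registrations thus never enter the dropout category, and students never registered in October of year 2 are not at risk of dropout in this window.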
A total of 3,266 male students and 3,198 females dropped out in this observation window. A total of 101,706 males and 96,562 females were at risk of dropout, i.e. they were registered as high school students in October year 2. An important reason for choosing this particular observation window is that some time needs to pass after January of the year cohort members turn 16 in order for us to be able to measure convictions before dropout as well. One consequence of this, of course, is that those 1-2 percent who never start high school and those who drop out in year 1 of high school are not included in the analyses, and that dropouts after the dropout window are not treated as dropouts. We know that the earlier the dropout, the poorer the prospects for certain events, such as finding a job. Since we both exclude a group with poorer prospects from the dropout category and define a group of actual dropouts with better prospects as non-dropouts, the implication for the findings is that we probably underestimate the effect of dropout on crime slightly. However, we do include high school graduation as a time-varying covariate in the final event history analyses, which will capture both later dropouts and "stopouts", that is, dropouts who return and finally graduate. As will be further described below, the first step of propensity score matching is a logit regression in which the log odds for "treatment", in this case dropout during the dropout window, is the dependent variable. This regression analysis should include as many potential predictors as possible.
Here the following variables are included: the average monthly conviction rate prior to the dropout window divided into two variables, one covering the first 12 months from January of the year the cohort member turns 16, and the other covering the remaining 9 months before the dropout window; final grades from compulsory school divided into four groups ; family type of the household the cohort member lived in at age 16 ; low parental educational level ; ethnic background ; means-tested social assistance received by parents when the cohort member was 8-17 years of age; an indicator of parent's custodial sentence when the cohort member was 0-17 years of age; a dummy variable indicating whether the cohort member ever stayed in a hospital before the dropout window; birth cohort; high school track, separated into individual program, academic programs, and vocational programs. The aim of including these predictors is to cover important clusters of life course risk factors before the dropout window. The factors cover socio-demographic background factors, social problems in the family of origin, school results and school choice, health, and criminality, and are entered as dummy variables in the regression models . Factors measuring aspects of these clusters have been shown to fit well with a notion of cumulative disadvantage where disadvantage is a function of previous disadvantages . The final step of the analyses is an event history regression analysis in which only the matched sample from the PSM is used to estimate the effect of resource attainment after the dropout window on the hazard rate of further convictions. Since we use the matched sample, we need not include any of the independent variables from the initial logit regression. Instead we include only three covariates in these models. Firstly, we include a dummy variable indicating dropout or non-dropout during the dropout window. 
Secondly, we include a time-varying covariate indicating whether or not the cohort member achieved a high school diploma, either by returning to regular high school or by turning to adult education. The third factor is a time-varying covariate indicating labor market attachment. It is divided into four categories: the core labor force , capturing those with enough labor market earnings to support themselves during a year; unstable labor force , capturing those with some earnings though less than the CLF; students, which captures those registered as students and earning less than the CLF; and inactive, capturing those with virtually no labor market income who are not registered as students. 5 Both of these time-varying covariates build on information aggregated by year whereas the outcome in these analyses is measured on a monthly basis. Thus, we need to decide on a month in which any change in these factors occurs in data. Any such decision is bound to be arbitrary and here changes are set to June of the year a change occurs. --- Analytical Strategy To isolate the effect of dropout on crime, PSM is employed. To perfectly assess the effect of dropout on the risk of future convictions, we would have needed to randomly assign individuals to dropout status. Since this is not possible, we instead apply PSM in order to produce comparable groups of dropouts and non-dropouts. With PSM, we estimate the probability of dropping out during the dropout window by means of logistic regression using the observed characteristics listed above. The propensity score obtained is then used to match the "treatment group" with "untreated" social twins who, based on the propensity score, were predicted to drop out but did not. There are a number of alternative methods for matching. 
For the analytical strategy outlined in the present article, the most straightforward technique would be 1-to-1 nearest neighbor matching, in which the nearest neighbor, in terms of propensity score, is chosen as a match. However, the most important criterion for choosing a matching algorithm is that the observed covariates are balanced after matching. Thus, there is a need to evaluate other matching techniques as well, to ensure covariate balance. Nearest neighbor matching is often performed using a caliper, which involves specifying a limit for how much the propensity score can differ within a pair. The drawback of 1-to-1 nearest neighbor matching with a caliper is that the use of the technique involves a risk of losing observations, which in some instances makes the results less efficient. Given the fairly large sample used in this study, however, this risk is reduced. 6 The propensity score is a "balancing score," which refers to the need for a similar distribution of observed covariates between treated and untreated subjects who have similar values on the propensity score. This assumption needs to be carefully investigated to ensure the comparability of cases. The most important output produced by PSM methods is most commonly the average effect of treatment on the treated (ATT), which is simply the outcome difference between the matched treatment and non-treatment groups. Despite the convincing results reported in the seminal article by Rosenbaum and Rubin, indicating that PSM does a very good job of mimicking truly randomized designs, criticism has been directed at the belief that this method can replace randomization. Not least, it has been shown that PSM is sensitive to the set of variables included in the regression analysis used to estimate the propensity scores.
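The logic of 1-to-1 nearest neighbor matching with a caliper, and the resulting average effect of treatment on the treated, can be sketched as follows. This is a toy illustration under the assumption that propensity scores have already been estimated; all names and numbers are hypothetical, and it is not the actual estimation pipeline used in the study.

```python
def nearest_neighbor_att(treated, controls, caliper=0.05):
    """1-to-1 nearest-neighbor matching on the propensity score,
    without replacement and with a caliper. `treated` and `controls`
    are lists of (propensity_score, outcome) pairs; returns the
    average effect of treatment on the treated, or None if no
    treated unit could be matched within the caliper."""
    available = list(controls)
    differences = []
    for ps, outcome in treated:
        if not available:
            break
        match = min(available, key=lambda c: abs(c[0] - ps))
        if abs(match[0] - ps) <= caliper:  # enforce the caliper
            available.remove(match)        # matching without replacement
            differences.append(outcome - match[1])
    return sum(differences) / len(differences) if differences else None

# Two treated units, each matched to its nearest control within the caliper:
att = nearest_neighbor_att(
    treated=[(0.60, 1.0), (0.70, 1.0)],
    controls=[(0.61, 0.0), (0.69, 1.0), (0.20, 0.0)],
)
```

The sketch also exhibits the drawback noted above: treated units with no control inside the caliper are simply dropped, which is how observations can be lost.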
Thus, there is always a risk that the conditional independence assumption is violated, that is, the assumption that there are no unobservables that can bias the probability of treatment and the effect of treatment on the outcome. However, there are techniques available to simulate potential confounders and how these could affect the robustness of results. The outcome in the PSM analyses is a dummy variable indicating any conviction during a follow-up period of 26 months after the dropout window. This is a static outcome which does not capture processes after exposure. However, an important contribution of this study is the emphasis on post-dropout events. In order to evaluate how destinations and resources gained after dropout impact the risk of offending, event history analysis on the matched sample is utilized. In event history analysis, we analyze the risk of failure across time under risk. In this study, failure is the first criminal conviction and time under risk is the time passed since the end of the dropout window. The piecewise constant exponential model is used to estimate the hazard rate. In this model, exposure time is included as a dummy variable for each time period, in which the hazard rate is assumed constant, but is allowed to vary between the specified time periods. In the analysis below, the baseline hazard is allowed to vary every second month during the first year, every fourth month during the second year, and thereafter every sixth month. The follow-up period ranges from the end of the dropout window through the year the cohort member turns 24. Observations with no conviction by December of that year, or who emigrate or die before then, are right-censored. --- Results --- High school Dropout and Criminal Convictions: Descriptives The graphs in Figure 1 show the monthly conviction rates among dropouts and non-dropouts before and after the dropout window in the two birth cohorts in the data.
In these graphs, dropouts are defined as those who dropped out during the "dropout window." Overall conviction rates are greater among dropouts than among non-dropouts for both females and males. The monthly rates are lower after the dropout window than before it for both groups and both sexes, as we would expect from the usual age-crime curve, which in Sweden peaks around age 17. However, since we look at monthly conviction rates, the pattern for women is strikingly unstable despite the fact that the graph shows three-year moving averages. This is, of course, due to the low conviction rate among women. On average, the absolute decline is greater among dropouts, whereas the relative decline is greater among non-dropouts. -Figure 1 about here - --- High school Dropout and Criminal Convictions: Propensity Score Matching Taken together, the graphs in Figure 1 underscore the importance of taking selection into the dropout group into account, the initial conviction rate in particular, when evaluating its effects. As discussed above, the approach chosen to accomplish this here is PSM. These analyses are run separately for men and women. Since the logit regression models are merely tools to produce propensity scores, the results from the regression analyses are not reported, but are available upon request. Briefly stated, however, the regression results generally point in the expected direction. Factors indicating resource deficiency prior to the dropout window predict a higher propensity to drop out. There are some gender differences. Means-tested social assistance received by parents is a more important predictor of men's dropout risk, whereas ill health, as indicated by hospital stays, cohort membership, and high school track are more important predictors of women's dropout risk. Previous criminal convictions are particularly strong predictors of both male and female dropout risks, with odds ratios around four.
The aim of the PSM analyses is to isolate the effect of high school dropout during the dropout window on subsequent conviction risks. Here 1-to-1 nearest neighbor matching is employed. Other matching procedures, such as Kernel and radius matching, produce virtually identical results. All actual dropouts resulted in a match within common support . 7 Table 1 shows the result from the PSM analysis for men and women separately. Here the treated are compared with a control group of "social twins" who-with respect to the observed variables included in the regression analysis-are estimated to have a similar propensity to drop out, but did not. -Table 1 about here - The outcome in the PSM analysis is the likelihood of criminal convictions during a follow-up period of 26 months after the dropout window. The table shows the difference between the treated and controls, both before and after matching. The difference between unmatched men is particularly high. It is heavily reduced after matching, but still statistically significant. The initial difference between unmatched treated and controls among women is less substantive, and after matching the difference is no longer statistically significant. This suggests that dropping out of high school has an independent crime-inducing effect among Swedish young men while no such effect is evident for women. Even though female dropouts have a higher conviction rate than non-dropouts, the dropout event per se does not add to this difference. Other aspects of resource deficiencies and cumulative disadvantage seem more important for the selection of young Swedish women into criminality. As mentioned above, although matching is based on the predicted probability from the logistic regression, it is assumed that values of the variables included in the regression model are balanced, namely, that they are reasonably equal between the treated and controls. The literature suggests that a percent bias below ten indicates balance . 
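The standardized percent bias used as the balance criterion above can be written out explicitly. The sketch below is an illustrative Python version of the conventional formula (function and variable names are hypothetical):

```python
import math

def percent_bias(treated_vals, control_vals):
    """Standardized percent bias for one covariate:
    100 * (mean_T - mean_C) / sqrt((var_T + var_C) / 2).
    Absolute values below 10 are conventionally taken to
    indicate balance."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled_sd = math.sqrt((var(treated_vals) + var(control_vals)) / 2)
    return 100.0 * (mean(treated_vals) - mean(control_vals)) / pooled_sd
```

Identical covariate means in the matched treated and control groups give a bias of zero; in the present analyses no covariate exceeds five percent after matching.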
Tables A1 and A2 in the appendix show that the balance for both men and women is very good. In no case is there a percent bias above five after matching, and the variances are virtually identical. None of the alternative techniques referred to above are able to improve this balance. However, the appendix tables show only balance with respect to means, and this does not suffice with respect to continuous variables. In these cases it is important to ensure that not only the means but the whole distribution is balanced. Although all included covariates are categories operationalized as dummy variables, some of them are constructed on underlying continuous measures. Of these, only the variable measuring grades from compulsory school is based on a continuous variable with a range wide enough to require further checks. Thus, further balance checks of this factor were performed, such as quantile-quantile plots and the Kolmogorov-Smirnov test, which show that balance of the distribution was achieved as well. -Figure 2 about here -The good match is further confirmed by the graphs in Figure 2, showing monthly conviction rates before and after the observation window in the matched sample. In contrast to what was shown in Figure 1, these graphs show fairly equal crime rates before the dropout window for both men and women. For men, the level after the dropout window is substantively lower for non-dropouts throughout the observation period. This answers the first hypothesis above, which maintained both a short- and a long-term effect of dropout on crime. However, for women, the levels continue to be fairly equal after the dropout window as well, suggesting that the hypothesis must be rejected for females. Despite the good match indicated by both the balance tables in the appendix and the graphs in Figure 2, the credibility of these results is dependent upon the CIA, that is, that there are no unobservables that can bias the results.
Despite the many merits associated with using administrative register data, such as the large N, the longitudinal character, and the absence of non-response, the number of available covariates is fairly limited. This increases the vulnerability to omitted variable bias. Since the result of the PSM is insignificant for women, this concerns only the results from the analysis on men. However, simulations using a technique for evaluating the likelihood of the existence of an omitted variable, or a set of omitted variables, that could explain away the observed effect of treatment on the outcome, suggested by Ichino, Mealli, and Nannicini, indicate that the existence of an omitted variable with the properties needed to drive the observed effect to zero is unlikely. It should be noted, though, that this test accounts only for the potential impact of categorical variables. It does not take into account properties of continuous potential confounders. --- High school Dropout, Resource Attainment, and Criminal Convictions: Hazard Regression One aim of the propensity score analysis was to make dropouts and non-dropouts as equal as possible up to the point of dropout. The second step of the analysis is to estimate how post-dropout events impact the risk of criminal convictions. As already mentioned, event history analysis is used for this purpose. Since the PSM rendered significant results only for men, this part of the analysis is performed only for the male portion of the sample. By using only the matched sample, indicators of circumstances before the dropout window are already controlled for and need not be included. Figure 3 shows the estimated baseline hazard rate from a model without covariates. The expected pattern of a declining hazard rate across exposure time prevails. More interesting for the purpose of the present study are the estimates reported in Table 2.
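The piecewise constant exponential model underlying the baseline hazard in Figure 3 can be illustrated by the survival function it implies: within each exposure period the hazard rate is constant, so the cumulative hazard is a sum of rate-times-duration terms. The cutpoints and rates below are hypothetical, chosen only to mimic a declining baseline hazard.

```python
import math

def survival(t, cutpoints, hazards):
    """S(t) = exp(-cumulative hazard) when the hazard rate is constant
    within each period. `cutpoints` are the period boundaries in months
    of exposure; `hazards` holds one rate per period, so
    len(hazards) == len(cutpoints) + 1."""
    cumulative = 0.0
    start = 0.0
    for cut, rate in zip(cutpoints, hazards):
        if t <= cut:
            return math.exp(-(cumulative + rate * (t - start)))
        cumulative += rate * (cut - start)
        start = cut
    return math.exp(-(cumulative + hazards[-1] * (t - start)))

# A declining baseline hazard: 0.05 convictions per person-month in
# months 0-2 of exposure, 0.02 thereafter.
s6 = survival(6, cutpoints=[2], hazards=[0.05, 0.02])
```

In the estimated model, each period's rate corresponds to one of the exposure-time dummies, and covariates such as the dropout indicator shift all period rates proportionally.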
Model 1 confirms the result from the PSM analysis by indicating an elevated hazard by a factor of 1.54 for dropouts as compared to the matched controls. In model 2, we include a time-varying covariate measuring labor market attachment. The result indicates that being inactive is associated with an elevated hazard of approximately the same magnitude as dropouts in model 1. Note also that the inclusion of the labor market attachment factor only has limited impact on the effect of dropout. The latter corresponds to the prediction made in hypothesis 2, that labor market attachment would reduce the crime risk without having any implications for the effect of dropout, whereas hypothesis 3 can be rejected. -Figure 3 and Table 2 about here -In model 3, we include a time-varying covariate indicating whether or not a high school diploma was gained during follow-up. The effect is salient: the hazard for being convicted among those who gained a diploma during follow-up is less than half of that of those who did not. Moreover, the effect of dropout on crime is substantively reduced, which corresponds well with the prediction maintained by hypothesis 4 above. Finally, in model 4, the two time-varying covariates are included simultaneously, but that does not have any implications for the findings in the previous models. Thus, the event history analysis provides some support for all perspectives discussed above, but in the only instance in which social control theory on the one hand and the resource perspective and strain theory on the other render diverging predictions , the results seem to lend support primarily to the resource perspective and strain theory. --- Discussion The main result from the analyses in this article is that high school dropout has a crime-inducing effect for Swedish men born in the 1980s. However, for the corresponding group of women, no such effect can be discerned. 
For young women, the first-order correlation between dropout and crime is instead driven by time-invariant differences and differences in resource accumulation before dropout. For men too, a large part of the first-order correlation is driven by factors measured before dropout, but here high school dropout appears to have an independent effect on the risk of subsequent offending. However, the main contribution of this study to the field is the analysis of how post-dropout destinations and resource attainment impact the conviction risk per se and mediate the effect of dropout on criminal behavior. Although no causal claims are made in this part of the analyses, the results clearly show that finding a foothold in the labor market or in education, and whether or not a high school diploma is achieved after dropout, have clear implications for subsequent criminal conviction risks. Those finding an occupation reduce their conviction risk, and gaining a high school diploma nearly eradicates the initial risk induced by dropping out. These findings are all the result of an effort to test a number of hypotheses derived from social control theory, strain theory, and the resource perspective. Taken together, all perspectives receive some support in the analyses. All predict a crime-inducing effect of dropout, and all predict that being occupied, either in the labor market or in education, reduces the conviction risk as well. However, the preoccupation with social bonds in social control theory suggests that adding resources would not be so important unless the resources come with new social bonds. The fact that post-dropout graduation significantly reduces the conviction risk independently of labor market attachment, and that the effect of dropout is substantively reduced when this indicator is included in the regression model, speaks against social control theory.
On the other hand, it could be claimed that in order to be able to take up high school studies after a dropout, some kind of social bonds need to be present, at least with society as a whole in some vague sense. Thus, the results should not be seen as a rejection of social control theory, only as an indication that for this particular question strain theory and the resource perspective might provide better, or at least more clear-cut, explanations. Even more difficult than distinguishing between social control theory on the one hand and the strain and resource perspectives on the other is distinguishing between the latter two. With the data available for this study, this is probably not possible. The two perspectives make the same predictions. Strain theory is more explicit about the active mechanism at the individual level, namely, frustration. The resource perspective instead focuses more on the number of available options. Very much simplified, the resource perspective reduces the mechanism to a matter of likelihoods, implying that the fewer the number of possible outcomes, the higher the probability of one particular outcome. Of course, reality is more complex, not least because some options are more attractive than others and preferences for one option over another may vary depending on, for example, social background and context. Regardless of this, the resource perspective cannot on its own explain why a relative shortage of opportunities would lead to crime. Here strain theory may provide one answer: relative deprivation of means to achieve desired goals creates frustration. However, for all practical purposes, distinguishing between strain theory and the resource perspective may seem pointless. Both perspectives suggest that it is resource attainment which governs the likelihood of a criminal conviction. Thus, the distribution of resources in a society becomes a key factor in crime prevention.
As mentioned above, Sweeten, Bushway, and Paternoster conclude that attention should be focused on those processes that lead to dropout and crime rather than on the actual effect of dropout on crime. This study lends support to that view, since for both men and women most of the difference between dropouts and non-dropouts is explained by previous experiences and resource attainment, although for men high school dropout adds to the already elevated risk. However, it is worth highlighting that the use of convictions as an indicator of criminal activity means that the risk of being detected is an important factor, and if dropouts tend to spend more time in criminogenic public environments where the risk of detection is greater, then that could be part of the explanation of the elevated risk of conviction among dropouts. Thus it could be claimed that dropouts not only "reorder short-term situational inducements to crime" but also experience increased risks of being officially processed as criminals by shifting the normal routines of life. Register data provide no information that could help us investigate that issue. For that, other types of data are needed, but the fact that the highest conviction risk is found among inactive dropouts indicates that this interpretation might hold too. On the other hand, a large part of the offences committed by youths are committed in connection with the school environment, and here both the detection risk and the willingness to report crimes are high. Independent of this potentially alternative interpretation of the dropout effect is the claim that the most important implication of this study is the significance of a "second chance". The availability of an opportunity structure which provides those who fail at one point in time with a chance to repair the damage at a later point in time appears desirable.
In fact, the Swedish system for a second chance to achieve a high school diploma seems to work comparably well and is an important explanation of why Swedish high school dropouts run a much lower risk of labor market exclusion than dropouts in neighboring Norway. As mentioned above, about half of those who drop out achieve a diploma within a couple of years. Thus, judging from the results presented here, an unintended consequence of Sweden's lifelong learning strategy might be a small but significant reduction of crime among young males. The vicious circles described by cumulative disadvantage theory, in which each case of resource deficiency leads to additional negative consequences, may thus be interrupted by new resource attainment. In that sense, adding resources to a negative life course may serve as a turning point as described by Sampson and Laub. --- Limitations One implication of using register-based conviction data, in which only crimes known to the authorities and for which someone was found guilty are accounted for, is that involvement in petty crime which remained hidden from the police could potentially trigger dropout, suggesting a reversed causal order. However, across time the likelihood of discovery increases, and since conviction histories are included in the PSM analyses this problem is most likely reduced. A second limitation is that despite the apparently good balance reached, and despite the fact that the sensitivity test of the PSM analysis indicates robust results for men, we can never be completely confident that no important factors, predicting both dropout and criminality, have been omitted. In analyses of criminal behavior, omitted indicators of personality traits linked to asocial behavior are of particular importance. Such traits are also likely to predict high school dropout.
However, Caliendo, Mahlstedt, and Mitnik show that omitted indicators of personality traits in evaluations of labor market policies did have strong effects on both treatment and outcome, but still did not impact the effect of treatment on outcome when labor market histories were included in the PSM. The extent to which these findings from labor economics are generalizable to the field of criminology may of course be disputed, but they do indicate that omitted variables that we know are important need not always flaw our results. --- Notes 1 The Gini coefficient is an often-used measure of income inequality, ranging from 0 to 1, in which higher values indicate higher levels of inequality. 2 Statistics Sweden: http://www.statistikdatabasen.scb.se, accessed August 27, 2015. 3 For the years when the two birth cohorts in the analyses turned 18, 1998 and 2003, the ii Low: 1st quantile; Medium: 2nd-4th quantile; High: 5th quantile. --- Appendix
Objectives: To examine the effect of high school dropout on subsequent criminal convictions and how post-dropout resource attainment in terms of education and employment may modify such an effect. Methods: Propensity score matching (PSM) using administrative register data covering two full Swedish birth cohorts is employed to assess the effect of dropout on convictions. Event history analysis is used to examine the modifying effect of subsequent resource attainment. Results: The PSM analysis reveals an effect of dropout on convictions for men, whereas no evidence of such an effect is found for women. Returning to school after dropout significantly reduces the crime-inducing effect of dropout among men. Finding an occupation after dropout also reduces the risk of criminal conviction, but does so independently of the effect of dropout. Conclusion: Since resource attainment after the dropout event modifies the effect on criminal convictions, it is concluded that policies such as lifelong learning strategies promoting opportunities for a "second chance" may, besides their intended consequences, also have crime-preventive side effects.
INTRODUCTION Understanding childhood as a phenomenon intrinsically linked to complex social dimensions, this article discusses the situation of children and adolescents in Brazil, highlighting the intersectionality of inequalities, especially those related to race and class. The methodology adopted involved the analysis of social indicators, the examination of relevant legislation, and bibliographical research. Until the promulgation of the 1988 Constitution, children and adolescents were not recognized as holders of full rights. Despite the advances resulting from a long process of social and political mobilization, a position of inferiority persists that subjects these individuals to objectifying power relations. This study expands the analysis of inequalities by bringing race and class variables into the discussion. The theoretical framework is Afro-centered, and the reflection is based on the concept of intersectionality proposed by Kimberlé Crenshaw and enriched by the contributions of notable Brazilian intellectuals of black feminist thought: Lélia Gonzalez, Neusa Santos Souza, Nilma Lino Gomes, and Luciana de Oliveira Dias. The prevailing inequality in society is not restricted to the dichotomy between rich and poor, men and women, or whites and blacks. It manifests itself, above all, in the disparity between adults and children. This statement, although shocking, reflects the sad reality of broad social acceptance of various forms of domination over the bodies of children and adolescents, even when permeated by oppression and violence. The historical roots of this inequality can be traced to the legal definitions themselves. Under the old minors doctrine adopted by the Brazilian Minors Codes of 1927 and 1979, children and adolescents were characterized by what they were not: they were not "majors", they were not "citizens".
It was only with the promulgation of the Brazilian Federal Constitution of 1988, mainly through article 227, that the United Nations doctrine of integral protection was introduced into the legal system, promoting advances in the defense of the rights of children and youth. However, even with constitutional recognition, the child is still perceived as a second-class citizen. This perception is grounded in the disregard for children's fundamental rights, both by the State and through unequal, violent, and objectifying relationships between individuals, including within the family environment and among other social agents. The notion of childhood is not a fixed or natural concept, but rather a historical, cultural, and socially constructed product developed over time, as Monarcha argues. The author holds that the perception of childhood as a distinct phase of life, with its own characteristics and specific needs, is not innate, but shaped and developed throughout history through social and cultural processes. The conception of childhood, in this view, is influenced by the norms, values, institutions, and educational practices present in a given society at a given historical moment. In the period analyzed, between 1875 and 1938, there was a gradual transition in ideas about childhood, from a more utilitarian vision to a more humanistic perspective concerned with education and child development. By stating that childhood is a cultural and social construction, Monarcha highlights that perceptions, educational practices, ideals, and representations of childhood are shaped by the culture, social values, and history of a specific society. This implies that notions about childhood can vary over time and across cultural contexts, influenced by social, economic, political, and historical factors.
This critical understanding of childhood as a culturally situated phenomenon is crucial to analyzing and understanding the transformations in forms of education, in public policies aimed at children, and in social representations of childhood throughout history. The very etymology of the word "infância" echoes the historical characteristics of incapacity and inferiority attributed to children. Originating from the Latin infantia, derived from the verb fari, the word infans conveys the idea of "one who does not speak". In this context, a striking example is the childfree movement, growing not only in Brazil but in various parts of the world. This movement challenges conventional norms about parenting and questions the supposed obligation to have children, raising discussions about the rights and social perceptions of children and young people. The childfree movement is directly related to the understanding of childhood as a cultural and social construction. This perspective suggests that the decision not to have children, advocated by the movement, is influenced by a series of social, cultural, and individual factors that shape the perception of childhood and parenthood. Deepening this reflection, the next sections of this article address the history of foster care around the world, the childfree movement, and intersectionality in the childhood experience, offering a comprehensive view of the complex dynamics that shape the experience of children and adolescents in contemporary society. --- HISTORY OF FOSTER CARE AROUND THE WORLD The first foster care initiatives as public policy date back to the 19th century, with records in the United States and Canada. One of the earliest references dates to 1909, during the first "White House Conference on the Care of Dependent Children", held at the White House, where foster care was officially recognized as the best substitute home alternative.
In the 1940s, Great Britain began to implement foster care following a recommendation from the 1946 Curtis Commission, which advocated that "children removed from their homes should live in conditions as similar as possible to a family environment", to the detriment of collective shelters. Subsequently, in the 1950s, Israel adopted the foster family model to deal with the large number of orphaned children resulting from the Second World War. From the 1970s onwards, other European countries began to implement family care, such as France, Spain, Italy, and Portugal. During the same period, in 1979, the United Nations proclaimed the International Year of the Child. That year, the United Kingdom's National Foster Care Association organized an international conference in Oxford, where the creation of an entity to promote foster care globally was proposed. In response, the International Foster Care Organization was founded in 1981, with headquarters in the United Kingdom. Currently, foster family care is the most common form of foster care in many countries, notably Australia, Ireland, Norway, and the United Kingdom, where more than 80% of children and adolescents in foster care are cared for by foster families, as shown in the graph below. In Brazil, this percentage is just 6%, according to data from the latest SUAS Census, carried out in 2022. Graph 1 - Use of Family Care in Different Countries. Source: prepared by the author, with data from 2010-2012 from Valle and Bravo, Darcanchy, and MDS. --- CHILDFREE MOVEMENT From the 1980s onwards, with the introduction of new contraceptive methods and changes in sexual norms, feminist movements began to bring the option of non-motherhood into public debate, challenging the previously predominant view that the absence of parenthood was exclusively associated with sterility. However, those who make this choice have historically been stigmatized and often labeled as selfish, unhappy, irresponsible, immature, or abnormal.
As a way of challenging this social prejudice, the "childless-by-choice" movement emerged, rendered in Portuguese as "sem filhos por opção". Later, the term "childfree" also came into use, which can be understood both as "childless" and as "free of children" ("livre de crianças"). Childfree, therefore, began as a social movement questioning compulsory motherhood and demanding greater respect for the individual decision not to have children. As already mentioned, the idea that childhood is a cultural and social construction implies that social expectations regarding parenting and raising children are influenced by the norms, values, and cultural representations of a specific society. Thus, the childfree movement challenges this cultural construct by questioning the prevailing narrative that parenting is an essential part of adulthood and that having children is a natural and inevitable choice. Defenders of the childfree movement argue that the decision not to have children can be a legitimate and valid choice, contradicting social norms that frame motherhood/fatherhood as an obligation or an unquestionable objective of adult life. This demonstrates how the conception of childhood as a cultural and social construction directly influences individual choices about having or not having children. Furthermore, as Blackstone points out, the movement questions traditional representations of childhood and parenting, often idealized in society. By rejecting the idea that parenthood is the only way to live a full and meaningful life, supporters of the childfree movement question social norms that exclusively praise motherhood/fatherhood as a form of personal and social fulfillment. In this way, by challenging these cultural and social norms, the childfree movement connects with the critical understanding of childhood as a cultural and social construction, highlighting how perceptions about parenting and childrearing are influenced by broader contextual and cultural factors.
Over the years, however, the movement gained new contours as it was appropriated by people intolerant of the presence of children and adolescents, which generated a market demand for commercial establishments that restrict access to this portion of the population. The expression "childfree" then acquired another meaning, also designating environments frequented only by adults; not because these environments pose any type of physical, moral, or psychological risk to children, but simply because their presence is undesirable. A recent example of this trend involved a restaurant in New Jersey, in the United States, which caused controversy on social media by banning entry to children under 10 years old. The most common complaints used to justify this type of attitude are the noise, mess, and commotion caused by children. To serve a clientele that values "child-free" spaces, numerous restaurants, bars, hotels, inns, and other establishments have therefore prohibited the entry of children and adolescents. In Brazil, a simple search on accommodation websites for any tourist destination quickly identifies hotels and inns that require a minimum age for booking, or that expressly declare that they do not accept children. Some use more subtle arguments, such as "the environment is not adapted for children", while others openly admit that the objective is to offer greater peace of mind to customers. Some airlines even indicate to passengers, when booking a flight, the seats occupied by small children, so that those who feel uncomfortable with their proximity can choose seats farther away (SARCONI, 2019). Additionally, the legal provisions contained in articles 6, item II, and 39, items II and IX, of the CDC stipulate restrictions on discrimination in access to services.
When considering the vulnerability of children and of consumers, it is plausible to infer, through a principled analysis, that such provisions prohibit the practice of denying children access to certain places. In this same context, it is relevant to mention Legislative Proposal No. 2,004/2015, currently in progress, whose objective is to add to item XIV of article 39 of the CDC the prohibition of constraining or refusing service to consumers who are accompanied by a child or adolescent. Beyond legal issues, it is necessary to question the moral validity of such behaviors. How can it be considered acceptable for a space for collective use to be declared "child-free", ignoring the fact that we are dealing with individuals? Silence in the face of exclusionary and discriminatory practices, such as those supported by the childfree movement, normalizes intolerance and legitimizes the segregation not only of children and adolescents but also of their families. In short, reflection on the exclusion of children from certain spaces goes beyond legal issues, reaching the moral sphere. The childfree movement, by defending exclusionary practices, not only disregards the presence of young individuals but also validates the segregation of their families. This discussion of exclusion and discrimination paves the way to explore intersectionality further, revealing how the interconnection of different identities and systems of oppression shapes individual experiences in contemporary society. --- INTERSECTIONALITY When examining any type of human rights violation, it is essential to consider the diversity of human groups. The category of children and adolescents should not be seen as homogeneous. If discriminatory attitudes have an impact on children, adolescents, and their families, it is essential to consider the possible repercussions for individuals who belong to minority groups.
This raises reflections on the specific effects of these discriminatory attitudes on black children and adolescents. Furthermore, it is crucial to consider the ramifications for children and adolescents who simultaneously face racial and socioeconomic discrimination. Kimberlé Crenshaw developed the theoretical concept of "intersectionality" to designate the overlap of multiple forms of oppression that fall on some individuals. The researcher's initial focus was to understand and denounce how the intersection of race and gender discrimination limits black women's chances of success. She then expanded the scope of the analysis to consider other axes of discrimination, such as class, disability, and age. People who find themselves at the intersection of these axes are subjected to much more complex and intense processes of discrimination. The main idea behind intersectionality is that different forms of oppression cannot be examined in isolation, as they are interconnected and intersect, creating unique and complex experiences for individuals subject to multiple forms of discrimination. Crenshaw's intersectionality therefore highlights the importance of considering the intersections of different identities and systems of oppression for a more complete and accurate understanding of social inequalities and individual experiences in contemporary society. When approaching the topic of "race", it is important to clarify that the term is not used in the biological sense of human races, whose non-existence has long been scientifically established. As Dias argues, the term should be interpreted as "a sociocultural construct with little or no biological basis". Furthermore, Gomes adds that "race" carries a political meaning, given the social, historical, and cultural aspects that define it. Crenshaw emphasizes that men and women can experience different forms of racism depending on their gender.
The goal here is to highlight that black children also face racism in different and possibly more severe ways, given their particular perception of human interactions and understanding of the world. Additionally, these painful experiences are internalized in their bodies and minds, accompanying them throughout their lives. A phrase often repeated at events on the rights of children and adolescents captures this: "Childhood is the ground you walk on your entire life".1 It is a metaphor serving as a reminder that the experiences of this period reverberate throughout adult life, constituting not just a memory but a fundamental part of one's being. It is a serious error, therefore, to neglect or underestimate children's experiences. On the contrary, medicine and neuroscience have exhaustively demonstrated the strong impact of learning in the first years of life, as psychology has also long shown. Regarding the knowledge produced by medicine in conjunction with the social sciences, Harvard University, through its multidisciplinary Center on the Developing Child, has published relevant studies on the effect of racism on the physical and mental health of black children and adults. Some publications make important contributions to the understanding of toxic stress caused by racial discrimination. According to Shonkoff et al., in adverse situations the human body reacts by activating a stress response system, known as "fight or flight", an important defense mechanism for survival in extreme situations. When the "fight or flight" mechanism is activated, the immune system responds by producing defense cells, which triggers an inflammatory process. In the short term, this inflammation protects the body from illness and injury. However, high levels of inflammation sustained over long periods can compromise the functioning of various organs, a condition called "toxic stress".
In the context of structural racism, toxic stress arises as a consequence of daily experiences of discrimination, which accumulate and overload the bodies of black people. In children, toxic stress wears down brain development and other biological systems, increasing the risk of damage to physical and mental health throughout adulthood. As a result, obesity, diabetes, heart disease, depression, and even premature births become more likely. From the perspective of psychology and psychoanalysis, the work Tornar-se Negro (Becoming Black) by Neusa Santos Souza stands out as a milestone in understanding the psychological consequences of experiences of racism from childhood onwards. Souza describes how the "black myth", characterized by negative stereotypes such as ugliness, dirt, and irrationality, is internalized in the psychic universe from a young age, leading black individuals to aspire to an ideal of whiteness (socially positive values generally associated with white people), both to assert themselves and to distance themselves from this representation. It is important to note that, as Dias points out, the processes underlying stereotypes in the collective imagination are not always explicit or recognized. Even when reactions that imprison black people in a web of prejudice and discrimination are contained in the name of good social coexistence, they appear in actions, conscious or not, that repel non-whites, that is, those whose bodies do not bear the marks that set the limits between what is acceptable and what is unacceptable. Souza offers yet another way to understand the issue, through the Ego Ideal seen from a black perspective. For the author, the Ego Ideal is "a model from which the individual can constitute himself", created from the "idealizations of parents, substitutes and collective ideals".
The closer the individual gets to the Ego Ideal, the greater their tranquility and internal harmony; distance from this ideal, in turn, causes feelings of guilt and inferiority. The suffering of black people thus stems from the construction of a white Ego Ideal, since "they are born and survive immersed in an ideology that is imposed on them by white people as an ideal to be achieved and which endorses the struggle to achieve this model". In this incessant search for adaptation to a white society, according to the same author, black people lose the right to spontaneity, as they are always on alert, in a defensive position, having to impose themselves at all times to avoid attacks and discrimination. This state of permanent vigilance causes fatigue, as mentioned in some accounts in the book, which coincides with the brain wear described from the medical point of view by Shonkoff et al., especially in the child's brain. Given all of the above, there is no way to ignore that inequality weighs much more heavily on black children than on white children. Returning to the concept of intersectionality proposed by Crenshaw, there is here a clear intersection of two axes of discrimination: childhood and race. Advancing in the analysis of intersectionality, another axis of discrimination emerges, always closely linked to race: social class. It is possible to draw on the thinking of Lélia Gonzalez to reflect on the marginalization imposed on black and poor children in society. Gonzalez points out the need to combat the psychological conditioning that naturalizes the racial division of spaces. Critically revisiting the theory of the "natural place" proposed by Aristotle, the author denounces that precarious housing and prisons are designated as the natural places of black people.
This highlights both the profound economic inequality between blacks and whites, evident in the color of the individuals who inhabit precarious residential spaces, and the racist police violence intended to frighten and subjugate black people. As for black children and adolescents, it is suggested that shelters and the socio-educational system be included in the "natural place" category. Shelters are part of the institutional care system designed to receive children and adolescents who have been removed from their families due to rights violations, such as violence, neglect, or abandonment. The socio-educational system is aimed at adolescents in conflict with the law. Both spaces are shrouded in stigma and prejudice and are mostly occupied by black and poor children. The present study focuses especially on shelters. In Brazil, 64.3% of children and adolescents in foster care are black or mixed race, a significantly higher proportion than in the general Brazilian population, which is 56.3% black and mixed race. This highlights the intersectionality between race and childhood. It is not possible to add class to this intersection due to the scarcity of supporting data, although it is widely recognized that the protective measures determined by the rights guarantee system affect poor families more severely. According to the ECA, the predominance of black children and adolescents among those available for adoption, and the preference for white children on the part of adoptive parents, corroborate the view of Teixeira, who points to racism as an obstacle to interracial adoptions, as the majority of black children are adopted by white families. The table reflects the impact of stereotypes and stigmas linked to black people in adoption procedures. The idea that being black is synonymous with ugliness, inability to study, moral misconduct, or incapacity for intellectual activities, for example, stigmatizes black children and adolescents, concretely influencing candidates for adoption.
To exemplify intersectionality, Crenshaw cites human trafficking, a highly debated issue in the context of human rights. In the author's words: "Not all women are subject to trafficking. [...] The victims tend to be socially marginalized women, those who are unable to compete adequately in the market". Likewise, not all children are subject to living in shelters, just as not all adolescents are subject to confinement in the socio-educational system. Despite exceptions that only confirm the rule, the overwhelming majority of children and adolescents in these environments come from poor and/or black families, making the intersectionality of race, class, and childhood clear. Intersectionality thus stands out when the situation of children and adolescents in shelters and in the socio-educational system is compared to the human trafficking discussed by Crenshaw. Just as not all women are subject to trafficking, not all children are destined to live in shelters or to enter the socio-educational system; yet the majority of children and adolescents in these situations come from poor and/or black families. This demonstrates intersectionality, highlighting how racial and socioeconomic issues intertwine, disproportionately impacting certain groups of young people. This analysis can be related to the childfree movement by highlighting the structural disparities faced by these groups, which are often not considered in debates about parenting. Concerning the history of foster care around the world, the predominance of children from poor and/or black families in shelters resonates with the historical evolution of this system, indicating how care policies often reflect and perpetuate social and racial inequalities, and pointing to the need for reforms in the care system for children and adolescents.
This connection illustrates how discussions about the childfree movement intertwine with historical and contemporary issues related to foster care systems, highlighting the importance of a broader perspective on the experiences of childhood and families. --- FINAL CONSIDERATIONS The interdisciplinary analysis of the commercial practices of childfree establishments and of the growing intolerance towards the presence of children informs these final considerations. By following an approach that combines critical thinking with practical action, we advance in the search for the inclusion of children and adolescents in social and leisure spaces, reducing their segregation in environments such as shelters and the socio-educational system. The aim is also to promote the protagonism of these young people in different spheres of society, allowing the appreciation of their forms of expression and, especially, the freedom to explore the world with all the characteristics that wonderfully define childhood.
This article addresses the condition of the rights of children and adolescents in Brazil, emphasizing that, despite the progress made after the promulgation of the 1988 Constitution, these groups remain subordinated to unequal power relations. Despite the rights acquired through extensive social and political mobilization, a persistent devaluation subjects them to objectifying relationships. The study aims to expand this analysis by investigating other forms of inequality that impact children and adolescents, with special attention to variables of race and socio-economic strata. A significant challenge in this research, however, is to avoid generalizing experiences, taking into account the regional particularities and specific contexts of these groups. Simplifying these experiences may result in a superficial analysis, incapable of capturing the complexity of the intersections between race, socio-economic strata, and other identities. The objective is to achieve a deeper and more contextualized understanding of the interrelations that impact the lives and rights of these young individuals in Brazil.
Introduction 1.1 Background to the Study Sustainability remains at the forefront of the discussion of birth control, despite the ongoing arguments made by academics that starting a business with the sole goal of social profit is "madness," as the fundamental purpose of business is financial profit, or both financial and social profit. But how can we better comprehend the contradiction in the birth control market and the reasons behind contraception companies' decision to choose a particular business strategy in a time marked by sustainability? Consequently, the family, as a spending and decision-making unit, is a crucial phenomenon in marketing and consumer behavior. While people are expected to change their reproductive behavior to slow population growth, unfortunately this is not always the case. In this paper, we re-evaluate the existing literature to demonstrate that several significant and surprising research questions remain unaddressed. Although companies hold divergent views on marketing approaches for achieving successful and broad uptake of contraceptive products, most studies over time have focused on mass marketing of birth control. Researchers have answered questions about the economic and demographic conditions that motivate couples to manage their fertility. Edwards and Stewart indicated, through their gender impact analysis of contraceptive use, that a gender lens is critical to producing a gender-equitable family planning strategy; a gender lens refers to greater consideration of gender equality issues in daily operations. Bandura opined that a country's foreign capital investment in the long term depends on how evenly it applies its female- and male-directed family planning programs.
Will marketing mix components in cause-driven organisations remain on the periphery, or will social entrepreneurs and social marketers accept the idea of a mixed marketing strategy to address their social impact agendas? It is therefore important to understand the marketing-related characteristics and components that motivate social change behaviours. In this study, we accordingly evaluate the outcome of the masculinization of birth control marketing and use. --- Statement of Problem The best strategy to slow down population growth in the Global South remains unknown. One question is whether a masculinized family planning marketing approach will induce consumption in men. The imbalance in the consumption of effective birth control is cause for grave concern. Moreover, in the wake of the global climate and economic crises, Nigeria's attractiveness to investors over the long run pivots on how evenly it applies its feminized and masculinized family planning programs. --- Objectives of the study This study identifies which gender marketing methods reinforce birth control consumption in men. It determines whether men would consume birth control marketed specifically for men, or endorsed by a male celebrity, and examines whether men will consume birth control more than women if the male gender is predominant in the marketing of family planning. --- Research Questions i. What gender marketing reinforces birth control consumption in men? ii. Should men consume birth control? iii. Will men consume birth control marketed for men specifically? iv. Will men consume birth control for men if a male celebrity endorses it? v. Will men consume birth control more than women if the male gender is predominant in the marketing of family planning?
--- Research Hypothesis H0: Masculinization of birth control marketing induces consumption in men. --- Significance of the study Little research has examined how vasectomy and other more effective birth control techniques for men can be reinforced in Nigeria to increase family planning. --- Scope of the Study This study investigates gender marketing and family planning consumption in Nigeria. It focuses mainly on the specific marketing mix elements that are prominent in literature published in reputable online scholarly journals from the 1900s to 2021. The World Bank Group projects slow poverty reduction in Western Sub-Saharan Africa between 2021 and 2030, but a focus on health and education remains crucial. --- 2. Review of Literature --- Preamble According to Tifferet and Herstein, gender is the most common basis of segmentation used by marketers. --- Theoretical and Conceptual Framework Given the multiple definitions of "gender marketing," a unified underpinning theory is difficult to achieve. Gender theory proposes looking at masculinity and femininity as sets of mutually created characteristics shaping the lives of men and women, marking a significant shift from the days when 'gender' was synonymous with women. Labelling theory provides an opportunity for identity-based labelling or "identity appeals," where invoking an identity can motivate individuals to conform to characteristics of that identity. Gender marketing in this study is shaped by the attributes of the customer and of the birth control methods. Therefore, we posit that masculinization of birth control marketing induces consumption in men. --- Empirical Literature According to Open PR, gender marketing refers to a marketing strategy that examines the behaviour of a target sex and uses tactics to appeal to that sex. Despite global forums devoted to marginalized women, their rights are routinely disregarded.
63% of rural women in low-paid positions in families are unaware of the advantages of effective birth control. Financial experts have argued about the causes of this trend, which range from marginal changes in behaviour to a shifting balance of the costs and benefits of having various numbers of children in low-income economies. Researchers more often use the term "conception prevention" to refer to the family planning technique which combines modern and traditional approaches to regulate or end childbearing. Typical contraception approaches in Sub-Saharan Africa include, but are not limited to, male condoms, the rhythm method, female sterilisation, vasectomy, IUDs, implants, injections, pills, and withdrawal. According to Tone, during the 1920s economic crisis, when consumers' purchasing power was low, a McCall's magazine Lysol advertisement portrayed the typical fears of women about losing their youth through childbearing and heightened their feelings of dread. This led to an increase in the use of contraceptives among women who saw it. Although contraception has always been important for societies, birth control has consistently been portrayed as a necessity for women by manufacturers who understood the impact of advertisement and missions in marketing. Despite strict legal requirements and the economic depression, the conception prevention sector was profitable. Owing to the widespread hardship of the 1930s, U.S. entrepreneurs advised families to limit the number of children they bore. The success of contraception is reflected in annual sales, which exceeded $250 million by 1938. Fortune magazine described contraceptive drugs as possibly the most successful new industry of the decade. Since then, the contraception medicine market has continued to grow. 
However, a third worldview arose during the 1950s, as the population of the Global South was snowballing, holding that individuals in low-income nations should decrease their fertility; yet birth control methods remained largely dependent on women, without much success. In Sub-Saharan Africa, 76% of women report that modern methods have satisfied their need for family planning, leaving 24% of the population unaccounted for. The question of how to effectively market contraception methods remains unanswered despite the proliferation of contraception drugs. In comparison to men, women continue to face barriers to employment, including business access, employment choice, working conditions, wage equality, and balancing work and family responsibilities. Green and Cunningham, forerunners in gender role orientation research, examined this variable in relation to advertising and consumer behaviour to determine the consistency of its relative impact. They investigated its effects on female career orientations and discovered that the wife's verbally stated position had a higher impact on behaviour than a traditionally female job orientation. They further examined differences in family dynamics among various types of families and identified higher-income groups and younger couples as more sensitive to traditionally female work distinctions among spouses. They also discovered that husbands expressed fewer independent choices when their wives had unconventional attitudes towards traditionally female employment. According to Schultz, as the cost of conception prevention decreases, people are expected to have fewer undesired births. This method was previously disregarded but may now be worth considering due to changes in desired levels of fertility. 
According to reports, there is a decline in the use of sterilization, IUDs, and conventional methods; Sub-Saharan Africa is the only region where injectables are the predominant method, with a prevalence of 9.6% among women of fertile age, and it also shows the fastest decline in the use of contraception medication methods. Male sterilization has not been documented, even though female sterilization is the most often used method of contraception globally, according to UNDESA. However, the organization also noted that only 2% of Nigerian women employed this method. This could be a reminder of the events of 1933, when merchants distributed and marketed goods for "female cleanliness" in large quantities to women, preying on their eagerness to consume contraceptive methods. By the 1940s, contraception drug commercialization had become a significant industry, changing the way commerce was conducted. Fertility control existed before the economic slump, but manufacturers had the option of feminizing the provision of a contraception drug technology and popularizing it. This was determined by the maker's choice rather than the buyer's requirements. Post-colonial Ghana used a similar strategy. The National Family Planning Program's [NFPP] goal was to lower the birth rate and bring population growth to a level suitable for public development. Despite progress, there were only 46 clinics in May 1970, 63 in December 1970, and 80 in March 1971. By April 1971, 39,858 people had visited family planning centres across Ghana, indicating that the number expanded significantly over the course of nearly a year. By November 1972, 139 organizations had signed up with the NFPP. Nonetheless, more women than men travelled to Korle Bu Medical Clinic, the primary location for family planning, even though it was located outside Accra. Despite limited transportation, the women turned out in large numbers. 
Family planning programs were officially launched in China in 1971, after the State Council of China approved the report which laid out the family planning strategy. These programs are most notable for the fourth Five-Year Plan, which ran from 1971 to 1975. Soon, offices for family planning were established at all levels of government, and work to prevent conception was carried out similarly to the NFPP. The target population of the Chinese program was China's largest ethnic group, the Han, which represents 93% of the populace. Women again were the target of these family planning efforts. The prevalent technique for contraception at that time was the IUD. Other techniques used included tubal ligation, male sterilization, and early abortions under certain conditions. In March 1978, China adopted a stricter strategy known as the One Child Policy. Then, in 1982, the family planning strategy became flexible and permitted families that fulfilled certain conditions to have a subsequent child. The critical interaction between population, reproductive health, gender roles, and climate change cannot be ignored. These studies reveal that traditional gender roles are still observed in non-traditional couples, establishing that a gender lens is critical to produce gender-equal policy for birth control consumption through critical interchange between policymakers and researchers. As elaborated in the findings of Steinfield et al., they uncover a misalignment between social marketing attempts and gender injustices. --- Research Conceptual Model --- Gap in Literature Questions about how we can initiate utilization of vasectomy and other underutilized conception prevention methods for men, over contraception medication for women, remain unanswered. While studies have openly laid the foundation on gender outlooks in consumerism, there is still room for progress. 
--- Methodology --- Preamble The hypothesis formulated in section 1.5, H0: masculinization of birth control marketing induces consumption in men, will be addressed using the research questions in section 1.4. --- Research Design This study employs a descriptive research design using both qualitative and quantitative approaches. --- Population and Sample size The total consumer strength of family planning methods is approximately 922 million. Contrasting approaches to determining sample size exist. The respondents were a readily accessible cluster of birth control consumers observed by the researchers, doctoral candidates within the Department of Business Administration at the University of Lagos. Bryman and Bell assert that there are various methods for calculating sample size. For this study, we used a purposive sampling approach with a sample size of 36. --- Data collection and analysis A disguised questionnaire was used: Section A consisted of closed-ended and multiple-choice response structures for gathering demographic data from the respondents, while Section B sought data about the possible masculinization of birth control marketing and consumption using a Likert scale of 1 to 5. Section A was analysed using graphs and Section B using ANOVA in the IBM Statistical Package for the Social Sciences (SPSS). --- Pilot Study and Assessment of Reliability and Validity The pilot study involved drafting an initial questionnaire of twelve multiple-choice items on a Likert scale of 1 to 5, distributed to a North American population within the researchers' reach. This enabled the researchers to retain items that were relevant to the study, providing a greater sense of confidence. The instrument was validated through content analysis of relevant variables on gender marketing and consumption of birth control discussed in section 1.7, with a reliability alpha value of 0.655 deemed acceptable. 
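The reliability figure reported above is an alpha coefficient (Cronbach's alpha). As a minimal, hedged sketch of how such a value is computed from Likert-scale item scores — the 6×4 response matrix below is invented for illustration and is not the study's data:

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Rows are respondents, columns are questionnaire items (1-5 Likert scores).
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_var = items.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical pilot responses (invented, for illustration only)
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(scores), 3))  # 0.934
```

A value of 0.655, as reported, indicates modest internal consistency; 0.7 is the usual rule-of-thumb threshold.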
--- Model Specification The basic mathematical expression of the model, as discussed in section 2.4, is as follows: MODEL 1: Y = b0 + b1M + b2Fi + Ɛi, where Y = should men consume birth control, b0 = constant/intercept, b1M = male, b2Fi = female, and Ɛi = error term. MODEL 2: Y = b0MCBCMKTS + b1MCBCMEMC + b2MCBCMTW + Ɛi, where Y = SMCBCi, b0MCBCMKTS = constant/intercept, b1MCBCMEMC = men will consume birth control for men if a male celebrity endorses it, b2MCBCMTW = men will consume birth control more than women if the male gender is predominant in the marketing of family planning, and Ɛi = error term. Y is the outcome variable, the X terms are the predictor variables, and Ɛ is the random error term. The b0, b1, b2, …, bn are the regression coefficients that represent, on average, the amount the dependent variable Y changes when the corresponding independent variable changes by one unit. The standardized versions of the beta coefficients are the beta weights, and the ratio of the beta coefficients is the ratio of the relative predictive power of the independent variables. According to Field, dummy coding applies when using Analysis of Variance to analyse data, to reduce errors; therefore, a post-hoc analysis is not needed once dummy coding is used in the test of the hypothesis. Regression coefficients were used to evaluate the strength of the relationship between the independent variables and the dependent variable. --- 4. --- Data Presentation, Analysis, and Interpretations --- Response rate A total of 36 instruments were returned; after the data editing process, only 30 were properly completed and found valid for use in the data analysis, giving an effective response rate of 83.33%. --- Demographic data This section presents the demographic analysis of respondents by gender, age, birth control consumption, and gender marketing approaches, using pie charts, histograms, and a simple error bar chart. 
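To make the dummy-coding idea concrete, here is a hedged sketch of a Model 1-style fit by ordinary least squares on invented data (the scores and group assignments are assumptions for illustration, not the study's responses). Including both gender dummies alongside an intercept makes the design matrix singular (the dummy-variable trap), so one dummy is dropped and acts as the reference level:

```python
import numpy as np

# Invented data: 8 hypothetical Likert agreement scores, with a 0/1
# male dummy; the female dummy is dropped (reference category).
male = np.array([1, 1, 1, 0, 0, 0, 1, 0], dtype=float)
y = np.array([5, 4, 5, 2, 3, 2, 4, 3], dtype=float)

# Design matrix: intercept column plus the male dummy
X = np.column_stack([np.ones_like(y), male])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# b[0] is the mean score of the reference (female) group;
# b[0] + b[1] is the mean score of the male group.
print(f"b0 = {b[0]:.2f}, b1 = {b[1]:.2f}")  # b0 = 2.50, b1 = 2.00
```

With dummy coding, the regression coefficients reduce to group-mean differences, which is why the text notes that a separate post-hoc analysis adds little once dummies are in the model.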
Figure 1 reveals that there were 14 male respondents and 16 female respondents, signifying that most respondents were female. Figure 2 reveals that there were 4 respondents at least 20 but less than 30 years old, 12 respondents at least 30 but less than 40 years old, 10 respondents at least 40 but less than 50 years old, and 4 respondents 50 years old and above. This implies that most consumers of birth control are in the 30-39 age group. Figure 3 reveals that 16 respondents used condoms; for 14 of them the method was not recommended by a physician, while for 2 it was. Only three respondents used IUDs, although all three did so on a physician's recommendation. Additionally, none reported using vasectomy as a form of birth control. Eight respondents consumed two or more combinations of condoms, IUDs, and pills, with 1 revealing that a physician did not recommend it, while the remaining 7 revealed that their consumption of two or more combinations of birth control was instructed by a physician. These data reveal that most consumers of birth control are compelled by gendered marketing approaches. Figure 4 is a simple error bar chart that reveals a linear relationship between the female and male genders at the 95% confidence level. It also reveals a higher frequency of respondents who strongly agreed that using the male gender alone to target male birth control consumption is more effective; this result was also more statistically reliable, with an error value around the mean that is smaller relative to the error values for the female and x genders together. 
--- Analysis of data according to research objectives --- Analysis of data according to research objective I TABLE 1: Analysis of Variance. Consequently, from Model 1, Y = b0 + b1M + b2Fi + Ɛi, the results reveal the following: ✓ There was a significant positive linear trend and effect of the male gender used to target men on levels of birth control consumption, F = 7.7, p = 0.002, ω = 0.56. ✓ Planned contrasts revealed that having a male in the marketing mix significantly increased consumption in men compared to the control, t, p = 0.001, r = 0.32, with D1 = 4.789, p = 0.038. ✓ There was a significant linear trend of birth control endorsed by a male celebrity as a determinant of levels of birth control consumption in men, F = 5.415, p = 0.028. ✓ There was no significant linear trend of men consuming birth control more than women when the male gender is predominant in marketing as a determinant of levels of birth control consumption in men, F = 2.396, p = 0.134. Therefore, hypothesis H0 was accepted at the 5% level of significance: masculinization of birth control marketing induces consumption in men. --- Analysis of data according to research objectives II-V --- Discussion of Findings The demographic data reveal that vasectomy, a method of birth control for men, is neither a family planning option nor consumed by any of the male respondents, which further corroborates the UNDESA report that in Nigeria only women undergo sterilization. Also, the men who participated in this research strongly believe that birth control should be consumed by men and that such consumption should be reinforced by utilizing male marketers, as can be seen in the Analysis of Variance for each predictor, with male birth control endorsed by a male celebrity showing a stronger significance, F = 5.415, p = 0.028. Other academics have also proposed that decisions should be seen through the prism of a business sustainability model. 
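The F ratios above come from one-way analysis of variance. As a minimal sketch of how such an F statistic is computed, with invented group data (the three marketing conditions and their Likert scores are assumptions, not the study's observations):

```python
import numpy as np

# One-way ANOVA by hand: F = (between-group MS) / (within-group MS).
groups = [
    np.array([4.0, 5.0, 5.0, 4.0]),  # hypothetical: male-targeted marketing
    np.array([3.0, 3.0, 4.0, 3.0]),  # hypothetical: male celebrity endorsement
    np.array([2.0, 3.0, 2.0, 3.0]),  # hypothetical: control
]
k = len(groups)                       # number of groups
n = sum(len(g) for g in groups)       # total observations
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(F, 2))  # 13.36
```

A large F indicates that variation between the marketing conditions outweighs variation within them, which is what the significant trends reported above reflect.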
Despite the unending debates in the birth control market over optimizing a business strategy in the wake of the green and economic crises, and in a time characterized by sustainability promoters, our results suggest that a business sustainability model may produce better results. --- Summary of Findings, Conclusions and Recommendations The challenges associated with depopulation, the conceptualization of gendered marketing strategies, and gender theory are major issues highlighted by this study. The literature reviewed reveals the importance of regulating population towards a greener economy. While gendered approaches to the marketing of female birth control products were effective 88 years ago, the same cannot be said for today. Hence, masculinization of birth control marketing is feasible if marketers increase male pedagogies to reinforce consumption of birth control in men. Therefore, we recommend masculinization targeted at the combined marketing mix elements to induce consumption of birth control in males. However, a stronger significant relationship exists between male celebrity endorsement and birth control consumption in men. In view of this, until a prevailing interchange happens between birth control manufacturers and other stakeholders, social marketing attempts targeted at gender injustices of reproductive health rights are likely to remain futile. This finding corroborates Arsel et al., who posit sex roles to be manipulative and ethically questionable. This study thus makes an invaluable contribution to knowledge towards a balanced gendered marketing approach for a sustainable population and planet. --- Suggestions for Further Studies Future studies should examine social marketing efforts by above-the-line leadership in the birth control industry. This study is not exempt from limitations, which include the fragmentation of gender marketing, operational costs, and a lack of cultural diversity. 
We conducted this research using personal funds.
Questions remain about how we can initiate use of vasectomy and other underutilized birth control methods for men over contraception medication for women. This paper's analysis of gender marketing in Nigeria, utilizing the indicators continually featured in the extant literature, uncovers the result of the masculinization of contraception medication marketing and use. People are projected to change their reproductive activities to decelerate population increase, but the converse is also true. How can we better comprehend the contradiction in the birth control market and the reasons behind birth control manufacturers' decision to choose a particular business strategy in a time marked by sustainability champions? Although companies hold divergent views on marketing approaches for successful and broad utilization of contraception medication, over the long run most have zeroed in on mass promotion. Researchers have identified the financial and demographic conditions that motivate couples to manage their fertility, but this study examines gender marketing and family planning methods. We hypothesize that masculinization of birth control marketing induces consumption in men. We adopted a Likert scale of 1 (strongly disagree) to 5 (strongly agree) for the data collection on birth control consumption. Data analysis relied on the use of graphs and ANOVA. This study substantiates that masculinization of the marketing mix elements (combined stakeholder engagement, mass marketing, celebrity endorsement, and communications) will induce birth control consumption in males. This discovery is an invaluable contribution to knowledge in both theory and practice. Ms. Afobunor A. N. is an advocate for the rights of women and girls. She has spent the last 15 years working with grassroots organizations to bridge the gender divide by providing expert advice and solutions on advancing the Sustainable Development Goals. 
She currently serves on the Strategy and Operations team at SDSN Youth in New York, New York. Her experience has provided her the opportunity to witness the dynamics in organizational structures and the gender-power imbalance in households, which underscores her commitment to the interdisciplinary field of social marketing and her special interest in family consumption patterns. Future studies will analyze social marketing efforts by above-the-line leadership in the birth control industry.
Introduction The term food insecurity, which refers to all aspects of food and nutrition insufficiency, insecurity, and hunger, describes an inadequate quality and/or quantity of food at the household, adult, and/or child levels, and is a critical problem in the United States [1][2][3][4][5][6][7][8][9][10]. Prior work establishes the relationship between food insecurity and poor physical and mental health [8], and identifies food insecurity as a significant predictor of chronic illness and adverse physical and mental health outcomes in adults [11]. Food insecurity among children is associated with diminished nutritional status, poor academic performance, reduced health-related quality of life, and developmental problems [12][13][14][15][16]. Outcomes related to poor nutrition affect a substantial number of Latino children, who are more likely than African American or white children to have mental and oral health problems and high rates of overweight and obesity [17]. Furthermore, the prevention and management of nutrition-related health problems, such as obesity, diabetes, and cardiovascular disease, are complicated by food insecurity [18][19][20][21][22][23][24]. Prior to 2006, household food security status was described as "food secure", "food insecure without hunger", and "food insecure with hunger" [23,25]. In 2006, "food insecure without hunger" was changed to "low food security" and "food insecure with hunger" became "very low food security" [23]. Nationwide, the prevalence of food insecurity in 2009 was 14.7% of households, 16.6% of individuals living in food insecure households, 21.3% of households with children, and 11.8% of households with food insecure children [23]. Most food insecure households occasionally experienced diminished food supplies; however, one-fourth of food insecure households and one-third of households with very low food security experienced frequent or chronic food insecurity, such as running out of food every month [23]. 
National surveys, such as the 1999 Current Population Survey (CPS), the 2009 CPS, and NHANES III, have consistently found that Hispanic/Latino households were at the greatest risk for food insecurity [23,[26][27][28][29]. Subgroup analyses from the USDA supplement to the CPS revealed that rates of food insecurity were higher in Hispanic households than in African American and non-Hispanic white households [23]. For Hispanic households with food insecure children or with very low food security among children, the prevalence in 2009 was 18.7% and 2.5%, respectively. This rate was two percentage points greater than that of African American households with food insecure children and 2.8 times larger than the 7.6% of non-Hispanic white households with food insecure children [23]. Since 1996, the two-year national average prevalence of food insecurity and very low food security increased from 11.3% in 1996-1998 to 13.5% in 2007-2009; at the same time, the prevalence in Texas, which was significantly greater than the national average, increased from 15.2% to 17.4% [23]. According to 2009 estimates, persons of Hispanic origin comprised 15.8% of the U.S. total population and 36.9% of the population in Texas, which has the second largest percentage and number of Hispanic residents [30]. In Texas, the largest county-level percentage of persons of Hispanic or Latino origin is along the Texas border with Mexico, where the percentage exceeds 86% in each county [31]. The Texas border region is characterized by a Hispanic majority population and an above-average number of Mexican-born immigrants. In this setting, residents are not as likely to have to choose between American and Mexican values, and most residents are Spanish-speakers [32,33]. The Texas-Mexico border region is one of the fastest growing areas of the United States, and estimates predict a doubling of the predominately Spanish-speaking population by 2025 [34]. 
Demands for low-cost housing along the Texas-Mexico border have resulted in the development of more than 2,294 colonias, a Spanish term that describes unincorporated settlements, neighborhoods, and communities, many lacking basic infrastructure such as paved roads, running water, or sewage [35,36]. In 2008, the population inhabiting Texas colonias was approximately 400,000 [36]. The burden of obesity and nutrition-related health conditions disproportionately affects marginalized populations that face increased vulnerability to food insecurity and poor nutritional health [37]. One such marginalized population is Mexican-origin families who reside in impoverished colonias along the Texas-Mexico border [20]. Rates of nutrition-related health conditions, such as obesity and diabetes, along the border are among the highest in the United States [38]. These families are considered one of the most disadvantaged, hard-to-reach minority groups in the United States [18]. In 2006, there were more than 1,786 colonias identified in the six most populous border counties in Texas, with a population of more than 350,000 [39]. Most of Texas' colonias are located in the South Texas border counties of Cameron and Hidalgo, with about 60% of Texas' colonias located in Hidalgo County [40], which suffers from persistent poverty, defined by at least 20% of the county falling below the poverty line for the period following the 1970 U.S. Census [41]. There is little published data that provides insights regarding the extent and severity of food insecurity among the hard-to-reach Mexican-origin families who reside in the growing colonias along the Texas border with Mexico [42]. One study of migrant and seasonal farmworkers found 82% with some experience of food insecurity during the previous 12 months [43]. 
Considering that culture, economics, and elements of the environment may increase the risk for food insecurity and adverse health outcomes, the purpose of this study was to examine data from 610 face-to-face interviews conducted by promotoras in forty-four colonias near the towns of Progreso and La Feria in Hidalgo and Cameron counties along the South Texas border with Mexico to: 1) describe household characteristics and levels of household food insecurity, and 2) examine the relation between household and community characteristics and food insecurity. --- Methods --- Data Collection Food security was measured using eleven items from the 12-item Radimer/Cornell measures of hunger and food insecurity that have been used in other Mexican-American populations to assess food anxiety and the qualitative and quantitative components of food insecurity at the household, adult, and child levels [9,48,49]. Table 1 shows the four household, four adult, and three child items about which each participant was asked whether this was not true, sometimes true, or often true. Binary variables were constructed as often/sometimes true vs. never true. Four mutually exclusive categories of food security were constructed to represent the four-stage process as household food supplies are exhausted [49]: food secure households consisted of participants who answered not true to at least two items from each level; household food insecure individuals answered sometimes/often true to two or more household items and fewer than two items from the adult and child levels; adult food insecure individuals answered sometimes/often true to at least two adult items and fewer than two child items; and child food insecure individuals responded sometimes/often true to at least two child items. Eating behaviors were measured by self-reported daily servings of fruit, vegetables, sugar-sweetened beverages, beans, and lean protein, and weekly frequency of fast-food meals and of a regular breakfast meal. 
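The four-stage classification described above can be sketched as a simple decision rule, checked from the most severe level downward. This is a hedged reconstruction from the text, not the authors' code; each boolean marks an item answered "sometimes/often true":

```python
# Four mutually exclusive Radimer/Cornell categories, most severe first.
# Arguments are lists of booleans: True where the participant answered
# "sometimes/often true" to that item (4 household, 4 adult, 3 child items).
def classify_food_security(household, adult, child):
    if sum(child) >= 2:            # >= 2 affirmative child items
        return "child food insecure"
    if sum(adult) >= 2:            # >= 2 adult items, < 2 child items
        return "adult food insecure"
    if sum(household) >= 2:        # >= 2 household items only
        return "household food insecure"
    return "food secure"

# Example: two affirmative household items, none at the adult/child level
print(classify_food_security([True, True, False, False],
                             [False] * 4,
                             [False] * 3))  # household food insecure
```

Checking the child level first makes the categories mutually exclusive, mirroring the exhaustion sequence the authors describe as household supplies run out.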
Two questions from a validated, self-reported two-item screener were combined to describe fruit and vegetable intake [50,51]. Validated measures from prior community-based work in North Carolina assessed consumption of sugar-sweetened beverages, frequency of fast food meals, and frequency of eating a regular breakfast meal [52][53][54]. Alternative food sources included the purchase of prepared food from neighbors or friends, mobile food vendors, and pulgas (flea markets). --- Analysis Release 11 of Stata Statistical Software was used for all statistical analyses; p < 0.05 was considered statistically significant. Descriptive statistics were estimated for food security items, as well as for demographic characteristics, health characteristics, access and mobility, quality of food environment, eating behaviors, and alternative food sources by food security status. A nonparametric χ2 test for trend across ordered groups of food security status was performed. A conservative Bonferroni correction was used to reduce the Type I error rate for each individual test from 0.05 to 0.002 and 0.001 [55]. Since the four-category dependent variable was not ordinal, multinomial logit was used [56]. A multinomial logit regression model was estimated to determine the association of independent variables with food security status. Variables for demographic characteristics, food store access, perceived quality of food environment, alternative food sources, and eating behaviors were simultaneously entered; a backward elimination strategy, which sequentially removed statistically non-significant variables, was used to obtain the "best" set of independent variables [55]. Adjusted coefficients, SEs, and odds ratios are reported. --- Results Table 1 presents frequencies of affirmative responses to each of the household, adult, and child food security items. At the household level, 81% experienced food anxiety, 65% limited quality, and limited quantity. 
Limited quality at the adult level was reported by 61.8% and limited quantity by more than 58%. At the child level, 59% reported limited quality and at least 51% experienced limited quantity. In data not shown in Table 1, 59.5% of households experienced all four items; 49.7% answered affirmatively to all four adult items. Among the households with children, 48.8% responded positively to all three child items. More than three-quarters of participants experienced food insecurity at the level of household, adult, or child; 22.1% of households were classified food secure. The most severe level, child food insecurity, was reported by almost half of all households and by 61.8% of households with children. Table 2 presents demographic and health characteristics by food security status. Most of the participants described themselves as Mexican rather than Mexican-American; 67.7% were born in Mexico; 60% were married; 79.3% had at least one child under the age of 18 residing with them; and most were unemployed. Almost 15% of households with children were single parent. Almost 97% of the 455 households who reported income had household incomes at or below 100% FPL; 85.7% were at or below 75% FPL. A positive trend across increasing levels of food insecurity was observed. For the 55% who received SNAP benefits, their monthly benefits lasted fewer days with increasing levels of food insecurity. Thirteen of the trends remained significant after adjusting for multiple comparisons with a revised level of statistical significance. Table 3 describes participants' access and mobility, quality of food environment, eating behaviors, and alternative food sources by food security status. On average, participants travelled 10 miles one-way to purchase most of their groceries; only 17 participants shopped for groceries in their town; 75% of main food stores were a supermarket, supercenter, or mass merchandiser; and almost 63% of participants purchased groceries at least once a week. 
The use of supermarkets as the main food store declined with increasing levels of food insecurity. Significant differences by food-insecurity level were found for less favorable perceptions of community food resources and food stores utilized, greater weekly consumption of beans and of a regular breakfast meal, and less reliance on neighbors, friends, or pulgas for prepared foods. Eleven of the variables remained statistically significant after Bonferroni adjustment. Overall, 24.9% of participants purchased prepared foods from a neighbor or friend, 29.7% from a mobile food vendor, and 30.7% from a pulga. The main items purchased from mobile food vendors were ice cream, raspas (shaved ice), and elotes (roasted corn on the cob or in a cup). Participants purchased the following food items from the pulgas: fresh fruit and vegetables, aguas frescas (sugar-sweetened fruit waters), raspas, elotes, tacos, Mexican soft drinks, tamales, and menudo. Table 4 shows the characteristics that increased the odds for household, adult, and child food insecurity. Demographic characteristics were independently associated with increasing levels of adult and child food insecurity; namely, being born in Mexico, increasing household composition, household income, and employment. Interestingly, households that did not report an income were more likely to be child food insecure. Participation in federal food assistance programs was associated with lower severity of food insecurity. SNAP participants were more likely to report household food insecurity; households where children participated in the NSLP were more likely to be food secure compared with food insecure. 
Greater distance to the food store where most groceries were purchased increased the odds for adult food insecurity; items that described perceived quality of the community food environment were associated with household or child food insecurity levels. Interestingly, the odds for adult or child food insecurity were lower for participants who utilized alternative food sources. Households that purchased prepared foods from a neighbor or friend were more likely to be food secure. --- Discussion Healthful nutrition, which depends on a sufficient household food supply, is vital to health in adults and to academic performance and development in children [8, 11-16, 18-20]. Considering the importance of access to an adequate quality and quantity of food among disadvantaged populations who may be more at risk for nutrition-related health problems [33, 57-59], few studies have focused on the extent of food resource vulnerability among the growing Mexican-origin population [16, 49, 60-63]. Of these, only one examined the extent and correlates of increasing levels of severity of food insecurity among the rapidly growing Mexican-origin population along the Texas border with Mexico [43]. Although there are slight differences between the Radimer/Cornell measure of food insecurity and the Current Population Survey, the emerging picture of food insecurity among hard-to-reach Mexican-origin families suggests a greater prevalence of adult and child food insecurity than the previously reported national, regional, and local rates among Hispanic adults and children [16, 23, 26, 49, 60-64]. This study extends our understanding of levels of food insecurity: household, adult, and child [9].
This is the first study, to our knowledge, that examines the relationship between nine components of household and community characteristics and levels of food security status among colonia residents. These components include demographic characteristics, health characteristics, access and mobility, food cost, federal and community food and nutrition assistance programs, perceived quality of the food environment, food security, eating behaviors, and alternative food sources. Our analyses revealed that national data on the prevalence of food insecurity among Hispanic households underestimate the prevalence and severity of food insecurity among Mexican-origin families in border communities. Findings should be considered in the context of the high rates of obesity and diabetes prevalent in these areas [33]. The 2009 report on household food insecurity among Hispanic households, which included Hispanics regardless of country of origin, identified 26.9% of households with low or very low food security [23]. Our analyses revealed that 78% of 610 colonia households experience some level of food insecurity. Specifically, data indicated that 12.1% of respondents were household food insecure, 16.7% were adult food insecure, and 49% of households were at the most severe level of household food insecurity; that is, households with children who were food insecure. The overall prevalence of food insecurity observed in this study is three times that of the most recent national study [23]. Further, the very high level of severe food insecurity observed is much greater than the 27% observed in a sample of 211 Mexican American families in California [49], the 3.4% observed among 559 low-income Latino women [62], the 14.8% among 256 low-income Latino families [61], or the 1.6% among Hispanic mothers in northern California [16].
None of these reports appears to describe the more vulnerable Mexican-origin adults or children who reside in border areas. There is one study of 100 migrant and seasonal farm worker families in border areas of Texas and New Mexico that found a similarly high prevalence of food insecurity, where 82% experienced some degree of food insecurity and 49% food insecurity with hunger [43]. Although the prevalence of more severe food insecurity in our sample is unacceptably large, it may understate the "true" prevalence among colonia households. Abarca describes a group of working-class Mexican and Mexican-American women residing along the Texas-Mexico border as cooks-as-artists, who demonstrated creativity and culinary expertise in their everyday food practices [65]. For instance, one woman, who did not have a sink and used a one-burner portable stove for cooking, was able to overcome limitations to create delicious meals for her family. It may be that respondents in the present study do not perceive "a lack" of food or other resources because they see themselves as creative agents who are able to provide sufficient food for their families. In addition to 49% of this study's sample living in households with child food insecurity, the findings on socioeconomic disadvantage were disturbing. Unemployment rates were quite high; 60% of male spouses or partners were unemployed, and only 14% worked part-time. Almost 15% of households with children were single-parent. Household income was extremely low; 64% reported a household income at or below 75% FPL and only 2.3% reported an income greater than 100% FPL.
Food assistance program participation was low, given the very low household incomes; 45% did not receive SNAP benefits, and 46% of households with children did not participate in the National School Lunch Program (NSLP). These rates are somewhat higher for SNAP participation and lower for NSLP participation than noted in an urban sample of 320 Latinos, where 30% reported household food insufficiency, 30% were Food Stamp participants, and 90% of children received school meals [26]. The participation of border colonia households in SNAP and NSLP was lower than noted in the most recent national report on 2009 estimates of household food insecurity in the United States, which combined all households regardless of race/ethnicity and found that 30.8% of all food-insecure households with an income less than 130% FPL did not receive SNAP benefits in the previous 12 months, and 27.7% of food-insecure households with an income less than 185% FPL and school-age children in the household did not receive a free or reduced-price lunch in the previous 30 days [23]. The use of alternative food sources, such as the sale of prepared foods by neighbors or friends, mobile food vendors, and pulgas, especially in areas without ready access to retail food stores or reliable transportation, is underreported [20,66]. This is also apparently the first study to identify the use of alternative food sources by colonia residents along the border. Overall, 24.9% of the sample purchased prepared foods from a neighbor or friend; 17% among the more food insecure and 45.9% among food secure households. Almost 30% purchased food items from mobile food vendors that marketed in their neighborhood. More than 30% purchased food from pulgas, which are known to sell a wide variety of inexpensive fresh fruit and vegetables and prepared foods [66]. These findings suggest that further research should be conducted on the relationship between acquisition-oriented coping strategies and food security.
Several additional findings from the adjusted multinomial logit regression model warrant mention. First, the results suggest that several household and community characteristics increased the odds for adult and child food insecurity; namely, being Mexico-born, an increasing number of adults and children in the household, income ≤ 100% FPL, and an unemployed spouse or partner. Others have linked food insecurity among Hispanics with low household incomes [49,63], minor children in the home and larger households [43], and households occupied by Mexican-born immigrants [43]. In a study of 630 Latino and Asian legal immigrants in urban areas of California, Texas, and Illinois, researchers found the following characteristics associated with being food insecure with hunger: household income below 100% FPL, receipt of food stamps, and being Latino [67]. Although other studies found that the perception of diminished variety and quality of foods was associated with lower fruit and vegetable consumption, this is apparently the first study to link these perceptions to food insecurity [47,68]. This suggests that food insecure households in the border colonias face challenges from being located in disadvantaged neighborhoods, where there is limited or non-existent ready access to large supermarkets [20], and the stores that are accessible market a less desirable variety and quality of food items, especially fruit and vegetables [47]. Second, participation in two of the largest federal food and nutrition assistance programs was associated with a lower burden of food insecurity. Although households that participated in SNAP were more likely to be household food insecure than food secure, there was no association with adult or child levels of food insecurity. This suggests that greater participation in SNAP may provide enough resources to reduce the severity of food insecurity in this population [69].
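The adjusted multinomial logit model contrasts each food-insecurity level against a base (food secure) category. A minimal numpy sketch of how such a model turns coefficients into category probabilities and odds multipliers is shown below; all coefficient values and predictor names are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

# Hypothetical coefficients for three insecurity levels vs. a food-secure base.
# Columns: [intercept, born_in_mexico, household_size]; one row per level.
coefs = np.array([
    [-0.5, 0.4, 0.10],   # household food insecure
    [-1.0, 0.7, 0.15],   # adult food insecure
    [-1.2, 0.9, 0.20],   # child food insecure
])

def level_probabilities(x):
    """Softmax over the base category (utility fixed at 0) and each level."""
    utilities = np.concatenate(([0.0], coefs @ x))
    expu = np.exp(utilities)
    return expu / expu.sum()

x = np.array([1.0, 1.0, 4.0])        # intercept term, Mexico-born, 4 persons
probs = level_probabilities(x)        # P(secure), P(household), P(adult), P(child)
odds_ratios = np.exp(coefs[:, 1])     # odds multiplier per level for Mexico-born
```

Exponentiating a coefficient gives the relative odds of that insecurity level versus food secure for a one-unit change in the predictor, which is how "increased the odds" statements like those above are derived.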
However, colonia households with child food insecurity were more likely to exhaust SNAP benefits earlier than other households. With regard to NSLP, participation increased the odds of a household being food secure, compared with adult or child food insecure. As others have observed, there is an apparently large gap between nutritional need and utilized nutrition services [67]. Third, households with adult or child food insecurity were less likely than food secure households to use alternative food sources, such as purchasing prepared food from a neighbor or friend, or from a pulga, perhaps because these households were financially constrained and preferred reciprocity-based food acquisition systems over purchasing food from others. Although we do not have the data to support this, plausible explanations for limited reliance on these alternative food resources could include neighborhood variation in the availability of pulgas and of friends and neighbors who sell food from their homes. A family's social capital, their relationships with friends, neighbors, and others within a community, may also impact their ability to access community resources, including small food businesses run from neighborhood homes [52,70]. Few colonias are located within walking distance of a pulga, so the availability of transportation may also play a role. Finally, we do not fully understand why some households with incomes ≤ 75% FPL are food secure, given their very low incomes and lower participation in SNAP. The creative capacity and expertise observed in the culinary practices of Mexican and Mexican American women, documented by Abarca and by Dean and colleagues, suggest that economically-constrained women may be able to successfully mitigate challenging circumstances in order to provide sufficient food for their families [65,71]. Another explanation may be reliance on meals from neighbors, friends, or family.
In addition, the literature highlights how women use food to create and strengthen relationships with other women [65,72]. Extremely low income women may exchange food with female neighbors, friends, or family members as a means to maintain food security in their household. Unfortunately, there were no survey items assessing this type of interaction within social networks. It is worth noting that little literature exists elucidating the strategies that low-income Mexican-origin women use in food choices, much less to overcome hardships associated with food insecurity [73]. There are several major strengths to this study, especially in relation to other studies of food insecurity in Hispanic/Latino populations. This study is one of a few that collected data from a largely Mexican-origin region of the United States [19,32,33,43,45], specifically from two difficult-to-access border areas that demonstrated high nutritional need. The first strength is the development of the Household and Community Food Resource Assessment survey and data collection approaches in collaboration with team promotoras to consider the culture, language, trust, and cognitive demands of Mexican-origin residents who live in border colonias. The second is the delivery of the survey by trained promotoras, who are indigenous community health workers, native Spanish-speakers, knowledgeable of the communities, and trusted by colonia residents. As a result, the participant recruitment and survey completion rate was an extremely high 98.5%, which was greater than previously reported in urban border areas [32]. The study has several limitations. Data were not available on acculturation or immigration experiences as identified by others [33], or on documentation status. Documentation status was not asked of participants due to its sensitivity. Another limitation is the lack of income data on 25% of participants.
Additionally, the cross-sectional nature of the data prevents an examination of causality in the severity of household food insecurity. Confirmation of these findings in other border colonia areas is necessary. Finally, the use of the Radimer/Cornell measure of food insecurity limits our ability to accurately compare prevalence with national data. --- Conclusions Despite these limitations, these findings are both timely and indispensable. Currently in the United States, the Mexican-origin population is rapidly expanding; record numbers of individuals and families are experiencing food insecurity nationwide; and for those living in rural or underserved areas such as the colonias, food insecurity is an ongoing reality for many adults and children. The rates of households with adult and child food insecurity in this border area are alarming and among the highest reported. Unfortunately, a large percentage of households that lack quality and quantity of food include children, which is especially troubling given the importance of good nutrition for optimal growth, function, and health [67]. Young children of Mexican immigrant families have a greater risk for hunger and household food insecurity [64], and are less likely to meet dietary recommendations than other children [49,61]. In addition, the population in the colonias is burdened by high rates of diet-related chronic diseases and faces a disparate gap between nutritional need and nutritional resources. Considered together, the results suggest that a large proportion of families living in the colonias are facing adult and child food insecurity and are potentially at risk for adverse health outcomes across the life course. This paper therefore provides compelling evidence for enhanced research efforts that will lead to a better understanding of coping strategies and the use of federal and community food and nutrition assistance programs for reducing hardship associated with food insecurity.
Clearly, systematic and sustained action on federal, state, and community levels is needed to reduce household, adult, and child food insecurity that integrates cultural tailoring of interventions and programs to address food and management skills, multi-sector partnerships and networks, expansion of food and nutrition assistance programs, and enhanced research efforts [10,74]. --- Authors' contributions JRS developed the original idea for the community assessment. JRS and WRD worked on the development of the instrument and the protocol for collection of data. JRS wrote the first draft of the paper. JRS, CMJ, and WRD read and approved the final manuscript. --- Competing interests The authors declare that they have no competing interests.
Background: Food insecurity is a critical problem in the United States and throughout the world. There is little published data that provides insights regarding the extent and severity of food insecurity among the hard-to-reach Mexican-origin families who reside in the growing colonias along the Texas border with Mexico. Considering that culture, economics, and elements of the environment may increase the risk for food insecurity and adverse health outcomes, the purpose of this study was to examine the relation between household and community characteristics and food insecurity. Methods: The study used data from the 2009 Colonia Household and Community Food Resource Assessment (C-HCFRA). The data included 610 face-to-face interviews conducted in Spanish by promotoras (indigenous community health workers) in forty-four randomly-identified colonias near the towns of Progreso and La Feria in Hidalgo and Cameron counties along the Texas border with Mexico. C-HCFRA included demographic characteristics, health characteristics, food access and mobility, food cost, federal and community food and nutrition assistance programs, perceived quality of the food environment, food security, eating behaviors, and alternative food sources. Results: 78% of participants experienced food insecurity at the level of household, adult, or child. The most severe level, child food insecurity, was reported by 49% of all households and 61.8% of households with children. Increasing levels of food insecurity were associated with being born in Mexico, increasing household composition, decreasing household income, and unemployment. Participation in federal food assistance programs was associated with reduced severity of food insecurity. Greater distance to the main food store and perceptions of the quality of the community food environment increased the odds for food insecurity.
Conclusions: The Mexican-origin population is rapidly expanding; record numbers of individuals and families are experiencing food insecurity; and for those living in rural or underserved areas such as the colonias, the worst forms of food insecurity are an ongoing reality. The rates of households with adult and child food insecurity in this border area are alarming and among the highest reported. Clearly, systematic and sustained action on federal, state, and community levels is needed to reduce household, adult, and child food insecurity that integrates cultural tailoring of interventions and programs to address food and management skills, multi-sector partnerships and networks, expansion of food and nutrition assistance programs, and enhanced research efforts.
INTRODUCTION In the United States, only 30% of the 1.2 million people living with HIV have successfully navigated the HIV care continuum and are virally suppressed. Racial disparities in HIV diagnosis and achieving viral suppression are well documented, including differential access to care and treatment. African Americans, compared to other racial/ethnic groups, are disproportionately infected with HIV, less likely to have access to care and adhere to antiretroviral therapy (ART), and experience higher rates of morbidity and mortality. In Baltimore City, Maryland, the majority of the population is African American, and the vast majority of PLHIV, including new HIV cases, are among African Americans. HIV is one of many challenges facing Baltimore, a city with a poverty rate that exceeds the national average, with 24% of residents living below the poverty line. Baltimore City also has high rates of heroin, cocaine, and other substance abuse; by one estimate, 1 in 8 adults needs substance use treatment. Baltimore also has an incarceration rate that is three times the national average, and PLHIV with an incarceration history experience gaps in ART and health care access both within prisons and when they reintegrate into communities. It is within this context of high levels of poverty, incarceration, and substance use disorders that PLHIV in Baltimore and their informal caregivers are living with and managing HIV and other often highly stigmatized chronic conditions. --- Informal caregiving and HIV outcomes among PLHIV Research has found that among PLHIV, African Americans and persons with a drug use history are less likely to have informal care, and that low-income African Americans provide more labor-intensive forms of informal care compared to other racial/ethnic groups. Informal caregiving is often defined as emotional or instrumental assistance by unpaid partners, family members, or friends to someone with a serious health condition.
Caregivers are thought to affect recipients' medical care and treatment adherence through direct instrumental assistance, or indirectly by either promoting routines and norms that facilitate adherence or by buffering the effects of depression, substance use, stress, or other impediments to adherence. A multisite study conducted in the United States of PLHIV who use drugs found that the odds of achieving or maintaining viral suppression were 4.6 times greater among those with informal care compared to those without. Caregiving is often stressful, particularly in highly stigmatized and late-stage illnesses, and social and emotional support has been found to be critical to caregivers' well-being and care provision. At the same time, prior research has found that African Americans and caregivers of PLHIV tend to have low levels of social support compared to other racial/ethnic groups or caregiving populations, potentially due to the challenges posed by stigma, drug use, poverty, and competing caregiving demands. This lack of social support may also be the cause or consequence of caregiver burden: the negative psychological, behavioral, and physiological effects of caregiving on the daily lives of caregivers. The demands of caregiving may interfere with caregivers' family, work, and leisure activities, which then leads to strained social relationships, social isolation, depression, and/or other adverse consequences. Women, particularly low-income African American women, are more likely than men to be caregivers and to have multiple caregiving responsibilities, including care for children and other family members. For caregivers, raising youth involved in the criminal justice system may compound caregiving burden, especially in communities with a high prevalence of drug use, incarceration, and family disruption.
Furthermore, a greater number of current drug users in caregivers' support networks has been found to be negatively associated with viral suppression among PLHIV care recipients in Baltimore. Moreover, social support networks have the potential to mitigate caregiver burden and stress, and to reduce the caregiver depression that is often related to the cessation of informal caregiving. On the other hand, caregiver networks may also be a source of stress or obligation that affects the quality and type of HIV care the caregiver is able to provide. It is generally accepted that support networks may directly impact the health and wellbeing of someone living with a chronic illness. The studies described above, combined with the literature on social support, highlight the potential importance that caregivers' networks may have for the provision of care by informal caregivers and, ultimately, the HIV-related health outcomes of PLHIV. --- Synthesis and purpose Caregiving research has historically examined how the individual characteristics of caregivers relate to their own health and/or the health of the care recipient. Few studies have explored how the social and contextual environment, including the support available to caregivers, may be associated with the ART outcomes of drug-using PLHIV. Within the context of Baltimore, high rates of poverty, substance use, and incarceration likely affect the ART outcomes of PLHIV not only directly, but also indirectly through their effects on their caregivers. The purpose of this analysis is to identify the support network characteristics of caregivers of PLHIV that are associated with the care recipient having an undetectable viral load. Study findings from a predominately African American urban population contribute unique information on the social environment of HIV caregiving in a population vulnerable to virologic failure.
Such data can inform future health promotion programs to support both caregivers' and care recipients' health outcomes. --- METHODS --- Procedure Data are from the baseline survey of the BEACON study, conducted from 2008-2012. This study examined social environmental factors associated with health outcomes and well-being among disadvantaged PLHIV and their informal caregivers. Care recipients were recruited from an HIV specialty clinic associated with Johns Hopkins Hospital, as well as through community sampling and street outreach. Inclusion criteria for care recipients were: age of 18 years or older; documented HIV sero-positive status; current or former injection drug use; current use of an ART regimen, defined as use in the prior 30 days; and willingness to invite at least one main supporter to participate in the study. Main supporters, or caregivers, were eligible to participate if they had provided care recipients with emotional, instrumental, and/or health-related assistance in the prior six months. The caregiver exclusion criterion was providing care to the recipient in a professional capacity. As described in an earlier publication, up to three caregivers were recruited for each care recipient. If a care recipient had more than three caregivers, priority on whom to include was determined based on a ranking of the support the caregivers provided. In the case of ties in the ranking of these caregivers, priority was given, based on our previous findings of care recipient preferences and sources of intensive caregiving, to the enrollment of main partners, female kin, male kin, and friends. For this study, analyses were restricted to recipients' main caregiver. All participants completed baseline and follow-up assessments. Serum viral load, CD4 count data, and toxicology tests were collected for all care recipients. The BEACON study received ethical approval from the Johns Hopkins Bloomberg School of Public Health's Institutional Review Board.
--- Measures Outcome-The outcome variable was the care recipient's undetectable plasma HIV viral load, using a cut-off of less than 50 copies per mL. --- Independent Variables Caregiver characteristics: Socio-demographic variables included age, sex, race, education, and role relationship to the care recipient. The HIV status of the caregiver was also assessed. Given that all care recipients were HIV positive, the HIV status of the caregiver indicates whether the caregiver/care-recipient dyads were sero-concordant or discordant. Depressive symptoms were measured by the Center for Epidemiologic Studies Depression Scale (CES-D), using an established cut point of 16 or more to identify individuals at risk for clinical depression. Physical functioning was a summed score of six items, including being able to walk one block, engaging in moderate to vigorous exercise, and needing help with personal care. A higher score indicated worse physical functioning. Caregivers' current drug use was measured by binary items indicating use of at least one of the following illicit drugs in the past 6 months: stimulants, opiates, tranquilizers, heroin, cocaine, hallucinogens. Characteristics of the caregiver's support network: Social network data were collected based on the Arizona Social Support Interview Schedule, eliciting first names and the first initial of the last name of persons perceived available to provide emotional, instrumental, financial, informational, and socialization support. Next, characteristics of each person named were elicited, including their sex, age, drug use status, role relation, and frequency of contact. Caregivers' network characteristics included the numbers of support network members perceived as emotionally, instrumentally, and financially supportive, whether the support network members had used illicit drugs in the past year, and the number of support network members who were non-kin.
Non-kin members could include any of the following: friends, neighbors, godparents, godchildren, someone at work, the friends of relatives, roommates, and Narcotics Anonymous and Alcoholics Anonymous program sponsors. Partners were not included as non-kin members. Emotional support was measured as the number of support network members the caregiver could talk to about things that are personal or private. Frequency of contact with support network members was recorded on an ordinal scale that ranged from 1 = less than once a year to 6 = every day. Responses were averaged across all the support network members and treated as a continuous variable in regression analysis. Caregivers were also asked how many youth they currently care for, and whether the youth had any involvement in the criminal justice system. Data Analysis-Care recipients were first matched with a single caregiver whom the recipient identified as the most supportive caregiver in his/her network. Data from this care recipient/caregiver dyad were then analyzed. Frequencies and means for the caregiver characteristics were generated for the independent variables and compared to the outcome of undetectable HIV RNA among the care recipients. Factors marginally statistically significant at the bivariate level were included in a multiple logistic regression model. Two covariates were also included in the model. One covariate assessed the caregiver's own current drug use, as drug use among network members may be spurious and conflated with the caregiver's own drug-using status. The other covariate was the caregiver's physical functioning limitations, a factor found to be significantly related to care recipients' having an undetectable viral load in a previous analysis. All variables were simultaneously entered into the regression model and then removed one at a time with a stepwise deletion approach until the final model was reached, using a significance level of p < .05.
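The stepwise deletion approach described above can be sketched as a generic backward-elimination loop. In the sketch below, `fit_pvalues` is a hypothetical stand-in for refitting the logistic regression and returning each variable's p-value, and the variable names and p-values are illustrative only, not the study's data.

```python
# Hedged sketch of backward stepwise deletion at p < .05.
# `fit_pvalues` stands in for refitting the logistic model after each removal.
def backward_eliminate(variables, fit_pvalues, alpha=0.05):
    """Drop the least significant variable until all remaining meet alpha."""
    current = list(variables)
    while current:
        pvals = fit_pvalues(current)
        worst = max(current, key=lambda v: pvals[v])
        if pvals[worst] < alpha:      # every remaining variable is significant
            return current
        current.remove(worst)         # remove least significant, then refit
    return current

# Toy p-value table standing in for repeated model fits (hypothetical names):
fixed_p = {"age": 0.02, "sex": 0.01, "network_drug_use": 0.03, "education": 0.40}
final = backward_eliminate(fixed_p, lambda vs: {v: fixed_p[v] for v in vs})
print(final)  # ['age', 'sex', 'network_drug_use']
```

A real implementation would refit the model at each pass, and forced-in covariates (such as the two described above) would be exempted from removal.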
Finally, additional analyses tested for statistically significant interactions between variables of interest. All analyses were conducted using SPSS Version 20.0. --- RESULTS --- Caregiver characteristics Out of the full sample of care recipients, 258 had a caregiver enrolled in the study. Due to missing biomarker data from recipients, 242 recipient-caregiver dyads were retained. Among these dyads, 43.0% of the caregivers were kin, 36.8% were the partner/spouse of the care recipient, and 19.0% were friends. The average age of caregivers was 47 years. On average, the caregiver had known the care recipient for 19 years. Caregivers were predominately African American, had a minimum of a high school education, and were female. Forty-two percent of caregivers were HIV sero-positive themselves. --- Factors associated with an undetectable viral load among care recipients At the bivariate level, marginally and/or statistically significant factors associated with an undetectable viral load in the care recipient included the caregiver's older age, being male, and having greater physical functioning impairment. Among caregivers' support network characteristics, marginally and/or statistically significant associations were found between the care recipient's undetectable viral load and the following: more frequent network member contact, greater numbers of support network members who are not kin, having fewer support network members using illicit drugs, and having fewer youth who are involved in the criminal justice system. In adjusted analyses, which retained only significant main effects and the two covariates (caregiver's physical functioning limitations and current drug use), four independent variables were significantly associated with care recipients' undetectable viral load. Results indicated that for each additional child a caregiver cares for who is in the criminal justice system, the care recipient had a 68% decreased odds of having an undetectable viral load.
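A "68% decreased odds" corresponds to an odds ratio of 0.32 for that predictor. A minimal sketch of this conversion follows; the odds ratios shown are derived from the percentage changes reported in this section, not taken from a model output.

```python
# Converting an odds ratio into the "% change in odds" phrasing used in the text.
def percent_change_in_odds(odds_ratio):
    """Return the rounded percent change in odds implied by an odds ratio."""
    return round((odds_ratio - 1.0) * 100)

print(percent_change_in_odds(0.32))  # -68 : a 68% decrease in the odds
print(percent_change_in_odds(0.50))  # -50
print(percent_change_in_odds(0.75))  # -25
```

An odds ratio above 1 would yield a positive percent change, i.e., increased odds of an undetectable viral load.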
Care recipients also had a 50% decreased odds of having an undetectable viral load if the caregiver was female, and a 25% reduction in the odds for every active drug user in the caregiver's support network. A greater number of non-kin in the caregiver's support network, however, increased the odds of the care recipient's having an undetectable viral load. The Nagelkerke R-square value was .123, indicating that 12.3% of the variance in viral load was explained by the independent variables in the final model. --- DISCUSSION This study is among the first to examine the role of caregivers' individual and support network characteristics in achieving an undetectable viral load among HIV care recipients. In adjusted analyses, care recipients' odds of having an undetectable viral load were significantly and negatively associated with their caregiver providing care for youth involved in the criminal justice system, being female, and having more current drug users in their network. These data also showed that a greater number of non-kin members in the support networks of caregivers was positively associated with achieving an undetectable viral load among care recipients. These findings highlight the importance of support network research and provide insights into potential interventions for achieving undetectable viral loads among drug-using PLHIV, especially social support interventions among female caregivers raising high-risk youth. In communities most impacted by HIV/AIDS and high incarceration rates, caring for youth involved in the criminal justice system is not uncommon and is likely highly stressful to caregivers. Pearlin and colleagues identified two sources of stress that impact mental health among caregivers of PLHIV: primary stressors related directly to caring for someone living with HIV and secondary stressors that encompass the effects of caregiving on other social roles. They posit that primary stressors could generate secondary stressors.
The impact of these stresses on the health outcomes of caregivers may vary by context and by background conditions and resources, such as the caregiver's relationship to the recipient and experiences of HIV-related stigma. In the present study, caring for troubled youth, in addition to the HIV sero-positive care recipient, may constitute a potential secondary stressor and is an understudied aspect of the caregiving literature. Caregiving in this context may involve navigating a complex health care system while simultaneously navigating the labyrinth of the judicial system with minimal pertinent resources. It is plausible that caring for youth involved in the criminal justice system impacts care recipients' viral load through several different mechanisms, including potential effects on caregivers' mental health, their access to and availability of resources important to their care provision, and/or the quality of the caregiving relationship. Youth involved in the criminal justice system may also be indicative of the degree of disadvantage or drug involvement that exists in the shared network of the caregiver and care recipient. These findings correspond with results from the Gender, Race and Clinical Experience trial, which found lower adherence and virologic response among PLHIV who were female and the primary caregivers of children. HIV care and support services do not typically address competing childcare, or any caregiving needs and responsibilities of PLHIV or their caregivers, though these may impede health behaviors and contribute to poor health outcomes. Our prior findings showed that ART adherence among HIV-positive women was lower if they had an HIV-positive partner. This finding suggests that competing caregiving demands in sero-concordant relationships affect women's own adherence behaviors. The present study expands on this evidence to suggest that competing caregiving demands experienced by caregivers may impact their care recipients' HIV outcomes.
These findings support the development and testing of family approaches to HIV care and treatment that directly address issues of children and family caregiving responsibilities. Such interventions may be informed by network analyses that identify the multiple roles and kinds of support caregivers have, in order to address the varied needs that influence their caregiving abilities, e.g. provision of childcare during clinic appointments, training peer supporters to augment existing support networks, and developing innovative strategies to navigate the challenges of drug use, incarceration and poverty. Results also indicated that the majority of caregivers were female and that having a female caregiver was associated with care recipients' detectable viral load. In our prior research, we found that female compared to male caregivers of drug-using PLHIV had smaller support networks, which was associated with greater perceived caregiver role overload and depression. Caregiver role overload and depression are both major components of caregiver burden that have been found in other populations to predict the cessation of caregiving. Given the gendered nature of caregiving, it is plausible that females are more likely than males to provide care to severely ill PLHIV. In that case, female caregivers' smaller support networks may be a consequence of their social isolation with more intensive caregiving demands and burden. It is also plausible that female caregivers' limited support networks may in turn impede their caregiving ability. This study also found that having a greater number of non-kin in the caregivers' support networks was positively associated with achieving an undetectable viral load among the HIV-positive care recipients.
This finding is consistent with a previous study that found that caregiver contact with friends was significantly associated with the caregiver's reports of greater perceived emotional support and instrumental assistance. These results may be explained by lower affiliative HIV stigma or drug use-related stigma among caregivers' supportive friends who are aware of their caregiving role. In a prior study in this population, disclosure of HIV caregiving was found to be associated with caregivers' lower level of depressive symptoms. Also, the findings may be due to differing norms regarding social exchange by role relation. Social support from friends is often less available, and comes at greater cost, than support from family members, and friends are generally expected to provide more immediate and similar forms of reciprocated support. As such, caregivers who have more non-kin support may have more resources to develop and maintain relationships of choice. Further research should explore how such findings pertaining to non-kin network members vary with the illness stage of the care recipient and with caregiver burden. In the present study, a greater level of active drug use in the caregiver's support network, independent of the caregiver's own drug use, was associated with reduced odds of having an undetectable viral load among currently or formerly drug-using care recipients. In addition, over a third of the caregivers reported current drug use, and substance use among the support network members of these caregivers is likely far higher than in non-drug-using HIV and caregiving populations. Persons actively using drugs often violate norms of reciprocity with their social network members and, in a disadvantaged community, often place inordinate demands on the limited resources of their network members.
Having more drug users in one's support network may impede the emotional and instrumental assistance provided to recipients by diverting caregivers' support resources and exacerbating stress and caregiver burden. To support the HIV health outcomes of PLHIV, interventions to facilitate entry into addiction treatment should extend to their caregivers and support network members. Ultimately, addressing caregiver burden is important to avoid caregiver burnout and early cessation of care, thereby increasing the likelihood of achieving an undetectable viral load among this vulnerable PLHIV care recipient population. --- Limitations The present study is subject to several limitations. First, data were cross-sectional, which prevents the interpretation of causal relationships among the variables of interest. Therefore, for example, we cannot determine from this study whether having a female caregiver leads to greater odds of a detectable viral load, or whether poor virologic outcomes lead to having a female primary caregiver. Second, the generalizability of the study is limited because all BEACON care recipients were enrolled in HIV primary care and on ART. Therefore, they may not represent most African American PLHIV, or most individuals with a history of injection drug use. Additionally, analyses only included care recipients who had enrolled a main supporter in the study. Therefore, the study design and findings are only applicable to PLHIV who report access to informal caregiving, though prior research suggests that the vast majority of former or current drug-using PLHIV report availability of informal HIV care. --- Conclusions Our findings offer novel insight into informal caregiving and provide critical evidence for developing an informal caregiver intervention for the unpaid friends, families and partners of former or current drug-using PLHIV.
Study findings highlight how PLHIV with a history of injection drug use and their informal caregivers share a unique socio-contextual environment that impacts both their informal care and HIV health outcomes. These findings suggest that, in this context, HIV treatment interventions should engage family and support networks to address the needs of caregivers who care for PLHIV and youth involved in the legal/justice system. Network analysis can inform intervention design to promote informal caregiving relationships and address the social support needs of caregivers, especially among women caring for PLHIV and high-risk youth. ---
Informal care receipt is associated with health outcomes among people living with HIV (PLHIV). Less is known about how caregivers' own social support may affect their care recipients' health. We examined associations between network characteristics of informal caregivers and HIV viral suppression among former or current drug-using care recipients. We analyzed data from 258 caregiver-recipient dyads from the BEACON study, in which 89% of caregivers were African American and 59% were female. In adjusted logistic regression analysis, care recipients had lower odds of being virally suppressed if their caregiver was female, was caring for youth involved in the criminal justice system, and had network members who used illicit drugs. A greater number of non-kin in caregivers' support networks was positively associated with viral suppression among care recipients. The findings reveal contextual factors affecting ART outcomes and the need for interventions to support caregivers, especially HIV-caregiving women with high-risk youth.
Introduction Decades have passed since the onset of the HIV epidemic. About 33.3 million people are living with HIV, of whom 22.5 million are in sub-Saharan Africa. In addition, of the 2.5 million children in the world who are estimated to be living with HIV, 2.3 million are in sub-Saharan Africa. Southern Africa, the most affected region, includes a number of middle- and lower-middle-income nations known as the hyperendemic countries [1]. In South Africa alone, there are about 5.7 million people living with HIV/AIDS. About 90% of new infections occur in developing countries, where in some cases the disease has already reduced life expectancy by over 10 years. HIV has become generalized in many nations, and in other poor regions of the world the disease could be about to spread without control [2]. While the causality between poverty and HIV is not clear, it is certain that HIV pushes households and individuals into poverty. The aim of this study was to evaluate anemia epidemiology in patients with AIDS and its relationship with socioeconomic levels and job situation. --- Methods Patients who visited the Infectious Diseases University Hospital in Buenos Aires, Argentina, were included in the study. The project was approved by the hospital's Ethical Committee and conducted under regulations governing research in human beings. Before commencement of this study, an informed consent was obtained from each patient. Patients were divided into two groups, i.e. AIDS with anemia and AIDS without anemia. Anemia was defined as haemoglobin <10 g/dL. Data of the two groups were compared in the different categories of each studied variable, and the difference of proportions test was applied. Inferential statistics based on calculation of probabilities were used, and the normal distribution was also used because large samples had to be dealt with; arithmetic mean and SD were used as parameters. Significance was established at P=0.02 for every observation unit and research variable.
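The difference-of-proportions test used to compare the anemia and non-anemia groups on each categorical variable can be sketched as a standard two-proportion z-test with a pooled standard error. This is a minimal sketch, not the authors' analysis code, and the counts below are hypothetical illustrations rather than the study's data:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions,
    using the pooled standard error under the null hypothesis."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided normal tail
    return z, p_value

# Hypothetical counts: anemia among lower- vs. higher-income patients
z, p = two_proportion_ztest(150, 228, 78, 194)
```

With a difference this large the test rejects the null at any conventional level; for proportions that are exactly equal, z is 0 and the p-value is 1.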
--- Results Among 422 patients with AIDS, 228 suffered from anemia with mean Hb at , 85 of them were men and 143 were women. Mean age of all patients was years, years for men and years for women. The relationship between anemia and socioeconomic, employment status and educational levels is shown in Table 1. --- Discussion Four hundred and forty-four patients with AIDS were studied, with anemia detected in 228 patients. When patients with and without anemia were compared, where primary schooling was the highest educational level reached, housing was poor, individuals were jobless or unable to work, income was below 30 d/m, intake of meat was below 2 days a week and caloric intake was below 800 c/d, the statistical difference was highly significant. Our results show that these 2 different socioeconomic-cultural populations have only one thing in common, i.e. an HIV diagnosis. The high prevalence of anemia in AIDS patients suggests that poverty increases the risk of suffering from this hematological complication [3]. The strength of the observed association, obtained as a result of the greater significance of P found in several studied variables, allows us to infer that the association is real. Other aspects should be taken into account, too: 1) the biological plausibility of the observed association is a meaningful relationship; 2) regarding the biological gradient of the observed association, there is a relationship between the extremes; and 3) the consistency of our research findings with what is known about the natural history of the disease. In fact, poverty produces not only direct effects but also non-financial costs, such as hunger, undernourishment and malnutrition, which give rise to feelings of deprivation and frustration, apart from pain, suffering and reduced quality of life [4].
Impoverishment, life conditions on the borders of survival and social inequality have a high negative impact on the health of AIDS patients and increase their vulnerability to anemia [5]. Because those acquiring the disease are mainly adults in their most productive period of life, AIDS produces, economically speaking, very severe consequences not only for the affected individual but also for the other family members, namely children, a fact that could further exacerbate poverty and inequity [6]. No doubt, the human cost of the epidemic is huge. Although arguments in favor of state intervention to face the HIV/AIDS epidemic are undeniable, nonobservance of the laws and political interests poses extraordinary hurdles to the application of AIDS policies [7]. No doubt, the advantages of state intervention are bigger because the potential seriousness of the problem has not become patently obvious [8]. Governments are bound to support and subsidize prevention campaigns in order to reduce risk, especially among those who are more exposed. However, in Argentina, health policy makers have shown themselves to be somewhat reluctant to get involved. Thus, in these recent 10 years of democracy, the different ministers have faced the struggle for scant public resources and, probably thinking that HIV/AIDS spreads mainly through sexual intercourse and i.v. drug abuse, they might have concluded that the infection is neither a priority nor a threat to public health [9]. Political leaders, authorities, economists, members of society and HIV-affected people must join efforts to face the AIDS epidemic [10]. Finally, fairness and compassion for the poor justify prevention and relief of the epidemic by the state. Policies aiming at reducing poverty will lower economic obstacles hindering the access of the poor to basic services of prevention and treatment of HIV [11]. Moreover, fostering development and reducing the speed of virus propagation may offer many additional benefits.
While these benefits are sometimes difficult to quantify, they will be complemented by policies that have direct effects on the costs and benefits implied in the adoption or not of risky conducts [8]. What strategies should governments apply to obtain, despite limited resources, the best possible results in AIDS prevention? According to the principles of public sector economy, governments must guarantee the funding of measures that are essential to stop HIV propagation. For lack of adequate incentives, private companies or individuals are not prepared to fund or apply these measures themselves [3]. These interventions would include, for example, reducing negative attitudes leading to risky conducts in people. Reducing negative attitudes strongly justifies subsidizing measures aimed at promoting safer behavior among individuals who are more exposed to contracting and spreading HIV. The consequences of HIV/AIDS affect those who contract the disease in the first place [12]. Drugs administered to alleviate symptoms and treat opportunistic intercurrent disease may palliate suffering and prolong the productive life of infected individuals, sometimes at a low cost [9]. Measures to protect the poor from the effects of an HIV/AIDS epidemic should never be neglected. This could be enough to put a considerable curb on the progress of a potential epidemic [13]. In order to maximize the impact of the slender available resources, public prevention programs must avoid the greatest possible number of HIV secondary infections or complications per currency unit invested by the state [14]. Moreover, priority must be given to state intervention increasing private and public health care systems [15]. AIDS prevention programs usually offer important social benefits, apart from preventing the epidemic; these benefits, and the synergy between interventions and policies, must be taken into consideration when assessing costs and benefits [16].
Some interventions, such as reproductive health services and HIV/AIDS education at schools, offer wide social benefits apart from those directly connected with AIDS prevention and are low cost, and that is why they usually represent a good investment for politicians [17]. The criteria used to direct the programs toward certain beneficiaries are not perfect and, as a consequence, providing help to those who are more exposed and vulnerable, such as the poorest sectors of society, may be difficult [18]. In many cases, it is possible to increase the cost-effectiveness of official AIDS prevention programs through the joint efforts of NGOs and severely affected people in the designing and execution of these programs [2]. The economic impact of HIV/AIDS presents huge challenges. While the causality between poverty and HIV is not clear, it is certain that HIV pushes households and individuals into poverty. While many illnesses create catastrophic expenditures which can result in poverty, HIV/AIDS is among the worst because its victims are ill for a prolonged period of time before they die, and many are the chief household income earners [19]. In our country, the HIV/AIDS epidemic is evidently focused on the poorest sector, where the infection has been generalized and is producing the worst consequences. In fact, the effectiveness of health programs in ensuring the access of the poor to the best possible health care in AIDS has never been assessed. It is clear that the government must ensure that the costs and effects of state interventions are closely watched so that the cost-efficiency ratio may be increased [4]. There are many NGOs that may contribute or are contributing to these efforts. Among them are enterprises, nonprofit entities, private charitable organizations, foundations and "common interest groups" made up of HIV/AIDS-affected people. These organizations have made important contributions to the fight against the epidemic [1].
According to our investigation, the epidemic has made an important impact on homes and, in general, on the magnitude and depth of national poverty. Homes and families have made up for the loss of their adult members to AIDS as best they could. They have redistributed resources, for example, by taking children out of school to help at home, by increasing the number of working hours, readjusting the number of persons in the household, selling family goods and requesting financial and in-kind help from family and friends [5]. For the poorest families, it is more difficult to face this situation as their resources are scant or nonexistent. Children may be affected for life because of malnutrition or lack of schooling [12]. At the same time, there are many equally poor homes where, despite AIDS not having taken a toll, children are similarly handicapped due to their extreme poverty [7]. Simultaneously, some households have enough resources to face the death of an adult member without having to resort to official or NGO aid. Therefore, authorities will generally reach their aims in matters of fairness more effectively if, when focusing their help, they take into account not only poverty indicators but also the presence of AIDS in the home [10]. It is essential that available resources reach the homes of the neediest people through the combination of poverty reduction programs and measures mitigating the impact of the HIV/AIDS epidemic. I believe this research points to a group of people who are particularly vulnerable to HIV/AIDS because of poverty and who are suffering its devastating results. This research offers an analytic frame to decide which governmental interventions should be given top priority to combat the epidemic [5]. We are the first generation to possess the resources, knowledge and skills to eliminate poverty. Experience shows that where there is strong political resolve, we see progress. And where there is partnership, there are gains [14].
The world has been living with the HIV/AIDS epidemic for some thirty years, and prevention methods have been scientifically proven and disseminated to the public for nearly as long [1][2][3] . Yet, there are, according to the Joint United Nations Programme on HIV/AIDS High Level Commission on HIV Prevention, at least 7 000 new HIV infections every day -an alarming number that indicates HIV/AIDS awareness is at an unacceptable level of neglect by governments, civil society, and the private sector [20] . In September 2011, we will have a historic opportunity to build on and improve the performance of the past three decades. The promises world leaders will make, and words they will speak, will define the decade ahead: the decade that I believe will signal the beginning of the end of AIDS [1] . --- Conflict of interest statement We declare that we have no conflict of interest.
Objective: To study anemia in AIDS patients and its relation with socioeconomic, employment status and educational levels. Methods: A total of 442 patients who visited the Infectious Diseases University Hospital in Buenos Aires, Argentina were included in the study. Patients were divided into two groups, i.e. one with anemia and the other without anemia. Anemia epidemiology and its relationship with educational level, housing, job situation, monthly income, total daily caloric intake and weekly intake of meat were evaluated. Results: Anemia was found in 228 patients (54%). Comparing patients with or without anemia, a statistically significant difference was found (P<0.000 1) in those whose highest educational level reached was primary school, who lived in a precarious home, who had no stable job or were unable to work, whose income was less than 30 dollars per month, whose meat consumption was less than twice a week or who received less than 8 000 calories per day. Conclusions: The high prevalence of anemia found in poor patients with AIDS suggests that poverty increases the risk of suffering from this hematological complication. The relationship between economic development policies and AIDS is complex. Our results seem to point to the fact that the AIDS epidemic may affect economic development and in turn be affected by it. If we consider that AIDS affects the economically active adult population, despite recent medical progress it usually brings about fatal consequences, especially within the poorest sectors of society, where the disease reduces average life expectancy, increases health care demand and tends to exacerbate poverty and inequity.
Introduction According to the World Health Organisation (WHO), mental health disorders caused 13.1% of the global burden of disease in 2004; with unipolar depression predicted to be the greatest cause of disability burden worldwide by 2030, this already high percentage is set to rise further. However, funding for mental health services is still considered low priority, with almost one third of all countries not having a specific mental health budget. Of the countries that do, around one fifth spends less than 1% of their total health budget on mental health, and decision makers have been considering community-based resources to address this shortfall. One such community resource is social capital. Defined by Putnam as "social networks and norms of reciprocation", communities deemed rich in social capital consist of individuals who demonstrate high levels of generalised trust, high social and civic participation and high levels of generalised reciprocity. These individual-level social capital proxies are described as having a 'structural' dimension, relating to social networks, and a 'cognitive' dimension, relating to individuals' perceptions of trust and reciprocity. The two dimensions have been hypothesised to act in different ways to affect health outcomes, with many studies showing strong association between high levels of social capital and positive general health outcomes. In comparison, studies researching social capital and psychological wellbeing demonstrate less consistent results, with individual-level 'cognitive' social capital studies showing a more consistent inverse association with poor psychological health than studies investigating 'structural' measures, and no obvious pattern of association emerging from ecological-level social capital studies and psychological health.
Despite this fact, policy-makers worldwide, including the WHO and the World Bank, have employed elements of social capital as a means to promote and improve the mental health of populations. How social capital affects health outcomes is considered contentious. Kawachi et al. originally postulated that communities with high levels of social capital were more likely to deter 'deviant' behaviours such as drinking, smoking and crime, maintain access to local resources and even promote healthier behaviours, such as regular exercise. It has been further postulated that individuals perceiving high levels of trust and reciprocity in their communities have better health, due to reduced exposure to chronic stressors. These theories equally apply to psychological health, as regular physical exercise and maintaining access to resources affect psychological health outcomes, and high crime levels and chronic stressors are known precursors to worse psychological wellbeing. Further, active social participation, considered the "cornerstone" of social capital generation, has a positive effect on psychological wellbeing through increasing social ties and community integration. A further issue surrounding social capital is that, as a contextual phenomenon, it cannot be directly observed or quantified; this begs the question as to how social capital and its effects are empirically measured and tested. Regarding measurement, social capital is often quantified using individual-level proxies, such as generalised trust, voluntary group participation, voting levels and perceived reciprocity. Once measured, however, there is still the issue of testing. One school of thought is to aggregate individual-level indicators to a contextual level in order to capture contextual effects. In practice, however, contextual levels are often chosen solely by availability of data and may hold little relevance to individuals' day-to-day social interactions.
Furthermore, any contextual-level effects may be the result of confounding if individual-effects are not also taken into account . In the absence of appropriate community-level contextual units , the only option is to measure the effects of social capital at the individual-level. In doing so, however, one invites criticism that it is the effects of social support being measured, as social isolation and poor social networks have long been associated with poor health . This is most apparent when 'social participation' is used as a measure of social capital, as it is not difficult for readers to equate this source of social capital as a potential source of social support. To avoid such critique, we must therefore include social support variables alongside the individual-level social capital proxies in our investigation, to reduce any potential confounding of association. Furthermore, it is also vital that we keep 'cognitive' and 'structural' dimensions of social capital as separate entities, as the 'structural' dimension is the one most likely to influence health along social support pathways . One social support mechanism known to influence health is the role of marriage. Marriage has independently been shown to reduce morbidity and mortality and is thought to reduce risk-taking behaviour and stress , mirroring presumed causal pathways that elements of social capital act upon . Marriage is also thought to provide a level of health 'protection' via emotional and financial support for the individuals concerned . Though happy marriages are shown to contribute to better psychological health , marital distress/breakdown and remaining unmarried are, however, strongly associated with worse psychological health . Socio-economic status also has a positive association with psychological health outcomes , though its influence seems to depend on which measures of SES are used and how psychological health is measured . 
From the above, the potential for confounding the association between social capital and psychological health is great, unless multiple measures of social support and SES are also considered. The aim of this panel study is to research different dimensions of individual-level social capital, SES and social support against self-rated psychological health over a seven-year period. Along with known confounders, the considered variables will be individually and simultaneously tested, revealing any association with changes in psychological health over time. --- Materials and methods --- Data collection The British Household Panel Survey (BHPS) is a longitudinal survey of randomly selected private households, conducted by the UK's Economic and Social Research Centre. Details of the selection process, weighting and participation rates can be found on-line in the BHPS User manual. Since 1991, individuals within selected households have been interviewed annually with a view to identifying social and economic change within the British population. The Research Centre fully adopted the Ethical Guidelines of the Social Research Association; informed consent was obtained from all participants and strict confidentiality protocols were adhered to throughout data collection and processing procedures. The raw data used for this panel study come from the BHPS individual-level responses in years 2000, 2003, 2005 and 2007. --- Dependent variable The dependent variable in this study is self-rated psychological health, obtained using the 12-item General Health Questionnaire (GHQ-12). Depending on the answers to the twelve items offered by this instrument, respondents were deemed to have either 'good' or 'poor' psychological health. Although there are more complex instruments to measure psychological health, there seems little difference in validity between them and the GHQ-12 used here. All data were stratified by baseline psychological health to create two distinct cohorts.
This was done in order to track changes in PH over time from baseline. Individuals from the 'Good PH' at baseline cohort whose PH deteriorated over time were the subject of investigation in model one. Likewise, those in the 'Poor PH' at baseline cohort whose PH improved over time were the subject of investigation in a second separate model. --- Independent variables --- Social capital variables Our individual-level social capital items were interpersonal trust, active social participation and frequency of talking with neighbours. According to Putnam, communities with high levels of social capital consist of individuals who are more able to trust one another, who actively participate in local groups, and who demonstrate high levels of generalised reciprocity. Though no specific reciprocity data were available, we deemed 'frequency of talking with neighbours' a suitable social capital proxy. Interpersonal trust was assessed by asking people: 'Generally speaking, would you say that most people can be trusted, or that you can't be too careful?' Those respondents who stated that most people could be trusted were labelled 'Can trust others'; all other responses were labelled 'Can't trust others'. Social participation was measured by asking respondents questions about being active members of community groups, local voluntary organisations, or any sports, hobby or leisure group activity within the community. Only those who answered positively to any of these were judged to participate, with all others being labelled 'No participation'. Frequency of talking to neighbours was also considered a measure of social capital. Possible responses were: 'Most days, once or twice a week, once or twice a month, less than once a month, or never'. Those answering 'most days' or 'once or twice a week' were assigned the label 'two or more times per week'; the rest were assigned the label 'less often'.
--- Socio-economic status variables Education level was categorised as 'University or higher', 'Year 12' and 'Year 10 or less'. Social class was determined by occupation. The usual six categories were dichotomised into 'higher' and 'lower' social class. Household income was weighted according to household size by summing the income of all household members and dividing this sum by the square root of the household size. This item was maintained as a continuous variable per £1000 increase and was an expression of total income, net of any taxation. Employment status was categorised as 'Employed', 'Retired', 'Full-time student' or 'Unemployed'. --- Social support variables Respondents were asked if they were 'married, separated, divorced, widowed or never married'. Marital status was recoded into 'married' and 'unmarried'. A further variable, 'Lives alone', was also used to try to capture more information about those individuals who co-habited. Frequency of meeting with friends was considered a measure of potential social support. Possible responses were: 'Most days, once or twice a week, once or twice a month, less than once a month, or never'. Those answering 'most days' or 'once or twice a week' were assigned the label 'two or more times per week'; the rest were assigned the label 'less often'. --- Confounders Age and gender were considered confounders in this study, age being stratified into quintiles. --- Statistical analyses Each independent variable was run against the dependent variable in bivariate analyses using Generalized Estimating Equations (GEE), with an autoregressive working correlation structure, utilising the 'sandwich' covariance estimator. The reasoning behind this choice of model was twofold: firstly, repeated observations within the same subject are not independent of each other; the correlation structure corrects for this.
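The square-root equivalisation of household income described above can be sketched in a few lines of Python. This is an illustration of the formula only; the function name and the example incomes are ours, not part of the BHPS coding scheme.

```python
from math import sqrt

def equivalised_income(member_incomes):
    """Square-root equivalence scale: total net household income
    divided by the square root of the household size."""
    total = sum(member_incomes)
    return total / sqrt(len(member_incomes))

# Hypothetical household of four with a combined net income of £40,000:
# 40000 / sqrt(4) = £20,000 per member-equivalent.
print(equivalised_income([15000, 15000, 5000, 5000]))  # → 20000.0
```

Dividing by the square root of household size, rather than by the size itself, reflects the economies of scale of shared living: a four-person household needs more income than a single person to reach the same standard of living, but not four times as much.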
Secondly, when examining time-dependent variables, GEE estimates the 'real influence' of independent variables on the outcome over time by also correcting for the previous value of the outcome at t-1, as illustrated by the equation: Y_t = β_0 + β_1·Y_{t-1} + β_2·X_t + … + u_t. All analyses were conducted within the statistical software package STATA 11.0. The presence of social capital, higher education, household income and social class, employment, being married, cohabiting and meeting friends more often were all hypothesized to be associated with better psychological health over time. --- Results Table 1 shows the frequencies and total percentages of all the variables at baseline, stratified by psychological health, derived from Wave ten of the BHPS. This stratification represents the two separate cohorts under investigation, as previously explained in the 'dependent variable' section. The bivariate analysis results are presented in table 2 as prevalences and odds ratios (ORs) with 95% confidence intervals (CIs). The prevalence percentage gives the proportion of individuals with 'Worse' or 'Better' psychological health compared to baseline within each variable investigated. A multivariable GEE model was also built for both PH cohorts, adjusting for all statistically significant variables identified from the bivariate analyses. Results from these models are presented in table 3 as ORs with 95% CIs. --- Bivariate analysis -'Worse PH over time' As shown in table 2, column 1, those in the social capital categories 'cannot trust' and 'do not participate' were more likely to have worse PH compared to baseline. Talking less with neighbours was also associated with worse PH over time. None of the SES variables were statistically significant in bivariate analyses. Regarding the confounders, younger individuals seemed more likely to have worse PH over time, and females were 56% more likely than males to experience worse PH at follow-up.
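The odds ratios and Wald 95% confidence intervals reported in the tables can be illustrated with a small stdlib-only sketch. The 2x2 counts below are invented for illustration and are not taken from the BHPS data.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the delta method
    se = sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: worse PH among 'cannot trust' vs 'can trust'
or_, lo, hi = odds_ratio_ci(120, 380, 80, 420)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note that this simple 2x2 calculation ignores the within-subject correlation of repeated panel observations; the GEE model with the 'sandwich' covariance estimator used in the study exists precisely to correct the confidence intervals for that dependence.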
Being unmarried was the only measure of social support showing a significant association with worse PH over time. --- Bivariate analysis -'Better PH over time' As shown in table 2, column 2, the only social capital variable significantly associated with better PH over time was generalised trust. Regarding SES variables, increasing household income and higher social class were also associated with better PH over time. Being married and not living alone were the only social support variables significantly associated with better PH compared to baseline levels in bivariate analysis. The confounders age and gender were also strongly associated with better PH over time. --- Multivariable analysis -'Worse PH over time' As shown in table 3, column 1, being unable to trust and talking less with neighbours maintained their significant association with worse PH over time. Being unmarried was the only social support variable still significant in multivariable analysis. The confounders age and gender maintained their significant association with worse PH over time. --- Multivariable analysis -'Better PH over time' As per table 3, column 2, only the variables trust, marital status and household income, along with the confounders age and gender, maintained a significant association with better PH over time. All other measures of social support, social capital and SES were rendered non-significant in multivariable analysis. --- Discussion The aim of this panel study was to compare any association between different individual-level indicators of social capital, SES and social support with changes in individual psychological health outcomes over time. In multivariable GEE analysis with an autoregressive working correlation structure, only the social capital variable generalised trust, the social support variable marital status and the confounders age and gender maintained their association with PH over time. No SES variable remained statistically significant bar one, household income.
Our results mirror previous research regarding gender differences in psychological wellbeing. As numerous international studies also show that females are 1.5 to 3.0 times more likely to experience worse PH than males, this adds plausibility to our other results. The results also support previous research demonstrating that marriage protects against worse PH over time. As the autoregressive working correlation structure corrects for changes in individuals' marital status over time, this 'robust' result confirms that remaining married, or becoming married during this seven-year period, is strongly associated with better PH. Marriage is thought to have both a 'selection' and a 'protection' effect regarding PH, i.e. healthy individuals self-select into marriage and, as touched upon in the introduction, marriage is thought to 'protect' via emotional and financial support mechanisms. It is interesting to see that increased age seems to offer protection against worse PH. In the past, the reverse seemed more commonplace, but our results show, as do others, that worse PH is becoming more prevalent in younger age groups. That younger age groups are also strongly associated with better PH over time at first glance appears to contradict the previous statement. However, what this most likely demonstrates is younger individuals' greater ability to recover. After the considered confounders, generalised trust has the strongest association with PH over time in both multivariable models. Though there are fewer studies specifically examining the association between social capital and PH compared with general health, our results reflect earlier research, implying some level of consistency concerning the 'cognitive' dimension trust. It may seem obvious that there is an association between trust levels and psychological health, as lack of trust is often associated with clinical psychoses.
However, we should point out that the GHQ-12 instrument is not a diagnostic tool used by professionals to determine the mental health of patients, but a screening instrument designed to ascertain levels of anxiety, depression and loss of confidence in non-clinical settings. Therefore, generalised trust should be considered in this study as an individual's expression of their community's level of social capital, not paranoia. The 'cornerstone' of social capital generation, active participation, shows no significant association with PH in the multivariable models. This result adds to the increasing volume of research demonstrating that the separate dimensions of social capital are not as closely correlated as first thought. Fukuyama's concept of the 'miniaturization of community' further describes how active group participation may not necessarily generate interpersonal trust. He distinguishes between quantity and quality of group participation, the 'miniaturization of community' being one by-product of high group participation by individuals with low radii of trust. According to Fukuyama, without quality social participation there can be no gains in interpersonal trust within the community. Interestingly enough, individuals who demonstrate this 'high participation-low trust' combination have worse PH than those who both trust and participate. 'Miniaturization of community' is just one consequence of a greater shift in cultural norms described by Fukuyama, which has occurred across many high-income countries since the 1960s. This 'shift' comprised, among other things, an increase in levels of crime, higher divorce rates and the breakdown of the traditional family unit. Fukuyama and Putnam have also described general declines in levels of trust over a similar timeframe. This, in our opinion, is no coincidence; moreover, we hypothesize that the decline in trust could be one reason why there has been an increase in worse psychological health in youth over the same period.
To expand and clarify: Coleman stressed the importance of the traditional family unit as a conduit for social capital. Coleman believed that if parents spent quality time with their children and clearly articulated codes of conduct regarding acceptable and unacceptable behaviours, this would ensure the next generation understood accepted norms of reciprocity and trust. In other words, family capital investment enabled youth to generate social capital. It is not inconceivable that breakdown of the traditional family unit -e.g. through divorce, lone-parent families or both parents working full time -could lead to a reduction in family capital investment in youth by parents. This in turn could mean that successive birth cohorts since the 1960s have a reduced ability to maintain previous levels of social capital, thus leading to the reported decrease in trust levels across some societies. Coleman also writes that high family capital investment reduces delinquent behaviour; ipso facto, reduced family capital investment could also contribute to higher levels of crime and the further decline of trust over time. --- From trust to psychological health It has been argued that generalised trust is not just a reflection of community-level social capital but is indicative of an individual's level of perceived social stress and possible health status. The 'psychosocial' pathway from stress to health is via the hypothalamic-pituitary-adrenal (HPA) axis, and is one plausible mechanism by which individuals' perceptions can lead to physical changes in the body over time. In recent years, this same pathway has also been linked to psychological health; HPA axis dysfunction, in response to perceived stressors, plays a significant role in the development of mood disorders. If lower levels of trust are indicative of higher social stressors, then it seems plausible that the decline in trust could lead to deteriorating physical and psychological health in individuals.
Following this line of discussion, we could further hypothesise that maintaining traditional family structure is a determinant of social capital for future generations, which in turn may protect against worse PH. Results from previous studies lend credence to this hypothesis: social capital has been reported at higher levels within 'intact' families than within single-parent families, and population-based research has shown that youths born of teenage mothers are more susceptible to worse PH. Thus breakdown of traditional family structure could be the first step in one pathway affecting PH in future generations. With this in mind, policy makers, whilst developing welfare solutions in response to breakdown in traditional family structure, must also consider any perverse incentives they provide. Education empowers individuals; providing welfare without maintaining excellent levels of free/subsidised education could inadvertently promote further breakdown in traditional family structure, if disempowered individuals perceive welfare as a viable lifestyle choice. --- Strengths and weaknesses A major strength of this study is that it is longitudinal, covering a seven-year time frame with a high number of individual respondents. The unique design of this study captures association between our independent variables and any change in psychological health. Coupled with the auto-lag correlation structure, baseline stratification further allows us to infer causality by estimating the true influence of explanatory variables on changes in psychological health over time. The fact that the data were obtained via interview rather than relying on postal questionnaires contributed to the very high participation rate of around 90%, year on year. Despite being unable to compare our results against longer assessment tools, the GHQ-12 is still considered a valid and reliable indicator of psychological health.
By investigating three different individual-level indicators of social capital, along with multiple SES and social support variables, we ensured that well-known health determinants were also included in the analyses, thus reducing the risk of potential confounding. Though there is no 'gold standard' against which to validate it, generalised trust is considered a proxy of social capital. A major limitation of this study is that the BHPS sample was originally selected to reflect the UK population as a whole and deliberately avoided oversampling of smaller-sized communities. Due to the sampling and collection methods, the longitudinal data were unsuitable for performing any meaningful contextual analysis at the community level. By the year 2000, only 62.0% of the original cohort members were able to answer the questions posed. This would have introduced further selection bias into this study. Another limitation is that our social capital variables were only available in four of the seventeen 'waves'. Marital status was reduced to the dichotomous 'married' and 'not married'; though this method of reduction has been previously validated, it may hide more complex pathways regarding cohabitation, common in society today. The 'Lives alone' variable was included in an attempt to recapture this detail. Allowing significance levels to dictate the content of our final model could have similar disadvantages to using a stepwise analysis. However, we ran all hypothesized variables in one separate analysis for the sake of 'correctness'; the results on the independent variables in table 3 essentially did not differ. --- Conclusion Our study confirms that a strong positive association remains between the 'cognitive' social capital measure generalised trust and psychological health over time, even after taking many other social support and SES variables into consideration.
We consider the decline in trust over recent decades to be associated with reduced family capital investment, a possible consequence of traditional family unit breakdown. Furthermore, we argue that this decline in trust may be associated with increases in worse PH in successive birth cohorts. Policy makers, whilst justified in providing welfare solutions in response to breakdown in traditional family structure, must also consider the perverse incentives they provide. If perceived as a viable lifestyle choice, welfare provision could inadvertently promote further decline of trust, at even greater cost to society. --- Appendix Each of the 12 GHQ-12 items was dichotomised as denoting either 'poor' or 'good' psychological health; if three or more of the 12 items denoted 'poor' psychological health, overall psychological health was denoted as 'poor'. The items included in the GHQ-12 are 'Have you felt tense during the past weeks?', 'Have you had problems with your sleep during the past weeks?', 'Have you been able to concentrate on what you have been doing during the past weeks?', 'Do you feel that you have been useful during the past weeks?', 'Have you been able to make decisions in different areas during the past weeks?', 'Have you during the past weeks been able to appreciate what you have been doing during the days?', 'Have you been able to deal with your problems during the past weeks?', and 'Generally speaking, have you felt happy during the past weeks?'. These eight items had four alternative answers: 'More than usual', 'As usual', 'Less than usual' and 'Much less than usual'. The items were dichotomised with two alternatives denoting 'good' psychological health and two alternatives denoting 'poor' psychological health; i.e. for the first two questions 'More than usual' and 'As usual' denoted 'poor' psychological health, and for the following six questions they denoted 'good' psychological health.
Four other items had somewhat different alternative answers: 'Have you felt unable to deal with your own personal problems during the past weeks?', 'Have you felt unhappy and depressed during the past weeks?', 'Have you lost faith in yourself during the past weeks?' and 'Have you felt worthless during the past weeks?'. The four alternative answers to these four items were: 'Not at all', 'No more than usual', 'More than usual' and 'Much more than usual'. The answers to these items were also dichotomised to denote either 'poor' or 'good' psychological health.
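The dichotomous scoring rule described in the appendix (three or more items scored 'poor' yields an overall 'poor' classification) can be sketched as follows. Representing each dichotomised item as a boolean 'poor' flag is our simplification for illustration; it is not the BHPS variable coding.

```python
def ghq12_classification(item_poor_flags, threshold=3):
    """Classify overall psychological health from 12 dichotomised
    GHQ-12 items, each flagged True if that item denoted 'poor'
    psychological health. Three or more 'poor' items => 'poor'."""
    if len(item_poor_flags) != 12:
        raise ValueError("GHQ-12 requires exactly 12 items")
    return "poor" if sum(item_poor_flags) >= threshold else "good"

# Two items flagged 'poor' falls below the threshold of three
flags = [True, True] + [False] * 10
print(ghq12_classification(flags))  # → good
```

This mirrors the common GHQ-12 'caseness' approach: each item is first collapsed to a binary score, and the binary scores are then summed against a cut-off.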
The positive association between social capital and general health outcomes has been extensively researched over the past decade; however, studies investigating social capital and psychological health show less consistent results. Despite this, policy-makers worldwide still employ elements of social capital to promote and improve psychological health. This United Kingdom study aims to investigate the association between changes in psychological health over time and three different individual-level proxies of social capital, measures of socioeconomic status, social support, and the confounders age and gender. All data are derived from the British Household Panel Survey, with the same individuals (N = 7994) providing responses from 2000 to 2007. The data were split according to baseline psychological health status ('Good' or 'Poor' psychological health, the dependent variable). Using Generalised Estimating Equations, two separate models were built to investigate the association between changes from baseline psychological health over time and the considered variables. An autoregressive working correlation structure was employed to derive the true influence of explanatory variables on psychological health outcomes over time. Generalised trust was the only social capital variable to maintain a positive and highly significant (OR 1.32, p < 0.001) association with psychological health in multivariable models. All measures of socioeconomic status and social support were rendered non-significant, bar one. We argue that the breakdown of the traditional family unit (and the subsequent reduction in family capital investment), along with psychosocial pathways, demonstrates plausible mechanisms by which a decrease in generalised trust could lead to an increasing trend of worse psychological health in youth over successive birth cohorts.
Policy makers, while providing welfare solutions in response to breakdown in traditional family structure, must also consider perverse incentives they provide. If perceived as a viable lifestyle choice, welfare provision could inadvertently promote further decline of trust, at even greater cost to society.
Introduction Telework refers to working outside of the office or another physical organizational setting, such as within one's home or from another location, often using a form of information communication technology to perform work tasks and communicate with others both in and outside the organization [1]. To date, various organizational, political, and social factors have contributed to the rise and development of telework programs in the United States (for a review of the history of telework, see Allen et al. [2]). Research about the effectiveness of telework has gained popularity within the past decade, and telework has recently emerged as a highly important and relevant issue due to its increased prevalence during the COVID-19 pandemic [3]. For instance, at the height of the pandemic approximately 70% of United States workers with jobs conducive to telework were working from home or in a remote capacity [4]. Prior to the pandemic, approximately 3.6% of the U.S. workforce and 5.4% of all workers in the European Union reported teleworking full-time, and a greater number reported teleworking from home at least some of the time. First, it is important to define what we mean by telework to clarify the various terms that have been used to describe this type of work arrangement. Terms such as telecommuting, remote work, homework, virtual work, flexible work, and distributed work have been used interchangeably with alternating definitions in the literature [2]. This lack of consensus has led to challenges when evaluating prior research and findings due to changes in the implementation and location of this form of work. For the purpose of this review, we rely upon the following definition provided by Allen et al.
[2]: "Telecommuting is a work practice that involves members of an organization substituting a portion of their typical work hours to work away from a central workplace-typically principally from home-using technology to interact with others as needed to conduct work tasks." The purpose of this article is to summarize research regarding the associations between telework and worker health and well-being based on a thorough and multi-disciplinary review of the telework, work design, ergonomics, and occupational health psychology literature. Prior research on telework has largely focused on work-related outcomes, such as performance, rather than worker health and well-being. However, due to the increased prevalence of telework over the last decade and the sudden and large increase due to the COVID-19 pandemic, there is a critical need to understand how teleworking may impact workers' physical and psychological well-being. Prior literature reviews have also left substantial gaps in our understanding of how telework relates to worker health and well-being. Bailey and Kurland [1] discussed definitional and methodological challenges associated with telework research, as well as demographics relating to the "who, where, and why" of teleworkers. However, their review did not provide comprehensive coverage of the potential outcomes of teleworking. In a more recent review, Allen and colleagues [2] summarized the state of telework research, citing the importance of the extent of telework in research, and provided a thorough explanation of outcomes related to both work and social/family domains. However, the authors did not provide much insight into outcomes at the individual level nor health and well-being-related outcomes associated with telework.
Finally, Tavares [5] primarily focused on the pros and cons of telework and its proposed health effects but did not provide a broader picture of the relational components guiding the relationship between telework and worker health and well-being, nor did the review explain the conceptual and theoretical frameworks guiding the current state of telework research today. Our review aims to address these gaps. --- Current Review In the current review, we propose a conceptual framework for organizing and synthesizing telework and worker health and well-being research across disciplines. Our model includes predictors, mediators, moderators, and outcomes of telework at the individual worker, social and family, and organizational levels of analysis. We explain these components in depth, followed by recommendations for future research to advance our knowledge about substantive topics and address methodological issues. We conclude the paper with recommendations for organizational policies and practices to support positive worker health and well-being. We conceptualize worker well-being broadly, consistent with the Total Worker Health® definition described by Chari and colleagues [6], who defined well-being as "quality of life with respect to an individual's health and work-related environmental, organizational, and psychosocial factors. Well-being is the experience of positive perceptions and the presence of constructive conditions at work and beyond that enables workers to thrive and achieve their full potential". Additionally, in this article we focus on worker health and well-being, in lieu of work-related outcomes such as performance, due to the important associations between working arrangements and conditions and workers' physical and psychological well-being, including chronic diseases, pain, musculoskeletal injuries and conditions, anxiety, depression, job satisfaction, and worker engagement [7,8].
--- Methodology Our review sought to investigate the various ways in which telework relates to worker health and well-being, based on the definition of well-being established by Chari et al. [6]. To identify articles for our review, we conducted multiple literature searches using the Google Scholar and PRIMO search engines to investigate research published through January 2022. The keywords used for the literature searches included: "telework", "remote work", "telecommute", "telecommuting", "occupational health", "occupational health psychology", "work design", "ergonomics", "job demands", "job resources", "job characteristics", "well-being", "stress", "strain", "work and family", "health", "physical health", "mental health", "sleep", "gender", "age", "COVID-19" and "COVID". Additionally, we conducted searches with Google Scholar to identify articles which cited previous telework reviews, and also reviewed articles cited within previous reviews. Finally, we contacted members of our professional networks to request published and in-preparation papers about telework. We included articles in the current review which investigated telework as defined by Allen et al. [2], even when participants' telework was conducted after hours, on the weekend, or at a remote teleworking center. Due to language restrictions, we only reviewed articles written or translated in the English language, although we did not set restrictions regarding study region. Finally, we chose to begin our search with articles published during or after the year 2000, because prior reviews [1,2] thoroughly covered the literature prior to that time period. --- Theoretical Background There are two dominant theories in the occupational health literature that facilitate our understanding of the relationship between telework and occupational health: the job demands-resources model in occupational health psychology, and macroergonomics systems theory in ergonomics.
Next, we discuss these theories and how they contribute to our model of telework and worker health and well-being. --- Job Demands-Resources Prior studies have relied upon the job demands-resources model [9,10] to explain the relationship between telework and worker health and well-being [11,12]. According to this model, when individuals have insufficient resources to meet their job demands, burnout and strain result [9]. Job demands are the physical, social, or organizational components of a worker's job that require physical or mental effort, consume energetic resources, and are associated with physiological and psychological costs such as somatic health complaints and exhaustion. Job resources are aspects of a worker's job which fulfill basic psychological needs and may be used to alleviate job demands. Within our own model of telework and worker health and well-being, we suggest that telework is a job resource, and in particular a structural resource that may be used once or over time [13], to improve workers' ability to meet the demands of their job. We also incorporate the notion of personal demands, the individual standards a person sets for their performance and behavior [10], and personal resources, the positive characteristics of an individual which relate to their ability to successfully impact and control their environment, which interact with the efficacy of telework as a job resource. Examples of personal demands include perfectionism and emotional instability, whereas examples of personal resources include optimism and self-efficacy. From another perspective, investigators have focused on how the unique work arrangement created by telework presents new concerns for identifying additional context-specific demands and resources [11,12]. Some evidence suggests that teleworking changes typical job demands and resources available within the virtual work context [11]. 
Specifically, telework has been shown to be negatively related to exhaustion, partially due to the reduction of job demands such as reduced time pressure and role conflict, and increased perceptions of job resources such as the autonomy workers experience while teleworking. On the other hand, telework has been shown to be negatively related to workers' engagement in their job, in part due to the reductions in feedback and social support workers experience when teleworking, as well as increased role ambiguity, which may result when choosing to work away from one's central organization. Given the prior discussion, we also draw upon a work design perspective of telework in which job characteristics interact with the work environment, and incorporate a variety of job-specific and contextual factors which may influence telework's role as a structural resource. --- Macroergonomics Systems Approach Teleworkers' health and well-being rely on a composite of job resources, such as job characteristics, workspace design, ergonomic support, and information and communication technology (ICT). Ergonomics has been defined as "the use of knowledge of human abilities and limitations to the design of systems, organizations, jobs, machines, tools, and consumer products for safe, efficient, and comfortable use" [14]. Ergonomic science is not bound to a specific domain, but is broadly concerned with the interaction between humans and a given system, such as a given organization or the organization of one's work [15]. Macroergonomics is the study of work systems represented through workers working together, using technology, within an organizational system [16]. This organizational system is represented through an internal environment, both physical and cultural. The effectiveness of the organizational system is shaped by the design of both the technological and personnel sub-systems, and how well these components are designed in respect to one another.
Within the teleworking context, macroergonomics is crucial for determining how best to implement and support ICT for work [17] and for understanding risk factors for employee health and safety [18]. The interaction between the organization, the employee, and the effectiveness and availability of technology is a primary contributor to successful teleworking, as will be reflected throughout our review of the teleworking literature. --- A Conceptual Model of Telework and Worker Health and Well-Being Figure 1 lists the antecedents, outcomes, mediators, and moderators of telework at the individual, social, and organizational levels to provide a holistic picture of factors related to telework and worker health and well-being. We developed the model below to organize the various factors associated with teleworker health and well-being based on our review of the empirical literature that we summarize in this article. We encourage other researchers to draw upon, as well as further develop, our conceptual model when pursuing future research aimed at understanding the health and well-being outcomes associated with teleworking. --- Antecedents In the following sections, we describe various antecedents within the telework context associated with teleworker physiological, psychological, and professional health. --- Demographics --- Gender A variety of teleworking outcomes differ by gender. For instance, women are more likely to be expected to combine multiple roles when teleworking within the household and experience greater role conflict as they manage multiple roles at once, such as employee, partner, caregiver, or parent [19]. Additionally, women teleworking in the EU during the COVID-19 pandemic demonstrated greater odds of experiencing musculoskeletal pain and discomfort overall, were more likely to report high-severity pain and discomfort, and experienced more family-to-work conflict than men [20]. 
Furthermore, women, regardless of the presence of children in the home, reported significantly higher levels of pain and discomfort when teleworking than men with children, and women with children also reported significantly higher levels of stress than men with children. However, women without children present when teleworking reported less work-to-family and family-to-work conflict than both men and women with children. Thus, it is possible that the adverse effects of teleworking on women's physical and psychosocial health, as compared to men, are due, in part, to their likelihood of assuming multiple roles when teleworking [19], and that these effects are exacerbated when taking childcare responsibilities into consideration. Future work should aim to separate effects on teleworker health and well-being attributable to one's gender, or traditional gender biases, from those attributable to one's family structure or caregiving responsibilities. Furthermore, there are other important psychophysiological differences by gender relevant to telework. For example, in a mixed-method study, men had significantly higher levels of epinephrine, more commonly known as adrenaline, in the evenings after teleworking during the day than women [21]. The authors speculate this difference may be due to men being more likely than women to continue working into the evenings when teleworking, as evening levels for men were higher on days working from home than on days working from the office. Qualitative findings from this study also supported this assumption but did not allow for a statistical comparison. --- Age As the population ages and the traditional retirement age increases, employers are faced with many unique worker retention and recruiting challenges [22]. One way to effectively retain, recruit, and support older workers may be the use of flexible work practices, including telework [23]. 
Prior research has shown that telework usage, specifically the amount of time spent teleworking and the type of ICT usage, does not vary between younger and older workers [24,25]. However, older workers have reported lower self-evaluated computer skills within the telework context and lower willingness to telework overall [25]. Still, there are few, if any, empirical investigations that have evaluated whether these factors contribute to differences in the health and well-being of older workers. Considering the JDR model, it is likely that older workers have different job and personal demands compared to younger workers, which may impact their well-being. For instance, older workers have a higher risk of developing chronic health conditions, and middle-aged and older workers may have more eldercare responsibilities than younger workers [26]. Under these circumstances, there may be differential health and well-being outcomes between older and younger teleworkers. Future research and practice would benefit from additional studies identifying potential age differences. --- Location and Physical Environment Research regarding differences in telework locations is relatively nascent; however, where an employee chooses to telework may influence the outcomes of that telework for the employee. For instance, home-based teleworkers experience more work-life balance support than client-based workers and those working from remote tele-centers [27]. This may be due, in part, to the increased autonomy, flexibility, and decreased commute time experienced when working from home, whereas working from a remote tele-center may only help to reduce travel and may not provide the flexibility or autonomy needed to promote work-life balance. Remote home-based workers also report higher ratings of job satisfaction than client-based workers, further speaking to the benefits of working from one's home, specifically, versus remote work in and of itself. 
Beyond the geographic location of where one works, teleworkers might also experience adverse effects as a result of the physical environment, or "microclimate," of where they choose to perform their work [28]. In their short review, Buomprisco and colleagues discussed how factors relating to the physical environment of one's workspace, such as air quality or air circulation systems that promote proper atmospheric conditions, are associated with worker health. For instance, home-based teleworkers who lack proper air quality or humidity within their homes may experience adverse symptoms such as eye and respiratory irritation, headaches, and fatigue, among others. --- Occupation and Industry Jobs vary in the extent to which they can be performed remotely based on the nature of the tasks, work activities, and the setting or equipment needed for the job. Examples of jobs that cannot be done remotely or from home are those that involve handling or moving heavy objects; controlling machines or heavy equipment; operating vehicles or mechanized devices; or inspecting, repairing, and/or maintaining equipment, structures, or materials [29,30]. During the COVID-19 pandemic, many workers who had not previously teleworked shifted to telework as a measure of social distancing. Using data on work activities and work context in the Occupational Information Network, research during COVID-19 classified 37% of jobs as work that can be done at home [29]. Occupation and industry are associated with whether individuals engage in telework and/or the extent of telework. For example, research has also shown that some industries are more supportive of telework, either because they have the infrastructure available to support telework or because a large proportion of their workers are in occupations that are conducive to telework. There may be certain jobs that can be done remotely, but the organizational or industry norm does not generally support telework. 
Industries that saw the greatest increase in telework during the COVID-19 pandemic included educational services; finance and insurance; management of companies and enterprises; IT; scheduled air transportation; and professional and technical services. In October 2020, the Pew Research Center [30] conducted a survey about telework among workers in nine industries and found that the majority of workers in four of those industries indicated that their job can be done from home: 84% in banking, finance, accounting, and real estate; 84% in information and technology; 59% in education; and 59% in professional, scientific, and technical services. However, there has not been any research to date comparing worker well-being across occupations and industries in relation to telework. The process by which occupation relates to worker health and well-being is explained in the next section. --- Job Characteristics Job characteristics can vary on many dimensions. Psychological research has identified job characteristics that affect worker psychological processes, including motivation, experienced meaningfulness at work, and job satisfaction [32]. Job characteristics such as autonomy, participation in decision-making, and social support have been shown to predict teleworker work-related well-being [33]. Vander Elst et al. originally investigated whether these components mediated the relationship between the extent of telework and four indicators of work-related well-being. No direct or indirect relationship was found between the extent of telework and work-related well-being, as all of the included job characteristics were directly and beneficially associated with work-related well-being. The authors suggested these results reflect that how telework impacts employee well-being depends on how the work itself is organized, as well as on the organizational practices in place meant to support teleworking arrangements. 
The role of job characteristics in relation to worker health and well-being has also been studied via qualitative accounts among high-intensity teleworkers in China during the COVID-19 pandemic [34]. In Wang et al.'s mixed-method study, workers most frequently referred to the role of job characteristics, such as increased job autonomy and perceptions of work overload, in relation to their work productivity and ability to achieve work-life balance when teleworking. Participants also mentioned adverse characteristics of teleworking, which they often avoided when working from their central organization, such as monitoring from their supervisors, increased meeting frequency, and increases in loneliness and the need for social support. Furthermore, participant survey responses from Wang and colleagues' [34] study showed that social support was associated with lower levels of procrastination, ineffective communication, and work-home interference, and that both social support and job autonomy were associated with lower levels of loneliness. Conversely, participants' workload and the extent to which they perceived monitoring from their employers were linked to higher levels of work-home interference. These results are similar to those of Pulido-Martos et al. [35], in which survey responses from workers teleworking at various intensities demonstrated a positive relationship between the level of social support workers received and their levels of vigor, considered a personal resource while working. Thus, as the JDR model would suggest, social support and job autonomy are job resources which alleviate challenges associated with remote working, whereas workload and monitoring are job demands which compromise employee well-being. 
However, the authors also found that workers teleworking full-time experienced lower levels of social support, and subsequently lower levels of vigor, than workers who had a hybrid or face-to-face work arrangement. Nonetheless, these findings support a work design perspective of telework in which job characteristics are antecedents to both employee performance and well-being [34]. Across studies, high levels of social support and autonomy help in overcoming potential challenges of teleworking, such as feelings of loneliness, and in turn are associated with teleworker performance and well-being. --- Extent of Telework In the early stages of telework research, studies often focused on differences between teleworking and non-teleworking workers [2]. Only in recent decades have investigators begun addressing the extent of telework, or the average amount of time an individual spends teleworking as a proportion of their working week [36]. The extent of telework has become a common denominator across teleworking studies. The underlying notion of these investigations is that a worker who teleworks once a week is likely to have different experiences than a worker who spends their full week teleworking [2]. Along these lines, the extent of telework has been shown to be a significant predictor, as well as moderator, of multiple worker health and well-being outcomes in addition to work-related outcomes such as job performance. A number of studies have shown a positive association between the extent of telework and work-related well-being. For instance, telework is positively related to job satisfaction [36][37][38][39], especially for those who telework a moderate amount versus more or less frequently [36]. This curvilinear relationship between the extent of telework and job satisfaction has been replicated within multiple studies [36,37]. 
However, it is important to note that job satisfaction does not decrease dramatically for those who telework more than a moderate amount, but only tapers slightly or plateaus at higher teleworking intensities. One explanation, among many, for the positive association between the extent of telework and job satisfaction is decreased interruptions within one's home and limited exposure to organizational politics [40]. Furthermore, work by Golden [37] indicated that the utilization of telework is related to higher-quality relationships with leaders, lower-quality relationships with coworkers, and decreased work-family conflict. These relations, in turn, were positively related to job satisfaction and, notably, grew in strength with the amount of time spent teleworking. Conversely, limited face-to-face interactions and social isolation may contribute to the plateau in job satisfaction at higher intensities of teleworking [36]. The extent of telework also has consequences for employee health and well-being. A longitudinal study using employee health claims, health risk assessment data, and employee remote activity hours showed that the number of teleworking hours has implications for employee health [41]. Employees who worked remotely eight hours or less a month were more likely to reduce their risk of depression over time than non-telecommuters. Furthermore, teleworking more hours per month was associated with lower instances of alcohol abuse and tobacco abuse, as well as lower health risks overall as calculated by participants' Edington risk scores. The opposite was found for stress: the more hours employees telecommuted, the greater the risk of overall perceived stress. However, in a different study, Vander Elst et al. [33] did find that the extent of telework was indirectly related to employee emotional exhaustion, cynicism, and cognitive stress complaints, as mediated by social support. 
Finally, one consideration when evaluating the extent of telework for worker health and well-being is the degree to which our understanding is constrained by the modest time spent teleworking in past employee samples; empirical work including participants who telework at high intensities is less common. However, the COVID-19 pandemic has provided researchers with the opportunity to evaluate worker health and well-being under high-intensity telework, with many workers transitioning to full- or close to full-time telework, especially during the early stages of the pandemic. We expect upcoming work to provide further clarification on the role of high-intensity telework for worker health and well-being. --- Individual Differences --- Personality Characteristics Despite differences in work performance across workers with various personality characteristics [42], little research has addressed the role of personality in predicting teleworkers' health and well-being outcomes. However, the extant literature does tell us that personality plays a role in determining teleworker health [43]. In two field studies, workers who were high in emotional stability, and who also reported high autonomy, experienced the lowest levels of psychological strain. Overall, there was a negative relationship between the extent of telework and strain for these workers, which the authors attributed to these workers being best able to meet their needs for autonomy, relatedness, and competence through remote work, which in turn reduced perceived strain. The opposite associations were seen for workers low in emotional stability. In spite of reporting high levels of autonomy, these workers were more susceptible to strain, and were likely to experience more strain as the number of teleworking hours increased. The need for autonomy is also related to higher levels of job satisfaction among teleworkers versus non-teleworkers [42]. In their study, O'Neill et al. 
evaluated personality characteristics as predictors of teleworker versus non-teleworker performance and job satisfaction. Although only the relationship between the need for autonomy and job satisfaction was significantly stronger for teleworking employees, there was some evidence that sociability has a negative association with teleworker job satisfaction, though this relationship was not statistically significant. Finally, recent research has investigated proactive personality, which refers to one's "tendency to identify opportunities for change, and to act on them until they bring about the desired change" [44]. A recent study by Abdel Hadi et al. demonstrated how having a proactive personality is a beneficial antecedent to teleworker health and well-being. In their daily diary study, Abdel Hadi and colleagues surveyed German employees about facets of their personality and occupational characteristics, as well as their daily perceptions of job and home demands and the extent to which they engaged in leisure crafting, during a mandated COVID-19 lockdown in which workers were required to remain in their homes. Petrou and Bakker [45] referred to leisure crafting as a mechanism for proactively pursuing leisure activities aimed at reaching a goal, human connection, learning, or personal development. In Abdel Hadi et al.'s study [44], participants who scored higher on a measure of proactive personality also reported more engagement in leisure crafting and fewer job and home demands, which in turn were associated with lower levels of emotional exhaustion and better job performance. Thus, as a whole, we might consider both one's emotional stability and proactive personality as resources which limit the adverse effects of job demands, such as emotional exhaustion, within the telework context. --- Boundary Preferences Prior research has established that telework blurs boundaries between work and home [2,46]. 
Boundary management refers to the process by which, and the extent to which, individuals separate their home responsibilities from their work responsibilities, or vice versa. Within the telework setting, employee boundary strategies range from those which are highly segmented, such as having a separate office for remote work or setting strict working hours within the home, to strategies that integrate or combine roles within the home [47]. Thus, workers may use boundary management strategies to categorize role demands into the domains of either their work or home [48]. Based on the JDR model, one's ability to effectively manage the boundaries between work and home, when also working at home, may be seen as a personal resource which relates to workers' health and well-being. A worker's preferred boundary management strategy is related to various occupational health outcomes. For instance, workers with integration-based strategies tend to report more family-to-work conflict, in which family roles and responsibilities interfere with the work domain [48]. Similarly, Allen et al. [49] found that segmentation preferences were positively related to work/non-work balance, and the same association remained consistent over three months. Additionally, those with greater boundary permeability, especially when nonwork behaviors are interrupted by work-related responsibilities such as working after hours or on the weekends, are more likely to have increased work-to-family conflict within the telework context [47,50]. Considering the empirical literature, workers who implement segmentation boundary strategies are at a lower risk of adverse occupational health outcomes. In a qualitative study of 40 teleworking workers, Basile and Beauregard [51] identified physical, time-based, behavioral, and communicative strategies that successful teleworkers implemented within their homes. 
These strategies included having a separate office or space to be used for work, engaging in activities which signaled the end of the working day, switching off email or work phones after work hours, and informing friends and family of their boundary expectations regarding interruptions during the work week. --- Economic Factors --- Commute Time Reduced commute time has not been shown to be a motivator for teleworking [1], although reductions in driving time, in particular, are linked to reductions in commute stress for teleworkers [26]. It is also possible that reductions in commute time may benefit teleworkers' physical health. In general, passive commute distance, or distance commuted by vehicle, has a negative relationship with several physical health indicators such as physical activity and cardiorespiratory fitness, and is adversely associated with one's BMI, waist circumference, blood pressure, and metabolic risk [52]. However, the benefits associated with commute time may only arise for workers teleworking a full workday from home. Teleworkers choosing to conduct their work from libraries, cafes, or similar locations are likely to avoid peak-hour travel but unlikely to experience significant reductions in travel time [53]. Furthermore, workers who attend their central workplace before teleworking the remainder of their workday are also unlikely to experience reductions in travel time or to avoid peak-hour travel, which may also contribute to differences in perceived stress between full- versus part-day teleworkers [26]. Full-day teleworkers are also more likely to rely on active modes of transportation when leaving their homes [54], which might serve to benefit employee cardiovascular health. With regard to work-related outcomes, reductions in commute time have also been linked to increased productivity. 
However, this increase in performance is speculated to be due to longer working hours, as teleworkers may continue working into time typically reserved for driving or other forms of travel [1,2,55]. Notably, longer working hours are, in turn, adversely associated with coronary heart disease and depression [56]. Thus, future research should identify whether reductions in commute time as a result of teleworking, and in particular reductions in passive commute time, have an effect on worker health and health behaviors, and under what conditions we may expect reductions in commute time to relate to positive health outcomes. For instance, we might expect a positive relationship between a reduction in passive commuting and cardiorespiratory fitness if workers choose to use the time previously allocated to a passive commute for beneficial health behaviors such as physical activity or preparing healthy meals. --- Economic Resources In terms of cost savings and economic resources related to telework, much of the empirical literature is rooted in the overall business impact, though some research has addressed the role of economic resources for the employee within the teleworking context. For instance, prior research has shown that although reduced commute time and the availability of ICT play a minor role in predicting the choice to telework [57], reductions in commute time might also reduce or alleviate the financial strain of paying for gas, road tolls, and public transit. Some research counters this assumption, however, by showing that teleworkers drive more non-work miles than non-teleworkers [58][59][60], often replacing commute miles with non-work trips, such as to the grocery store or to run other errands that would otherwise take place alongside one's daily work commute. From an occupational health perspective, future research might consider the impact that one's financial and personal resources have on the effectiveness of telework utilization. 
Factors such as the availability of updated computer technology, a high-speed wireless connection, who bears the cost of technology, and whether workers have sufficient space within the home for a separate workspace might contribute to workers' well-being. --- Ergonomic Resources --- Training Organizational concern for worker health and safety should not differ between the home and the traditional office space [61]. However, there is little empirical literature related to telework and ergonomic factors. Teleworking employees often have little awareness and knowledge of ergonomic and safety issues within their homes [17]. In addition, many companies lack sufficient regulation and policies regarding the setup and ergonomic evaluation of in-home and remote workspaces [62]. Compounding these issues is the lack of reliable injury frequency and severity reporting for teleworking employees [17,26]. These gaps are surprising, as in-home and remote workers are still performing work-related duties, and injuries incurred while working may still be covered through Workers' Compensation [62]. When teleworkers are not provided with proper ergonomic training and resources, such as an organization-provided ergonomic workstation, sufficient technical assistance to evaluate and adjust one's workstation as needed, and training on best ergonomic and/or telework practices, they incur increased musculoskeletal and psychosocial risks [62]. Prior research has shown that teleworkers often set up their own telework spaces and engage in risky behaviors such as working from the couch or other uncomfortable workspaces. Harrington and Walker [63] state that, without appropriate training, workers are likely to be unaware of the risk that these and other home-working behaviors pose for developing chronic musculoskeletal disorders. Accordingly, home office ergonomics training has been shown to improve workers' knowledge, attitudes, and ergonomic practices. 
Furthermore, ergonomics training is associated with less pain and discomfort for workers receiving the training. In addition, workers who receive telework-specific training adjust faster to teleworking than those who do not receive training [55]. This indicates that ergonomics training may have beneficial effects for both teleworker physical and psychological health. --- Information and Communication Technology Based on the JDR model, ergonomic training and sufficient computer technology and assistance are job resources which can alleviate adverse occupational health outcomes. Suh and Lee [64] show how the interaction between one's technology and the characteristics of one's job can lead to technostress, or stress caused by information and communication technology. Technostress can then lead to reductions in job satisfaction. For example, in Wang et al.'s [34] study of teleworkers during the COVID-19 pandemic, the top inconveniences reported were related to insufficient technology and desktop space, as well as slow internet speeds. Furthermore, as the prevalence of teleworking increases, so might the prevalence of virtual work meetings among members of an organization. With this in mind, researchers have begun to note the association between forms of ICT usage for conducting meetings and tasks among remote workers and the potential effects on employee health and well-being. For example, virtual and video-based meetings can lead to fatigue [65], and subsequently reduce workers' engagement and likelihood of voicing concerns in the workplace [66]. These effects are particularly pronounced for women and newer workers, who may be more concerned with impression management. When evaluating these results through the lens of the macroergonomics systems approach, we can further understand how the organization of technology and personnel subsystems can affect the trajectory of worker health and well-being outcomes. 
--- Organizational Factors --- Support for Telework The amount of support an individual receives from their organization also plays a role in facilitating employee well-being. Using a socio-technical systems approach, similar to the macroergonomics systems approach, Bentley et al. [67] hypothesized that when employees perceive support from the organization addressing the technical, person, and organizational subsystems, teleworking is likely to relate to better worker well-being. The authors included measures of both organizational social support and telework support, where telework support refers to organizational practices which support the effective practice of teleworking. These practices include trust and resources provided by one's supervisor for teleworking, as well as the amount of technical support provided to the teleworker. In line with their hypothesis, both organizational and telework support significantly increased job satisfaction while also decreasing psychological strain for teleworking employees. In addition, organizational support was associated with lower levels of social isolation. These results support the fundamental assumptions of the socio-technical systems and macroergonomics systems approaches, which suggest that the effectiveness of telework is associated with how well the organizational, person, and technical systems of the organization are aligned. --- Formality The formality of one's telework arrangement, established by organizational policies or arrangements with one's supervisor, may contribute to a worker's overall telework experience. More flexible and informal arrangements may increase workers' sense of autonomy and perceptions of support, versus formal policies which dictate when, where, and how one engages in telework. Accordingly, Kossek et al. [47] recommended that future researchers identify the formality of workers' telework arrangements in their studies. 
In their study, having a formal telework policy was related to higher performance ratings. However, having a formal policy was also associated with higher levels of depression, with the exception of women with children. The use of formal telework arrangements has also been associated with higher reports of employee job satisfaction when compared with informal arrangements [68], although these differences were only statistically significant for women. Following Kossek et al.'s [47] recommendation, future work should continue to balance workers' perceptions of flexibility against the formality of their telework arrangements. --- Summary Our review indicates there are a variety of factors which relate to whether and why individuals may engage in telework. For example, gender is associated to some degree with teleworkers' comfort and psychological well-being while teleworking. Men typically have healthier teleworking experiences than women, although these findings seem to be related to work and family roles rather than gender per se. For example, the results indicated that women are more likely to juggle multiple roles when teleworking [19], which is consistent with traditional gender role norms [69]. Other factors, such as reductions in commute time, the location where work is performed, occupation and industry, job characteristics, organizational support, and access to working computer technology and ergonomically designed workstations, also have an impact on workers' telework experiences. Outcomes associated with telework are described in the next section. --- Outcomes --- Physical Health Research evaluating the physical health of workers as a result of telework is still sparse and equivocal. According to Henke et al. [41], the extent of telework is beneficially associated with employee health, with teleworking employees having a lower overall risk of poor health than non-teleworkers. Similarly, in a study by Lundberg et al. 
[21], both men and women had lower systolic blood pressure, a known stress indicator, when teleworking versus working from the main office, although this association was only significant for women. However, reduced blood pressure for teleworkers may be due to reduced physical activity, rather than reduced stress. Though limited, these results suggest there may be pros and cons for employee physical health when teleworking, and future research should aim to identify which health behaviors and job resources may alleviate demands on worker health and well-being while teleworking. --- Health Behaviors Research with a specific focus on employee health behaviors within the teleworking context has only recently emerged, and only a few studies have identified the role of telework in predicting employee health behaviors such as physical activity, nutrition, and substance use. As briefly mentioned in prior sections, Henke et al. [41] found teleworking employees to be at a significantly lower risk of poor nutrition, physical inactivity, and tobacco use than non-teleworkers. Furthermore, workers performing 50% or more of their teleworking hours during traditional hours were at a significantly lower risk of alcohol abuse. On the other hand, those who telework during non-traditional hours or over the weekend were at a higher risk of alcohol abuse than both those who telework during traditional hours and non-teleworkers. More recent research conducted during the COVID-19 pandemic [70] also identified changes in substance use behavior, although it is difficult to assess the extent to which the increase in substance use was related to work or to other factors associated with well-being. In general, the positive association between telework and healthy behavior is in line with the empirical literature on workplace flexibility, where employees reporting higher levels of flexibility also reported higher frequencies of physical activity [71].
Similarly, results from Allen et al. [72] indicated that greater flexplace flexibility is associated with less fast-food consumption. With regard to health care utilization, results from Butler et al. [73] showed no significant differences in health care utilization between those with higher and lower workplace flexibility. Nonetheless, we did not find any articles referencing this relationship within a telework-specific context. In relation to worker sleep health and hygiene, employees reporting higher levels of flexibility at work also reported a higher number of hours slept on average [71]. Furthermore, workers transitioning from the office to telework in Japan during a mandated COVID-19 lockdown reported getting more sleep after their transition to home-based telework [74]. Nonetheless, it is important to note that although proper sleep hygiene behaviors may help workers' well-being, sleep duration and quality can also be affected by factors such as chronic health conditions, work schedules, presence of children in the home, and other individual differences [75]. Future telework research examining differences in sleep duration and quality should account for these factors. --- Musculoskeletal and Pain Symptoms Much of what we know regarding musculoskeletal symptoms and telework is related to extended computer usage. Teleworkers typically rely on computer technology as their main mode of task completion, and computer use is associated with extended static postures, repetitive movements, and wrist and forearm fatigue [55]. Subsequently, these factors are associated with the development of musculoskeletal symptoms and disorders within the neck, wrist, shoulders, hands, and lower back.
Although adequate computer workstations and ergonomics training may alleviate these risk factors [63], in Montreuil and Lippel's empirical study [55], 54.5% of teleworkers experienced pain symptoms in their upper limbs, back, and neck, which they attributed to inadequate computer and workstation furnishings. Furthermore, the lack of interruptions and face-to-face interaction when teleworking may lead to a reduction in work breaks or longer working hours for some workers, which may also strengthen the likelihood of developing musculoskeletal symptoms. Lastly, there is speculation that psychosocial aspects of telework, including time constraints and a lack of social support, may lead to the development of musculoskeletal symptoms among teleworking employees [55,76]. --- Mental Health There is a notable lack of empirical investigations into telework that include measures of anxiety, depression, and other indicators of mental health. The research that is available is fairly equivocal. For example, Henke et al. [41] reported that employees teleworking eight hours or less a week were significantly less likely to experience depression than non-teleworkers. On the other hand, Mann and Holdsworth [19] found teleworking employees to experience significantly more mental health symptoms related to stress as measured by the Occupational Stress Indicator [77]. The difference in mental health outcomes between these studies may be due, in part, to differences in the extent of telework practiced by participants. Henke et al. [41] surveyed participants across a spectrum of weekly telework hours, whereas the participants in Mann and Holdsworth [19] were either full-time teleworkers or full-time office workers. --- Psychological Well-Being Psychological well-being refers to attitudes and experiences workers have related to their overall well-being, such as job satisfaction, life satisfaction, and burnout.
Literature relating to the psychological well-being of teleworkers is also largely indeterminate, although there is general agreement that job characteristics play a large mediating and moderating role in predicting teleworker psychological well-being. However, there is also an abundance of measures that investigators have used to evaluate teleworker well-being, and this inconsistency may be contributing to the equivocal results regarding the relationship between telework and worker well-being. Nonetheless, certain trends do emerge within the literature. For example, Song and Gao [78] found telework to be associated with lower levels of tiredness, and Sardeshmukh et al. [11] reported that telework is significantly associated with lower exhaustion, partially mediated by job demands and job resources. However, Sardeshmukh et al. [11] also reported a significant negative association between telework and job engagement, partially mediated through the same demands and resources. Similarly, Vander Elst et al. [33] did not find a direct relationship between the extent of telework and work-related well-being indicators, but the authors did find an indirect, negative relationship between the extent of telework and work-related well-being via lower levels of social support. Thus, teleworkers who teleworked more days a week experienced less social support, and in turn experienced higher levels of adverse well-being indicators. These results support those in Sardeshmukh et al. [11], where social support was a prominent mediator between telework and well-being, and also reflect findings suggesting that teleworkers may experience higher levels of exhaustion when teleworking at high intensities [79]. However, participants in the latter study only experienced increased exhaustion when also experiencing high levels of work-family conflict.
In a different study, Duxbury and Halinski [80] found the extent of telework to negatively moderate the positive association between the number of total hours worked and work strain. Telework was shown to help workers with high job demands alleviate the negative influence of those demands on their work-related well-being. This process may be due to an increase in job control when workers utilize their ability to telework. These results complement reports from Perry et al. [43] in which employees high in emotional stability and high in perceived autonomy experienced the least psychological strain when teleworking, regardless of the amount of time spent teleworking each week. Finally, in a recent quasi-experimental, daily-design study, participants from Belgium who teleworked up to two days a week as part of a two-week intervention reported lower levels of perceived stress post-intervention, as well as lower levels of perceived daily stress on days when those participants teleworked [81]. Teleworking participants also reported higher levels of work engagement on days spent teleworking; however, participants' overall work engagement did not change compared to pre-intervention. Considering the previous discussion, more research is needed to uncover the intricacies of the relationship between telework and worker psychological well-being. However, the majority of evidence seems to suggest that the beneficial association between telework and psychological well-being may largely rely on the design of one's teleworking arrangement, as well as the job resources in place for mitigating personal and family demands. --- Social and Family Outcomes 6.2.1. Work/Family Conflict and Balance Research into the extent to which telework is beneficial or detrimental to balancing work and family has yielded largely equivocal results. From one perspective, providing flexible work arrangements such as telework is seen as a way to increase work-life balance and reduce work-non-work conflict [2].
In particular, telework might lead to reductions in commute time, perceived flexibility over both one's workplace and work schedule, and opportunities to manage familial and personal responsibilities, such as a child home sick from school, without extreme disruptions from work, all of which may help promote employees' perceptions of work-life balance. On the other hand, telework inherently blurs the spatial boundaries between work and home, thus increasing the likelihood of work-family conflict [19]. For a review of work/nonwork outcomes and empirical literature prior to 2015, see Allen et al. [2]. Since the start of the COVID-19 pandemic, more studies have investigated work-family conflict as an outcome of telework. First, traditional gender dynamics have seemingly held during the COVID-19 pandemic, with important implications for worker health while teleworking [82]. Shockley et al. surveyed heterosexual, dual-earning couples at the beginning of the COVID-19 pandemic, as well as during a two-month follow-up. A substantial proportion of couples reported maintaining historical gender norms relating to how they managed work and family during the COVID-19 pandemic. For these couples, in which the wife worked remotely and was solely responsible for childcare without alterations to her husband's work schedule or location, women reported significantly poorer outcomes, including the lowest ratings of family cohesion, the highest ratings of relationship tension, and the poorest reports of job performance, even when compared to women who were the sole remote worker but received at least partial assistance with childcare responsibilities from their husbands, as well as those outsourcing childcare. Men in these relationships also experienced adverse outcomes and reported the lowest levels of family cohesion and the highest levels of relationship tension.
Conversely, when couples chose to alternate in-person working days as well as childcare responsibilities, they also experienced the best relational and performance outcomes. Overall, results from Shockley et al. perhaps demonstrate the nuanced role of telework as a flexible work arrangement and also speak to the need for family-supportive practices, such as the provision of childcare resources or flexible work schedules, which might better support women and men looking to balance both work and family. Even so, teleworking in light of the COVID-19 pandemic may have silver linings for closing the gender gap in familial childcare overall, despite potential effects on familial strain or job performance. During the COVID-19 pandemic, teleworking fathers increased the amount of time spent engaging in childcare overall, more closely approaching the time typically spent by teleworking women [83]. Similarly, in Pineault et al.'s [84] study of dual-earning heterosexual couples, women undertook significantly more physical and cognitive household labor when both members of the couple worked outside of the home than when both members were teleworking. However, women still assumed 60% or more of both the cognitive and physical household labor, regardless of where each member chose to work. --- Interpersonal Relationships Although a primary assumption of teleworking is that it affords employees more autonomy and flexibility, authors have discussed a paradox in which the increased control teleworking affords is undermined by a negative association with outcomes in the social domain. For example, Mann and Holdsworth [19] present a social comparison effect in which participants report a tendency to look to others in order to derive behavioral norms [85]. The reduction in face-to-face communication and a reliance on ICT can therefore lead to adverse social effects for teleworkers.
Subsequently, teleworking employees may experience more negative emotions than office workers, largely due to feelings of loneliness and social isolation. However, in other cases, a lack of face-to-face interaction may be seen as a benefit of telework [40,86]. In Collins et al.'s [86] qualitative study, participants welcomed the opportunity to be removed from the social environment of the office. For these employees, telework afforded them the opportunity to avoid social conflict or negative office relationships and, in turn, foster positive work relationships. However, the longer and more frequently employees teleworked, the less connected they felt with their office-based co-workers, and they oftentimes did not forge new office-based relationships beyond those established prior to the commencement of their telework arrangement. Thus, over time, teleworking employees may begin to experience a reduction in their social support network. Considering this notion, both researchers and practitioners should account not only for the extent of telework, but also for how long one has been teleworking when evaluating interpersonal outcomes associated with telework. --- Work-Related Outcomes 6.3.1. Job Satisfaction Research investigating the association between telework usage and job satisfaction is vast. There seems to be consistent agreement that telework is associated with increased job satisfaction [39,87,88]. In a meta-analysis of 28 studies, Gajendran and Harrison [88] indicated a positive relationship between telework usage and job satisfaction. However, Golden and Veiga [36] reported a curvilinear association between the amount of time spent teleworking and job satisfaction, such that the positive association between these variables plateaus at higher levels of telework hours. Research has revealed that these relationships are defined by a variety of mediators and moderators.
For instance, Gajendran and Harrison [88] also showed that perceived autonomy fully mediates the relationship between telework and job satisfaction, while both work-family conflict and relationships with supervisors are partial mediators. In addition, the curvilinear link between the extent of telework and job satisfaction in Golden and Veiga's [36] study was moderated by both task interdependence and job control. Additionally, Golden [37] showed leader-member exchange quality, team-member exchange quality, and work-family conflict to mediate the curvilinear relationship between the extent of telework and job satisfaction. The importance of job characteristics in helping to define the relationship between telework and job satisfaction has been modeled in numerous other studies. For instance, participants who were teleworking full time from India during the COVID-19 pandemic reported the highest levels of job satisfaction when also reporting high levels of job autonomy and family-supportive supervisory behaviors [89]. Specifically, when participants reported high levels of job resources, they also reported high levels of work-life balance, subsequently leading to higher levels of job satisfaction. Furthermore, participants reported the highest levels of job satisfaction when they not only perceived high levels of job resources, but also reported having at least some experience teleworking prior to the COVID-19 pandemic. Finally, Fonner and Roloff [40] showed that reduced disruptions from colleagues and office politics when teleworking positively impact teleworker job satisfaction. In addition, media richness [90] and stress related to technology usage [64] also contribute to teleworking employees' job satisfaction. These latter components play a part in our understanding of the telework and job satisfaction relationship through the macroergonomic perspective.
Thus, organizations may not expect positive work and health outcomes without taking both the technological and human subsystems into consideration. --- Absenteeism/Presenteeism Although limited, research has shown that providing workers with the option to work away from the central office is associated with reduced absenteeism. However, this reduction in absence may be tied to a health trade-off for employees who continue to telework when sick. For instance, while an employee may choose to telework when feeling unwell in order to prevent the spread of communicable disease and avoid absence from work, their work performance is likely to be less than optimal, and continuing to work could slow their recovery. While this process, referred to as presenteeism, is often a challenge for both office workers and home workers, the option to telework exacerbates the opportunity for presenteeism by removing the physical presence of the employee [19]. Steward [91] also showed that this form of invisibility was related to workers' likelihood of working while sick. Interview participants and survey respondents indicated that their lack of presence in the office made it harder for them to justify the need to take a formal sick day, and employees often chose to continue working despite malaise. Mann and Holdsworth [19] suggest that employees may also feel lucky, or "privileged", to work from home and choose to work through sickness in order to preserve their opportunity to telework. Thus, it is not possible to interpret reductions in absenteeism as an indication of positive health status among teleworking employees [91]. --- Summary There is still much to be known about the health effects of telework. Much of the literature about physiological and musculoskeletal outcomes of telework is indeterminate. Telework has both beneficial and adverse effects for worker health and well-being.
Telework outcomes seem to be regulated by working context and job characteristics, including autonomy and support. The extent of telework also plays a primary role in predicting the job satisfaction and overall health of teleworkers. Next, we review the moderators and mediators which help to define our understanding of what happens to employee health when one utilizes telework. --- Mediators --- Job Characteristics Much of what we know about the mediating role of job characteristics in the relationship between telework and worker health and well-being relates to worker autonomy. Telework is directly associated with higher perceived autonomy, or control over how one completes their work [88], and perceived autonomy is among the strongest job characteristics for explaining the relationship between telework and employee outcomes. Gajendran and Harrison [88] found autonomy to fully mediate the positive effects of telework on job satisfaction and to partially mediate the impact of telework on employee stress. Considering the JD-R framework, autonomy also partially mediates the impact of the extent of telework on both exhaustion and job engagement [11]. Sardeshmukh et al. suggest that the mediating role of autonomy is due to the lack of constraints linked to office routine, the ability to navigate when tasks are completed during the day, and potentially less managerial oversight. These components allow employees to conduct their work tasks in line with their own preferences, reducing exhaustion and alleviating psychological strain. 7.2. Social Context 7.2.1. Relationship Quality Despite previous findings regarding the isolating impact of telework on employee well-being [19], Gajendran and Harrison [88] found a positive relationship between telework and employee-supervisor relationships.
In their meta-analysis, the quality of the teleworker-supervisor relationship was shown to partially mediate the relationship between telework and both job satisfaction and turnover intentions. Thus, across studies, teleworking employees reported a beneficial impact of teleworking on relationships with their supervisors, and subsequently greater job satisfaction and lower turnover intentions. The importance of workplace relationships for teleworking employees was also highlighted in Golden's [37] study, wherein the quality of exchanges with one's manager, coworkers, and family mediated the curvilinear relationship between the extent of telework and job satisfaction. Specifically, job satisfaction increased when workers reported positive relationships with their managers and team members, before plateauing or slightly decreasing among those reporting the strongest relationships. For family relationships, a higher level of telework intensity was related to decreased work-family conflict, which in turn was associated with higher levels of job satisfaction, with a slight tapering when workers reported low levels of work-family conflict. These results are reflected in other studies in which high-quality superior-subordinate relationships are related to higher levels of job satisfaction, and are also moderated by the level of telework intensity [92]. While these results are largely in favor of increased telework intensity, limited face-to-face interactions and social isolation may contribute to the plateau in job satisfaction at higher intensities of teleworking [36]. --- Social Support In contrast to the findings that telework is related to more positive employee-supervisor relationship quality, Sardeshmukh et al. [11] found that the extent of telework was related to reductions in social support, and subsequently reduced worker engagement.
This relationship may be due, in part, to reduced media richness and increased physical distance between teleworking employees and coworkers. Media richness refers to how effectively a variety of ICT transmit social cues and mitigate uncertainty and equivocality between users [93]. ICT such as videoconferencing are higher in media richness than standard text communications, as there is at least some transmission of social cues [2]. Sardeshmukh et al. [11] suggest that reducing employee perceptions of isolation and loneliness through richer communication media may increase perceptions of social support and could mitigate the negative impact of high-intensity telework on work engagement. --- Social Isolation Building on results from their study in which organizational and telework support were associated with more positive employee well-being outcomes, Bentley et al. [67] furthered their investigation by evaluating the mediating role of social isolation between support and employee well-being. Social isolation was found to partially mediate the relationship between organizational support and both employee job satisfaction and psychological strain. Thus, when organizational support is insufficient, the negative influence of social isolation associated with the use of telework can increase psychological strain and reduce job satisfaction. Organizations should provide support by means of face-to-face interaction and ensure employees have access to sufficiently rich media for interacting with other employees in order to combat the effects of social isolation when teleworking [19]. --- Summary The role of job characteristics and the social context of work as mediators within the telework and worker health and well-being relationship is still a relatively young vein of research.
While we know that both autonomy and relationship quality play a supportive role in the relationship between telework and employee well-being, reductions in perceived social support may contribute to adverse outcomes. In addition, social isolation can undermine the positive effects the utilization of telework has for employee well-being. As research continues to unpack the job characteristics and social contexts which contribute to the telework and worker health and well-being relationship, it is likely that other job characteristics and social components play a mediating role in determining employee outcomes. --- Moderators --- Gender Previously, we described gender as an antecedent. Given that women are more likely to assume multiple roles when teleworking [19], gender is likely to act as both an antecedent and a moderator in the relationship between telework and worker health and well-being. For instance, during the COVID-19 pandemic, women spent more time teleworking overall, but also spent more of that time completing work in the presence of children and attending to housework than men [94]. Thus, as women participate in home-based telework, the beneficial impact of teleworking on work-life balance may be attenuated by way of reinforcing gender roles [95], with teleworking women reporting less work-life balance than non-teleworking employees [96]. Meanwhile, men are more likely to work within their roles independently and experience less stress and negative affect while teleworking [19,69,78,83]. For instance, teleworking mothers during the beginning of the COVID-19 pandemic reported higher rates of anxiety, depression, and loneliness than teleworking fathers, who actually experienced reduced anxiety when teleworking from home [83]. Men have also shown higher ratings of mental and physical health while teleworking in general, although these differences are not statistically significant when compared with traditional office workers [19].
--- Extent of Telework Extant literature shows that the extent of telework also acts as a moderator for a number of occupational health outcomes. For instance, the number of hours telecommuted moderates the relationship between total working hours and work strain measured through role overload [80]. Similar conclusions were found in Gajendran and Harrison's meta-analysis [88], wherein telework intensity beneficially moderated the relationship between telework and role stress: higher intensity teleworkers experienced reduced role stress. In addition, high intensity teleworkers also experienced reductions in work-family conflict. A number of other studies [36,92] discuss the moderating effect of the extent of telework on various work-related outcomes. For a more thorough review of these articles, we refer readers to Allen et al. [2]. 8.3. Job Characteristics 8.3.1. Autonomy Autonomy also serves as a moderator within the telework and worker health and well-being relationship. As previously discussed, prior research reports a curvilinear link between the extent of telework and job satisfaction [36,37,92]. This work has also established job discretion, the amount of autonomy an employee has in how they perform their job, as a moderator of the relationship between the extent of telework and job satisfaction [36]. Individuals who have higher levels of job discretion tend to have higher levels of job satisfaction across the extent-of-telework spectrum. Additionally, job autonomy moderates the relationship between telework and work-family conflict, such that individuals with higher job autonomy experience less work-family conflict in general [97]. Interestingly, higher levels of job autonomy do not lead to a faster decrease in work-family conflict per additional hour of telework each week. In fact, individuals with lower autonomy see a more distinct decline in work-family conflict in relation to the extent of telework.
Golden et al. suggest that this differential decline in work-family conflict (WFC) is potentially due to individuals with lower job autonomy taking advantage of the time saved by extensive telecommuting in order to reduce WFC. Finally, autonomy plays a role in determining stress outcomes for teleworking employees. Perry et al. [43] showed that when individuals have high levels of emotional stability and high levels of autonomy, they experience less strain overall, regardless of the extent of telework. Conversely, when individuals are low in emotional stability and high in autonomy, they are predisposed to higher levels of work stress, leading to strain, which is likely to increase as they telework more hours. --- Flexibility Much of what we know regarding the positive benefits of flexible work arrangements for employee health and well-being is drawn from the general workplace flexibility literature [71,73]. Although autonomy seems to play a strong role in determining teleworker health outcomes, less is known about flexibility components specific to telework. What we do know is that transitioning to telework has been reported as leading to greater perceived flexibility for employees [98]. In their qualitative, quasi-experimental study with employees from IBM, Hill et al. found that the transition from office work to telework increased employees' perceived flexibility, which in turn benefited their personal/nonwork lives. Golden and Veiga [36] included a measure of flexibility, work-schedule latitude, in their investigation of the moderating effects of work characteristics on the curvilinear relationship between the extent of telecommuting and job satisfaction. Although no significant moderating effect was found in their analyses, the authors stipulated that because their sample comprised salaried professionals, flexibility in scheduling work tasks may be a common aspect of professional work and thus not easily reflected in measures of job satisfaction.
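The curvilinear, moderated relationships discussed above are commonly estimated with a polynomial regression containing a squared telework term and a telework-by-autonomy interaction. The following is an illustrative sketch only, not a reanalysis of any study cited here; the variable names, simulated coefficients, and data are all hypothetical:

```python
# Hypothetical illustration of a curvilinear (inverted-U) moderated regression,
# of the kind used to model telework extent and job satisfaction.
# All data and coefficients are simulated, not drawn from the reviewed studies.
import numpy as np

rng = np.random.default_rng(0)
n = 500
hours = rng.uniform(0, 40, n)        # weekly telework hours (simulated)
autonomy = rng.uniform(1, 5, n)      # self-reported job autonomy (simulated)

# Simulated satisfaction: inverted U in telework hours, plus an autonomy effect.
satisfaction = (0.20 * hours - 0.004 * hours**2
                + 0.50 * autonomy + rng.normal(0, 0.5, n))

# Design matrix: intercept, hours, hours^2, autonomy, hours x autonomy.
X = np.column_stack([np.ones(n), hours, hours**2, autonomy, hours * autonomy])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
b0, b_h, b_h2, b_a, b_ha = beta

# A negative hours^2 coefficient implies satisfaction plateaus, then declines.
print(f"hours^2 coefficient: {b_h2:.4f}")

# Peak of the curve at a given autonomy level a: h* = -(b_h + b_ha*a) / (2*b_h2)
peak = -(b_h + b_ha * 3.0) / (2 * b_h2)
print(f"estimated satisfaction peak at autonomy=3: {peak:.1f} telework hours")
```

The sign of the squared term captures the plateau reported by Golden and Veiga [36], and the interaction term is how a moderator such as autonomy shifts where that plateau occurs.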
--- Task Characteristics Task characteristics, specifically task interdependence, play a role in moderating the relationship between the extent of telework and job satisfaction [36]. Task interdependence refers to the extent to which an individual is relied upon, or relies on others, to complete their job tasks [99]. When teleworking, individuals whose work is highly interdependent may experience more frustration due to the continuous back-and-forth communication with other members of their organization needed to complete their own work tasks [100]. Golden and Veiga [36] reported that individuals with lower levels of task interdependence typically have higher levels of job satisfaction. This relationship follows the typical curvilinear trend found between the extent of telework and job satisfaction. Specifically, individuals with higher task interdependence showed a slower increase in job satisfaction, and the difference in job satisfaction between those with low versus high task interdependence was more pronounced at higher extents of telework. Task interdependence might also influence levels of exhaustion among teleworkers. During the COVID-19 pandemic, many workers experienced daily task setbacks. Chong et al. [101] demonstrate how day-to-day, work-related setbacks specific to the pandemic are associated with higher levels of employee exhaustion at the end of the workday, and these associations are further exacerbated when employees engage in highly interdependent work. However, when employees reporting high levels of exhaustion at the end of their workday also reported receiving organizational telework task support, they did not report significant levels of withdrawal from work on the following workday. Conversely, when employees reported low levels of organizational telework task support, there was a significant, adverse association between end-of-day exhaustion and withdrawal from work on the following workday.
To date, these are the only empirical investigations we have found that examine the impact of task characteristics on the link between telework and worker health, and future research would benefit from additional studies evaluating telework, task characteristics, and worker health and well-being. For instance, future research might utilize O*Net, an occupational information database, to evaluate the common work activities performed by workers within their samples and the subsequent effects of those activities. --- Voluntariness Both formal and informal telework arrangements might benefit workers when they have a choice about whether or not they have opportunities to telework. For instance, voluntary telework, versus mandated or involuntary telework, is associated with higher levels of job satisfaction and lower levels of turnover intentions and perceived stress [102]. Furthermore, voluntary telework supports employee perceptions of autonomy by allowing them to control their desired degree of integration and segmentation between work and family and nonwork domains [103]. This is consistent with past research that underscores the importance of giving workers autonomy or control over their jobs [8]. --- Boundary Preferences Boundary preferences might also play a moderating role in the relationship between telework and worker health and well-being. Workers' preferences for separating work and non-work experiences or combining work and non-work roles can shape their experiences and outcomes associated with telework. For example, Derks et al. [104] investigated boundary management preferences as a moderator in relation to work-related smartphone use, work-family conflict, and family role performance. They found no association for segmenters, but integrators experienced less work-family conflict and better family role performance. --- Summary Job characteristics, and especially autonomy, are important for fostering positive worker health and well-being. 
Job autonomy is related to higher job satisfaction, less work/family conflict, and reduced worker stress. There is also some evidence for a beneficial impact of perceived flexibility, although the current state of evidence is equivocal. Task characteristics, such as task interdependence, also play a role in shaping employee outcomes when utilizing the option to telework, although further investigation is needed to holistically identify characteristics beyond task interdependence that contribute to worker health and well-being. --- Discussion Overall, what we know about the relationship between telework and worker health and well-being is variable and seemingly dependent on a variety of job characteristics and contextual and technological factors. Understanding the influence of job demands and resources is integral to understanding the relationship between the utilization of telework and employee health and well-being. However, the extant literature has also demonstrated the importance of designing sub-systems that complement one another in order to obtain successful telework outcomes. When organizational, person, and technological subsystems are designed thoughtfully and intentionally, we may expect not only better productivity, but also better worker health and well-being. Within our review, we identified a number of antecedents, outcomes, mediators, and moderators at the organizational, job, work/family/life, and individual level that explain the relationship between telework and worker health and well-being. Although some individual characteristics such as gender and personality help to predict teleworker health outcomes, both job characteristics and organizational support and practices also play strong roles in predicting teleworker well-being. The extent of telework is also a primary factor related to worker outcomes, with employees teleworking approximately 40% of their working hours experiencing the most favorable outcomes. 
Furthermore, job characteristics such as autonomy serve as important mediators and moderators within the telework and worker health relationship. Jobs characterized by more autonomy and control are associated with better worker outcomes, and these effects also hold when workers are able to choose whether to engage in telework. We also discuss how the social context wherein telework is performed helps to further define the telework and worker health and well-being relationship. Workers' relationships with supervisors, coworkers, and family members, as well as feelings of social isolation, can either benefit or detract from their health and well-being when teleworking. With regard to the physical health and psychological outcomes of teleworking, much of the literature is equivocal. Though limited, the available research suggests that teleworking leads to positive health outcomes such as lower blood pressure and decreased health risks in some samples. However, working longer or nonstandard hours due to the increased control and flexibility that telework provides may undermine these outcomes by elevating stress levels. In addition, exposure to extended computer usage and poorly designed workstations can lead to musculoskeletal and pain symptoms in teleworking employees. Preliminary evidence provides support for the utilization of telework in increasing positive employee health behaviors such as physical activity, sleep, and nutritional choices. However, our understanding of the mental health outcomes related to telework is less clear. 
One challenge associated with trying to understand outcomes related to telework during the COVID-19 pandemic is that it is difficult to disentangle outcomes associated with telework from other factors that have co-occurred during this time. That said, employees teleworking eight hours or less may be at a decreased risk of experiencing depression, while those working extended telework hours may experience depressive symptoms as a result of social isolation and reduced social support. Similar findings have been reported in relation to psychological well-being. In some cases, the reductions in social support associated with telework usage lead to lower levels of job engagement. On the other hand, those utilizing telework to reduce job demands see positive effects for employee well-being. Finally, we identified a number of social and work-related outcomes associated with engaging in telework. The flexibility and control associated with telework may help bolster work-life balance and reduce work-family conflict. However, reductions in spatial and temporal boundaries between work and home may increase the likelihood of family-to-work conflict and increase stressful experiences for workers inside the home. These outcomes seem to largely relate to one's boundary management style. Considering work-related outcomes, teleworking employees often report increased job satisfaction, although these reports slightly taper or plateau when employees begin to work extensive hours via telework. In addition, telework is associated with reduced absenteeism, but this relationship may be due to a health trade-off in which employees away from the office more frequently continue working while sick out of concern for losing their ability to telework in the future. 
Nonetheless, the bulk of these findings indicates the importance of building a multi-disciplinary understanding of the relationship between telework and worker health and has important implications for the design of teleworking arrangements now and in the future. --- Recommendations for Policy and Organizational Practice One of the challenges with telework is that there are no federal regulations about the use or implementation of telework. Instead, work arrangements are often left up to an individual employer, or in some cases, one's supervisor [105]. First, we acknowledge and understand that not all jobs are conducive to telework. However, experiences during the COVID-19 pandemic have demonstrated that telework may be more feasible for some jobs than previously thought. Our review highlights the importance of having a clear, well-communicated and inclusive telework policy. --- Telework Policy When considering telework policy, organizations are ultimately tasked with balancing flexibility and formality. Overall, informal, or as-needed, telework arrangements are more likely to increase worker perceptions of autonomy and flexibility; however, formal arrangements can benefit employee performance as well as perceptions of job control when presented as a flexible support mechanism [47]. Regardless of the formality of an organization's telework policy, supervisors and human resource personnel should ensure clear and consistent criteria for establishing who is eligible to telework, the location of telework, as well as when and how often an employee may telework. Furthermore, practitioners should be careful not to provide telework as a replacement for formal family-supportive or other support policies such as paid sick time, as employees may be more likely to experience work-family conflict and presenteeism, subsequently impacting employee health, well-being, and performance. 
--- How Often, When, and Where to Telework Research to date indicates that the optimal time spent teleworking is approximately 40% of one's overall working hours, equating to two 8-h workdays under the conventional 40-h work week. However, organizational leaders and managers should recognize that teleworking beyond this amount does not appear to harm worker health or performance; rather, other factors likely prevent further gains in worker satisfaction. Furthermore, leaders should consider when an employee chooses or is scheduled to telework. Workers have been shown to experience adverse health effects when telework is used as a mechanism to catch up on work after hours or over the weekend. Telework is most beneficial for employee health and well-being when provided as a flexible support and not in lieu of formal support such as paid time-off. Supervisors should remain aware of their employees' workload, work hours, and teleworking behaviors in order to mitigate instances of employees teleworking after hours to "make up" for missed work due to scheduled time off, such as vacation, paid time off, or sick time. Workers are most likely to experience beneficial outcomes for health and well-being when teleworking from a location where they have the greatest level of control over the work environment. For instance, workers may be able to better control the levels of lighting, noise, and temperature within their homes than in a library or co-working space. However, certain aspects of working from home, such as separating family and nonwork responsibilities or interruptions from one's work domain, may be more difficult. Therefore, workers will likely benefit from choosing a consistent space within their homes to perform their work where non-work activities do not take place and they can best separate their work and home domains. 
For instance, when possible, workers may choose to work from a separate room where they may close the door at the end of their workday, or work from a table in a shared room specifically designated for work tasks. --- Managing Boundaries When Teleworking Telework reduces the likelihood of work interfering with employees' nonwork domains [46], but also provides a greater opportunity for nonwork interference during one's workday. Along with ensuring a separate physical space within one's home where work is performed, workers might also set clear expectations for both work and nonwork communications when teleworking. For instance, to prevent employees from working after hours when teleworking, both supervisors and employees should discuss the expectations for responding to work-related communication. Additionally, workers should communicate their expectations to family and friends when teleworking in order to mitigate the likelihood of non-work interruptions. --- Training, Technology, and Ergonomic Support As organizations continue to implement telework policies within their workplaces, special attention should be given to the educational, technological, and ergonomic resources provided to employees. Practitioners should strive to implement telework training and provide sufficient ICT to the greatest extent possible, so that employees feel that they have adequate resources to meet the demands they experience when teleworking and to reduce the stress invoked by insufficient technology. One consideration is the quality of employees' home wireless internet connections and the extent to which the organization can help ensure that those connections are adequate for the work employees are expected to perform. 
Organizations may also consider providing workers with a practical stipend to purchase, or otherwise provide, ergonomic furnishings for their in-home workspaces, such as an ergonomic work chair like the one they may be accustomed to when working from their organizational setting. Another recommendation is to provide professional ergonomic assessments for teleworkers. Ensuring employees have a well-designed workspace in their homes will likely reduce the development of musculoskeletal and pain symptoms associated with poorly designed workstations. --- Retaining Autonomy The onset of the COVID-19 pandemic has led to extreme shifts in the way that work is organized and performed. For instance, in many cases, employees no longer received the choice to telework and were mandated to work from home by either state or organizational guidelines. The removal of the choice of where work is performed may alter what we already know about the importance of autonomy for occupational health. By removing this choice, such that many workers are involved in mandatory telework, there is the potential for increased stress and adverse effects on employee health and well-being [88]. Under these circumstances, practitioners and leaders will need to identify additional job resources, such as social support or flexibility in workers' schedules, to alleviate the potential for reduced perceptions of autonomy as well as the unique job demands many experience when teleworking, such as working in the presence of family or partners. --- Maintaining Social Connections It is likely that many employees are now teleworking beyond the optimal extent of telework, and although research has largely unearthed the "telework paradox", some employees may experience loneliness when teleworking at high intensities. 
Thus, management and supervisors should aim to provide as many face-to-face interactions with teleworking employees as possible, especially with new remote employees, for whom face-to-face interactions are crucial for healthy socialization during their first 90 days [106]. Online web conferencing platforms may help supervisors meet these needs, and informal channels such as Slack may benefit employees who value informal and unscheduled interactions with colleagues. --- Future Research --- New Normal of Teleworking In the coming years, it is likely we will see an increase in the number of full-time regular teleworkers [4]. Occupations that have traditionally been confined to the office due to organizational norms are now being practiced from home via computer technologies, largely as a result of the COVID-19 pandemic. As many employees may now be working more hours by telework, now is the time for researchers to expand what we know about the extent of telework, as previous studies have rarely investigated full-time or almost full-time teleworking employees. Furthermore, working adults with children are now more likely to be attending to childcare responsibilities, as many K-12 schools closed or moved to an online format during the COVID-19 pandemic and some schools have retained these practices within high-risk populations. Thus, our current understanding of the impact of telework on work-life and work-family outcomes may change as a result of the pandemic and new teleworking norms. In a similar vein, more married and cohabiting partners may both be working from home during and after the pandemic, and studies prior to the COVID-19 pandemic have yet to unpack the intricacies of co-working partners. Future research will need to consider how the changing organization of work and family roles while teleworking impacts employee health and well-being, particularly over the long term. 
Future research should also aim to investigate ways in which workers' socioeconomic status relates to their teleworking experiences and outcomes. For instance, during the COVID-19 pandemic, many workers transitioned to remote work without necessary or familiar ergonomic and technological resources. Meanwhile, some workers may not have had adequate financial resources for purchasing their own ergonomic workstations or updating their in-home technology. For example, less than half of teleworkers responding to a 2020 global work-from-home survey reported having ergonomic supports such as a sit-stand desk, dual or wide-screen monitors, or an ergonomic chair, despite over half of respondents indicating having these supports when working from their physical organization [107]. Additionally, some workers may live in environments that have excessive noise contributing to frequent disruptions when teleworking, or where they are unable to control the micro-climate of their physical location. Thus, future research might also consider socioeconomic status as a moderator within the relationship between telework and worker health and well-being. Although we did not include socioeconomic status as a moderator in our conceptual model, the moderating role of socioeconomic status seems plausible. Similarly, future research might consider the potential moderating role of other variables presented in our conceptual model of the relationship between telework and worker health and well-being. While we did not identify empirical articles discussing a moderating role of physical activity or sleep within the teleworking context, it is possible that these factors may alter outcomes when considering teleworker health and well-being, and future research should investigate these issues. 
--- Underrepresented Groups People with disabilities and chronic health conditions (CHCs) experience a disproportionate burden of unemployment, and the COVID-19 pandemic has begun to exacerbate ability-based differences in employment, with the employment rates of individuals with disability decreasing at a greater rate than those without disability at the beginning of the pandemic. The Americans with Disabilities Act of 1990 promoted the use of telework when considering the hiring and retention of individuals with disability and CHCs. However, organizations are not required to provide telework as an accommodation or to create a more inclusive work environment unless the nature of the work is deemed acceptable for telework and allows for workers to meet the essential functions of their job [26]. Many courts have ruled against telework as a reasonable accommodation, as teleworking requires an employee to be absent from their central workplace and attendance has often been considered a necessary component of one's job [108]. However, as the prevalence of teleworking employees in general continues to rise as a result of the COVID-19 pandemic [4], telework accommodations for workers with disability may become more likely. For instance, in 2020, the World Health Organization endorsed telework for workers with disability during the pandemic in order to reduce concerns of exposure to COVID-19 [109]. However, there is a dearth of literature investigating the utility of telework as an accommodation practice for these workers. In general, employees recognize telework as a means to alleviate work interference with family and also to manage pain or fatigue not associated with disability [110]. Meanwhile, some workers are utilizing telework as an accommodation practice through their employer [111]. However, only half of employees report satisfaction with their telework accommodation, despite a majority reporting that teleworking was beneficial in completing their work tasks. 
These findings suggest that there may be effects related to health and well-being influencing the teleworking experience of workers managing disability or chronic conditions. A recent study provides preliminary insight into the beneficial mechanisms of telework for supporting workers with disability or chronic health conditions [112]. In this study, participants with disability and CHCs who completed daily measures of job control, flexibility, work ability, and well-being indicated significantly higher levels of job control when teleworking. Increased reports of job control among participants, in turn, were associated with higher levels of perceived work ability and well-being. However, this study was conducted during the height of the COVID-19 pandemic, when most participants were teleworking at very high intensities. Future work is needed to determine how telework acts as an accommodation practice across various intensities of telework. Finally, future work ought to extend research to include pregnant women and workers responsible for eldercare. In Australia, the Fair Work Act extends the right to request a flexible work arrangement to employees with caring responsibilities or a disability, or those caring for a family or household member experiencing violence, in order to effectively navigate work and personal needs [26]. In the United States, the courts may impose and direct an organization to provide telework as an accommodation for pregnant workers [55]. Empirical studies evaluating the utility of telework as a flexible work arrangement under these conditions are needed to ensure that researchers, organizational leaders, managers, and policy makers understand the components of telework which best meet the needs of these special groups. 
--- Methodology Although interest in designing and implementing telework studies has surged in recent years [2], and even more recently due to COVID-19, much of our current understanding of telework outcomes is constrained by limited methodology. For instance, the majority of the studies included in this review utilized a cross-sectional survey. Although there are ways to optimize the utilization of cross-sectional methods [113], cross-sectional designs are not conducive to investigating change over time. Only a handful of studies included within the current review used longitudinal data. In one example, Vega et al. [39] utilized a longitudinal design in which participants completed daily surveys for five consecutive workdays in order to evaluate changes in job satisfaction, creativity, and performance. Longitudinal designs of this nature allow us to understand the effects of telework from a dynamic and within-person perspective. For instance, future work may look to further evaluate changes in employee health and well-being as a result of working standard versus non-standard working hours. Another methodological concern constraining the field's ability to generalize results across studies is the duration of time an organization has implemented teleworking programs. For instance, it is likely that workers within an organization with a well-established telework program will have different teleworking experiences than those within an organization newly implementing such programs. Given the number of organizations with newly developed teleworking policies in light of the COVID-19 pandemic, future research might draw upon or develop new theories to understand why differences arise as a result of how long an organization has offered telework. The types of data and measurement tools used within telework and health studies are also limited. 
Ambulatory methods that mitigate disruption and collect objective health data [114], such as wearable blood pressure and heart rate monitors, may be useful in helping researchers understand stress and other physiological functions as a result of extended telework usage. For example, there is an abundance of non-invasive health tools for measuring sleep, endocrine, and cardiovascular activity. Future researchers might consider the usefulness and benefits of these measurement tools for understanding teleworker health and well-being. Furthermore, when surveys are used to investigate relationships between telework and worker health and well-being, investigators ought to consider the extent to which measures developed for application within the office or traditional work environment also apply across various telework settings. For instance, researchers have begun to consider the applicability of work-family measures which conceptualize work and home as distinct geographic locations [115]. Additionally, when considering employee organizational citizenship behaviors (OCBs), prior research reports an equivocal relationship between telework and OCBs, potentially due to measurement issues [116]. Since OCBs are typically bound to one's physical work environment, investigators will have to determine if, and how, these behaviors change within a telework context, and subsequently develop and validate appropriate measures. Thus, researchers should consider how their chosen measures apply to the virtual work environment and ensure appropriateness using statistical analyses such as confirmatory factor analyses and testing measurement equivalence between teleworking and non-teleworking groups. Finally, as the ways and frequencies by which employees continue to telework increase, researchers must also consider and come to a consensus regarding how to measure telework itself. 
In a recent meta-analysis [46], the authors gathered 22 empirical articles investigating the relationship between telework and bi-directional measures of work-family conflict. Among these studies, approximately half measured telework dichotomously, while the remaining studies measured telework continuously. Beckel et al. found that measurement differences moderated the relationship between telework and work-family conflict, such that studies including a dichotomous measure of telework exhibited a stronger, negative relationship between telework and work-family conflict. However, dichotomizing variables that can be measured on a continuum may result in a loss of information and can introduce error [117]. Thus, future research should better utilize continuous measures of telework, such as the extent of telework, especially as workers utilize their options to telework differently, ranging from an as-needed basis to full-time teleworking. Researchers might also consider the inclusion of employer-provided health risk assessment and health care utilization data. For instance, health care utilization data might help direct researchers in understanding the assumed health trade-off discussed by Mann and Holdsworth [19] wherein employees under-utilize their access to health care and in turn continue teleworking when sick to maintain their teleworking privilege. These data might also be helpful in determining whether telework is a beneficial accommodation practice for those managing chronic illness or disability when compared with other forms of cross-sectional data. For instance, do workers with a disability or CHCs use their telework accommodations in order to manage higher rates of healthcare appointments when compared to workers without these conditions? 
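The information loss from dichotomizing a continuous measure can be illustrated with a brief simulation (a hypothetical sketch using simulated data, not results from the reviewed studies; the variable names, effect size, and noise level are assumptions):

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

random.seed(42)
n = 10_000
# Hypothetical "extent of telework" as a share of weekly hours (0..1).
extent = [random.random() for _ in range(n)]
# Hypothetical work-family conflict score: declines with extent, plus noise.
wfc = [2.0 - 1.0 * e + random.gauss(0, 0.5) for e in extent]

# Correlation using the continuous extent-of-telework measure.
r_continuous = pearson(extent, wfc)

# Dichotomize at the median into a yes/no "teleworker" indicator.
cut = statistics.median(extent)
binary = [1 if e > cut else 0 for e in extent]
r_dichotomous = pearson(binary, wfc)

print(f"continuous:  r = {r_continuous:.2f}")
print(f"dichotomous: r = {r_dichotomous:.2f}")
```

Under these assumptions, the dichotomized yes/no measure recovers a noticeably weaker (attenuated) correlation with work-family conflict than the continuous extent-of-telework measure, illustrating the information loss described in [117].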
Nonetheless, as there is much further to go in investigating the relationship between telework and worker health, broadening both our methods and measurement tools will be an integral component of advancing our understanding of the outcomes associated with telework. --- Limitations The primary limitation of this article is that we performed a narrative review of the literature rather than an empirical meta-analysis. However, our review summarizes a broad array of factors related to antecedents and outcomes of telework that would be challenging to incorporate in a meta-analysis, especially considering the various ways in which telework has been measured across studies. Furthermore, because much of the telework literature generated before the year 2000 is included in other reviews [1,2,5], we only included earlier articles when they were most relevant. We also chose to omit discussion relating to telework and government policy, regulations, worker's compensation, case law, and organizational policy. Prior reviews and articles, such as Blount [26], Montreuil and Lippel [55], and Allen et al. [2], provide preliminary discussions of these topics. Given the rapid increase in telework prevalence since the onset of the COVID-19 pandemic, it is likely that in the years to follow there will be an increase in empirical studies investigating the impact of telework on employee and organizational outcomes. As we learn more about the effects of telework on worker health and well-being, we encourage researchers to further expand upon the conceptual framework presented in this review. --- Conclusions This article advances the occupational health and public health literature by reviewing empirical studies to explain the relationship between telework and worker health and well-being. There are a variety of components which contribute to our understanding of the benefits and consequences of telework for worker health and well-being. 
Individual worker and job characteristics, the social context of work, and the organization of personnel and technological systems help us to understand the variety of health outcomes presented within our review. We provided a conceptual model using both the job demands-resources and macroergonomic systems approaches to illustrate the multidisciplinary components which contribute to the telework and occupational health relationship. Thus, as working dynamics change throughout the progression, and hopefully cessation, of the COVID-19 pandemic, both researchers and practitioners will need to prepare for actions to meet the changing needs of employees. We hope this review will help guide the development and implementation of federal regulations, organizational policies, and procedures for telework practices that support worker health and well-being. We also hope this article provides a foundation and organizing framework to guide future research related to telework and worker health and well-being. ---
Telework (also referred to as telecommuting or remote work) is defined as working outside of the conventional office setting, such as within one's home or in a remote office location, often using a form of information communication technology to communicate with others (supervisors, coworkers, subordinates, customers, etc.) and to perform work tasks. Remote work increased over the last decade and tremendously in response to the COVID-19 pandemic. The purpose of this article is to review and critically evaluate the existing research about telework and worker health and well-being, and to examine how engaging in this flexible form of work impacts worker health and well-being. Specifically, we performed a literature search on the empirical literature related to teleworking and worker health and well-being, reviewing articles published after the year 2000 based on the extent to which they had been discussed in prior reviews. Next, we developed a conceptual framework based on our review of the empirical literature. Our model explains the process by which telework may affect worker health and well-being in reference to individual, work/life/family, organizational, and macro level factors. These components are explained in depth, followed by methodological and practical recommendations intended to guide future research, policies, and practices to maximize the benefits and minimize the harms associated with telework.
--- INTRODUCTION
Public health researchers and practitioners have historically prevented many deaths and illnesses by applying public health's fundamental problem-solving capacity to develop actions such as water quality control, immunization programs, and food inspection regimes. These successes exemplify the possibilities of addressing very serious problems through an organized effort rooted in scientific knowledge. Public health research and practice do not separate scientific discussions on the nature of problems from discussions of solutions to those problems. As described by Mercy & Hammond, a public health approach to violence prevention is action oriented, and its main goal is to analyze scientific evidence to improve injury prevention and violence reduction. The public health approach starts by defining the problem and progresses toward identifying risk factors and causes, developing and implementing interventions, and measuring the effectiveness of these interventions. Public health researchers are careful to note that these steps sometimes do not follow this linear progression; instead, some steps may occur simultaneously or problems may need to be reanalyzed and ineffective interventions readjusted. They also note that information systems used to define and analyze youth violence problems can be useful in evaluating the impacts of prevention programs. Many criminologists recognize this public health model as a specific application of the basic action research model that has grounded applied social science inquiries for many decades. The fields of criminology and public health now often overlap and intersect in their examination of the nature of serious youth violence and the development of prevention responses to address it. Tertiary prevention involves attempts to minimize the course of a problem once it is already clearly evident and causing harm.
In public health terms, tertiary prevention efforts intervene after an illness has been contracted or an injury inflicted, and they seek to minimize the long-term consequences of the disease or injury. Criminologists and public health researchers have both contributed to a growing body of evaluation evidence that shows a wide range of effective tertiary treatments. This development has been important and has helped to strengthen the movement toward evidence-based violence prevention programs. Alongside it have developed some strategic innovations launched by criminal justice agencies, which have further established the emerging links between public health and criminology in youth violence prevention. Focused deterrence strategies are a recent addition to an emerging collection of evidence-based violent gun injury prevention practices available to policy makers and practitioners. Briefly, focused deterrence strategies seek to change offender behavior by understanding underlying violence-producing dynamics and conditions that sustain recurring violent gun injury problems and by implementing a blended strategy of law enforcement, community mobilization, and social service actions. In this article, we review the practice, theoretical principles, and available scientific evaluation evidence on these promising gun violence reduction strategies.
--- PRACTICE
Focused deterrence strategies attempt to influence the criminal behavior of individuals through the strategic application of enforcement and social service resources to facilitate desirable behaviors. These strategies are often framed as problem-oriented exercises with which specific recurring crime problems are analyzed and responses are highly customized to local conditions and operational capacities. As described by Kennedy, focused deterrence operations have tended to follow this basic framework:
1. Select a particular crime problem, such as gun violence.
2. Form an interagency enforcement group, typically including police, probation and parole agencies, state and federal prosecutors, and sometimes federal enforcement agencies.
3. Conduct research, usually relying heavily on the field experience of frontline police officers, to identify key offenders (and frequently groups of offenders, such as street gangs and drug crews) and the contexts of their behavior.
4. Frame a special enforcement operation that is directed at these offenders and groups of offenders and is designed to substantially influence that context, for example by using any and all legal tools to sanction groups, such as crack crews, whose members commit serious violence.
5. Match these enforcement operations with parallel efforts to direct services and the moral voices of affected communities to the same offenders and groups.
6. Communicate directly and repeatedly with offenders and groups to let them know that they are under particular scrutiny, which acts will receive special attention, when such attention has, in fact, been given to particular offenders and groups, and what they can do to avoid enforcement action. One form of this communication is the "forum," "notification," or "call-in," by which offenders are invited or directed to attend face-to-face meetings with law enforcement officials, service providers, and community figures.
The Operation Ceasefire strategy, implemented by the Boston Police Department during the mid-1990s as a problem-oriented policing project, was the seminal focused deterrence intervention. Like many large cities in the United States, Boston experienced a large, sudden increase in youth gun violence between the late 1980s and early 1990s.
In partnership with Harvard University researchers, the Ceasefire working group of criminal justice, social service, and community-based agencies diagnosed the youth gun violence problem in Boston as one of patterned, largely vendetta-like conflicts among a small population of chronic offenders and particularly among those involved in some 61 loose, informal, mostly neighborhood-based groups. These 61 gangs consisted of between 1,100 and 1,300 members, representing less than 1% of the city's youth between the ages of 14 and 24. Although small in number, these gangs were responsible for more than 60% of youth homicide in Boston. The Operation Ceasefire focused deterrence strategy was designed to prevent gun violence by reaching out directly to gangs, saying explicitly that violence would no longer be tolerated, and backing up that message by pulling every lever legally available when violence occurred. The chronic involvement of gang members in a wide variety of offenses made them, and the gangs they formed, vulnerable to a coordinated and comprehensive criminal justice response. Law enforcement agencies could disrupt street drug activity, focus police attention on low-level street crimes such as trespassing and public drinking, serve outstanding warrants, cultivate confidential informants for medium- and long-term investigations of gang activities, deliver strict probation and parole enforcement, seize drug proceeds and other assets, ensure stiffer plea bargains and sterner prosecutorial attention, request stronger bail terms, and bring potentially severe federal investigative and prosecutorial attention to gang-related drug and gun activity. Simultaneously, gang outreach workers, probation and parole officers, and later, churches and other community groups offered gang members services and other kinds of help.
These partners also delivered an explicit message that violence was unacceptable to the community and that "street" justifications for violence were mistaken. The Ceasefire working group delivered this message in formal meetings with gang members, through individual police and probation contacts with gang members, through meetings with inmates at secure juvenile facilities in the city, and through gang outreach workers. The deterrence message was not a deal with gang members to stop violence. Rather, it was a promise to gang members that violent behavior would evoke an immediate and intense response. If gangs committed other crimes but refrained from violence, the normal workings of police, prosecutors, and the rest of the criminal justice system dealt with these matters. But if gang members hurt people, the working group concentrated its enforcement actions on their gangs. The Ceasefire working group recognized that, in order for the strategy to be successful, it was crucial to deliver a credible deterrence message to Boston gangs. Therefore, the Ceasefire law enforcement intervention directly targeted those gangs that were engaged in violent behavior rather than expending resources on those who were not. A key element of the strategy, however, was the delivery of a direct and explicit "retail deterrence" message to a relatively small target audience communicating which kind of behavior would provoke a special response and what that response would be. Beyond the particular gangs subjected to the intervention, the deterrence message was applied to a relatively small audience rather than to a general audience, and it operated by making explicit cause-and-effect connections between the behavior of the target population and the behavior of the authorities. Knowledge of what happened to others in the target population was intended to prevent further acts of violence by gangs in Boston.
There have been subsequent replications of the Boston "pulling levers" focused deterrence strategy, such as US Department of Justice-sponsored research and development exercises in Los Angeles, California, and Indianapolis, Indiana, which centered on preventing serious violence by gangs and criminally active groups. Consistent with the problem-oriented policing approach, the approaches taken by agencies in Los Angeles and Indianapolis were tailored to fit their cities' violence problems and operating environments. Operation Ceasefire in the Hollenbeck area of Los Angeles was framed to "increase the cost of violent behavior to gang members while increasing the benefits of nonviolent behavior." In the wake of the federal prosecution of a very violent street gang, the Indianapolis Violence Reduction Partnership used face-to-face "lever-pulling" meetings with groups of high-risk probationers and parolees to communicate a deterrence message that gun violence would provoke an immediate and intense law enforcement response. At the meetings, targeted groups of probationers and parolees were also urged to take advantage of a range of social services and opportunities, including employment, mentoring, housing, substance abuse treatment, and vocational training. A variation of the Boston model was applied in Chicago, Illinois, as part of the US Department of Justice-sponsored Project Safe Neighborhoods initiative. Gun- and gang-involved parolees returning to selected highly dangerous Chicago neighborhoods went through "offender notification forums," where they were informed of their vulnerability as felons to federal firearms laws with stiff mandatory minimum sentences, were offered social services, and were addressed by community members and ex-offenders. The forums were designed "to stress to offenders the consequences should they choose to pick up a gun and the choices they have to make to ensure that they do not reoffend."
In addition to encouraging individual deterrence, the Chicago forums were designed explicitly to promote positive normative changes in offender behavior through an engaging communications process that offenders would be likely to perceive as procedurally just rather than simply threatening.
--- Links to Public Health Perspectives on Violence Prevention
Focused deterrence strategies fit well with public health perspectives on violence prevention. In general, a public health approach involves three elements: a focus on prevention, a focus on scientific methodology to identify risks and patterns, and multidisciplinary collaboration to address the issue. Ecological frameworks to analyze gun violence problems, a tool used in both criminology and public health, guide the analysis of potential interventions to design an appropriate strategy to prevent or reduce firearm violence. Multidisciplinary collaborations are necessary to address complex individual, situational, and neighborhood risk factors that lead to persistent urban gun violence problems. Complementary to these public health perspectives, academic researcher-practitioner research partnerships and interagency working groups are two core components of focused deterrence strategies that deserve further consideration here.
Academic researcher-practitioner research partnerships. The activities of the research partners in focused deterrence initiatives depart from traditional research and evaluation roles usually played by academics. The integrated researcher/practitioner partnerships in the working group setting more closely resembled policy analysis exercises that blend research, policy design, action, and evaluation.
Researchers have been important assets in all the projects described above, providing what is essentially real-time social science aimed at refining the working group's understanding of the problem, creating information products for both strategic and tactical use, testing (often in a very elementary but important fashion) candidate intervention ideas, and maintaining a focus on clear outcomes and performance evaluation. In addition, researchers played important roles in organizing the projects. Academic research partners in focused deterrence strategies conduct epidemiological inquiries into the nature of local gun violence problems so interventions can be appropriately customized to the underlying conditions and situations that cause violent gun injuries to recur. Indeed, public health perspectives point to the importance of identifying and understanding problems as they aggregate across individuals or groups. Doing so frequently involves analyses of the geographic and network-based concentration of gun violence that are not unlike many public health analyses of the localized transmission of disease. For example, Braga and colleagues' analysis of the persistent geographic concentration of gun violence in small "hot spot" areas has some parallels with Kerani and colleagues' study of the spatial concentration of four different sexually transmitted diseases. Sophisticated network analyses of street gangs and high-rate youth offenders suggest that most of the risk of gun violence concentrates in small networks of identifiable individuals and that the risk of homicide and nonfatal gunshot injury is associated not only with individual-level risk factors, but also with the contours of one's social network. The identification and analysis of gun violence problems with respect to area concentration and underlying networks closely correspond with the idea of the "social epidemiology" of HIV/AIDS, as described by Poundstone and colleagues.
The underlying etiology of hot spots and their analogs generally points toward the need for a concerted, targeted prevention strategy. The action-oriented approach found in many focused deterrence programs is similar to the public health posture toward understanding and intervening in youth violence problems. The initial stage of the process entails identifying and tracking the problem using some kind of surveillance system. This step is followed by an effort to understand the risk factors that contribute to the problem and develop an approach to ameliorate the problem and evaluate it. Finally, the gun violence prevention strategy may be introduced to other areas that face similar problems.
Convening an interagency working group with a locus of responsibility for action. Missing from the account of focused deterrence strategies reported in most law enforcement circles is the larger story of an evolving collaboration that spans the boundaries that divide criminal justice agencies from one another, criminal justice agencies from human service agencies, and criminal justice agencies from the community. As suggested by the Institute of Medicine and the National Research Council, such collaborations are necessary to legitimize, fund, equip, and operate complex strategies that are most likely to succeed in both controlling and preventing youth gun violence. In essence, the cities that implemented focused deterrence strategies leveraged resources by creating a very powerful "network of capacity" to prevent youth gun violence. These networks are well positioned to launch an effective response to recurring gun violence problems because criminal justice agencies, community groups, and social service agencies have coordinated and combined their efforts in ways that could magnify their separate effects. Successfully implemented focused deterrence strategies capitalize on these existing relationships by focusing these networks on the problem of gang-related gun violence.
Criminal justice agencies, unfortunately, work largely independently of one another, often at cross-purposes, often without coordination, and often in an atmosphere of distrust and dislike. This dynamic is often true of different elements operating within agencies as well. The capacity to deliver a meaningful violence prevention intervention within cities was created by convening an interagency working group of frontline personnel with decision-making power who could assemble a wide range of incentives and disincentives. It was also important to place on the group a locus of responsibility for reducing gun violence. Prior to the creation of the interagency working groups, no single organization in these cities was responsible for developing and implementing an overall strategy for reducing gun violence. Criminal justice agency partnerships provided a varied menu of enforcement options that could be tailored to particular gangs. Without these strategic partnerships, the available levers that could be pulled by the working group would have been limited. Social service and opportunity provision agencies were integrated into focused deterrence interventions to provide a much-needed "carrot" to balance the law enforcement "stick." The inclusion of prevention and intervention programs, such as gang outreach workers, in focused deterrence interventions was vitally important in securing community support and involvement in the program. Braga & Winship suggest that the legitimacy conferred upon the Boston Ceasefire initiative by key community members such as black clergy members was an equally important condition that facilitated the successful implementation of this innovative program. Public health research also suggests that streetworkers may help to reduce violent gun injuries by mediating ongoing conflicts among gangs.
--- THEORETICAL PRINCIPLES
Deterrence theory suggests that crime can be prevented when the offender perceives that the costs of committing the crime outweigh the benefits. Most discussions of the deterrence mechanism distinguish between "general" and "special" deterrence. General deterrence is the idea that the general population is dissuaded from committing crimes when it sees that punishment necessarily follows the commission of a crime. Special deterrence involves punishment administered to criminals with the intent to discourage them from committing crimes in the future. Much of the literature evaluating deterrence focuses on the effects of changing the certainty, swiftness, and severity of punishment associated with certain acts on the prevalence of those crimes. In addition to increasing the certainty, swiftness, and severity of sanctions associated with gun violence, focused deterrence strategies seek to prevent violent gun injuries through the advertising of the law enforcement strategy and the personalized nature of its application. Gang-involved youth must understand the new antiviolence regimes being imposed. The effective operation of general deterrence is dependent on the communication of punishment threats to relevant audiences. As Zimring & Hawkins observe, "the deterrence threat may best be viewed as a form of advertising." One noteworthy example of this principle is an evaluation of the 1975 Massachusetts Bartley-Fox amendment, which introduced a mandatory minimum one-year prison sentence for the illegal carrying of firearms. The high degree of publicity attendant on the amendment's passage, some of which was inaccurate, increased citizen compliance with existing legal stipulations surrounding firearm acquisition and possession, some of which were not in fact addressed by the amendment. Zimring & Hawkins further observe, "if the first task of the threatening agency is the communication of information, its second task is persuasion."
The available research suggests that deterrent effects are ultimately determined by offender perceptions of sanction risk and certainty. Durlauf & Nagin observe that "strategies that result in large and visible shifts in apprehension risk are most likely to have deterrent effects that are large enough not only to reduce crime but also apprehensions," and the authors identified focused deterrence strategies as having these characteristics. As described above, focused deterrence strategies target very specific behaviors by a relatively small number of chronic offenders who are highly vulnerable to criminal justice sanctions. The approach directly confronts gang youth and informs them that continued gun offending will not be tolerated and how the system will respond to violations of these new behavior standards. In-person meetings with gang youth are an important first step in altering their perceptions about sanction risk. As McGarrell et al. suggest, direct communications and affirmative follow-up responses are the types of new information that may cause gang members to reassess the risks of inflicting violent gun injuries.
--- Spillover Deterrent Effects
Focused deterrence strategies intended to reduce citywide levels of gang violence are explicitly designed to deter continued gun violence by gangs not directly subjected to the treatment. Kennedy et al. describe how the Boston Ceasefire working group went to considerable effort to design an intervention that would create "spillover effects" on other gangs and neighborhoods via their communication strategy. Enforcement actions, such as the arrest and prosecution of 23 members of the highly violent Intervale Posse gang, served as credible examples of the increased risks of engaging in gun violence in subsequent communication to other Boston gangs. Similarly, McGarrell et al.
reported that the Indianapolis Violence Reduction Partnership working group exploited the arrest and prosecution of 16 key members of the notorious Brightwood gang to accomplish its objective of communicating a zero tolerance for violence message to other gang members in the city. In essence, these strategies attempt to establish a deterrence regime by diffusing among a very particular audience knowledge of enhanced sanction risks associated with specific violent gun behaviors. Although numerous studies previously noted its existence, the phenomenon of "spillover benefits" or effects was first introduced by Clarke. In reviewing several studies of opportunity-reducing measures, Clarke reported on sizeable reductions in crimes in areas that did not receive the intervention. Later, Clarke & Weisburd offered a theoretical and descriptive elaboration on the broader concept known as "diffusion of benefits." Defined as the "unexpected reduction of crimes not directly targeted by the preventive action," and covering a wide range of possible forms, diffusion of benefits can be thought of as the complete opposite of displacement. Clarke & Weisburd proposed that two processes or mechanisms underlie diffusion: deterrence and discouragement. Using diffusion by deterrence, the potential offender is influenced by an exaggerated assessment of risk. For example, potential offenders may overestimate the "deterrent reach" of interventions and come to believe that they are "under a greater threat of apprehension than is, in fact, the case."
Figure 1 Conceptual model of the impact of focused deterrence strategies on gun violence: personal and vicarious punishment experiences produce, respectively, direct treatment impacts (selective incapacitation and special deterrence) and indirect treatment impacts (general and focused deterrence) on gun violence suppression. Source: Reference 5, p. 324.
Using diffusion by
discouragement, the potential offender is influenced by a miscalculated assessment of the effort needed to commit the crime, the reward associated with the successful completion of the crime, or both. In a systematic review of 120 situational crime prevention programs that considered displacement and diffusion effects, Guerette & Bowers found that the occurrence of displacement is more the "exception rather than the rule," and diffusion is somewhat more likely to take place than displacement. Similar results have been found regarding displacement in hot spots policing interventions, although diffusion of benefits effects have been even stronger. It is important to make a conceptual distinction between general deterrence, which is targeted at the public at large, and focused deterrence, which exploits criminal networks and is targeted at the acquaintances of punished offenders who are also likely to be criminally active. As shown in the lower half of Figure 1, both are forms of general deterrence that suppress crime through the vicarious experience of punishment rather than through personal punishment experiences. By explicitly targeting other members of the offending gang or individuals affiliated with other gangs, focused deterrence interventions use the gang forum and word of mouth to project a tangible threat of punishment. The result is a type of general deterrence that is more highly circumscribed than it is usually conceived.
--- Other Theoretical Perspectives
Many scholars suggest that other complementary violence reduction mechanisms are at work in the focused deterrence strategies described here, which need to be highlighted and better understood. Durlauf & Nagin's article focuses on the possibilities for increasing perceived risk and deterrence by increasing police presence.
However, in the focused deterrence approach, the emphasis is on not only increasing the risk of offending, but also decreasing opportunity structures for violence, deflecting offenders away from violence, increasing the collective efficacy of communities, and increasing the legitimacy of police actions. Indeed, program designers and implementers sought to generate large violent gun injury impacts from the multifaceted ways in which this approach influences gang-involved youth. Discouragement, as described above, emphasizes reducing the opportunities for crime and increasing alternative opportunity structures for offenders. In this context, situational crime prevention techniques are often implemented as part of the core pulling levers work in focused deterrence strategies. Extending guardianship, assisting natural surveillance, strengthening formal surveillance, reducing the anonymity of offenders, and utilizing place managers can greatly enhance the range and the quality of the various enforcement and regulatory levers that can be pulled on offending groups and key actors in criminal networks. The focused deterrence approach also seeks to redirect offenders away from violent crime by providing social services and opportunities. Gang members were offered job training, employment, substance abuse treatment, housing assistance, and various other services and opportunities. Finally, the focused deterrence approach takes advantage of recent theorizing regarding procedural justice and legitimacy. The effectiveness of policing is dependent on public perceptions of the legitimacy of police actions. Legitimacy is the public belief that there is a responsibility and an obligation to voluntarily accept and defer to the decisions made by authorities. Recent studies suggest that when procedural justice approaches are used by the police, citizens will not only evaluate the legitimacy of the police more highly, but also be more likely to obey the law in the future.
Advocates of focused deterrence strategies argue that targeted offenders should be treated with respect and dignity, reflecting procedural justice principles. The Chicago PSN strategy, for instance, sought to increase the likelihood that offenders would buy in and voluntarily comply with the prosocial, antiviolence norms being advocated by communicating with offenders in ways that enhanced procedural justice during the communication sessions.
--- SCIENTIFIC EVIDENCE ON GUN VIOLENCE REDUCTION IMPACTS
The available scientific evidence on the violence reduction value of focused deterrence strategies had been previously characterized as "promising" but "descriptive rather than evaluative" and as "limited" but "still evolving" by the US National Research Council's Committee to Review Research on Police Policy and Practices and by the Committee to Improve Research Information and Data on Firearms, respectively. A recently completed Campbell Collaboration systematic review identified ten focused deterrence evaluations; eight of these evaluations were completed after the National Research Council reports were published. It is important to note here that none of the eligible studies used randomized controlled experimental designs to analyze the impact of focused deterrence on crime; rather, all ten eligible studies used quasi-experimental designs. Nevertheless, a better-developed base of scientific evidence thus existed to assess whether violence prevention impacts are associated with this approach. Nine of the ten evaluations of pulling levers focused deterrence strategies concluded that these programs generated significant crime control benefits. Only the evaluation of Newark's Operation Ceasefire did not report any discernible crime prevention benefits generated by the violence reduction strategy, although its authors did report a small but positive reduction in gunshot wound incidents.
Evaluations of focused deterrence strategies targeting gangs and criminally active groups reported large, statistically significant reductions in violent crime: a 63% reduction in youth homicides in Boston, a 44% reduction in gun assault incidents in Lowell, a 42% reduction in gun homicides in Stockton, a 35% reduction in homicides of criminally active group members in Cincinnati, a 34% reduction in total homicides in Indianapolis, and noteworthy short-term reductions in violent crime in Los Angeles. Following Campbell Collaboration protocols, Braga & Weisburd used meta-analyses of program effects to determine the size and direction of the effects and to weight effect sizes based on the variance of the effect size and the study sample size. The forest plots in Figure 2 show the standardized difference in means between the treatment and control or comparison conditions with a 95% confidence interval plotted around them for all eligible studies. Points plotted to the right of 0 indicate a treatment effect; in this case, the study showed a reduction in crime or disorder. Points to the left of 0 indicate an effect where control conditions improved relative to treatment conditions. The meta-analysis of effect sizes suggests a strongly significant effect in favor of pulling levers focused deterrence strategies. The overall effect size for these studies is 0.604, above Cohen's standard for a medium effect (0.50) and below that for a large effect (0.80). Although there is evidence that quasi-experimental studies in crime and justice may exaggerate program outcomes, the overall effect size is still relatively large compared with assessments of interventions in crime and justice work more generally.
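The pooling step described above, weighting each study's standardized mean difference by the inverse of its variance, can be sketched briefly. The study labels, effect sizes, and variances below are hypothetical placeholders chosen for illustration, not values taken from the Braga & Weisburd review; only the mechanics of the calculation are intended to be instructive.

```python
import math

# Hypothetical per-study standardized mean differences (d) and variances --
# illustrative numbers only, NOT the actual values from the systematic review.
studies = [
    ("City A", 0.80, 0.04),
    ("City B", 0.55, 0.02),
    ("City C", 0.30, 0.03),
]

def fixed_effect_meta(studies):
    """Inverse-variance weighted mean effect size with a 95% confidence interval."""
    weights = [1.0 / var for _, _, var in studies]
    pooled = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

pooled, ci = fixed_effect_meta(studies)
print(f"pooled d = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Because more precise studies (smaller variance) receive larger weights, the pooled estimate is pulled toward them; a confidence interval that excludes zero corresponds to an overall effect that would appear to the right of 0 on a forest plot like the one described above.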
In their Campbell review, Braga & Weisburd noted that the only focused deterrence intervention to investigate the existence of spillover effects on gang violence was the Los Angeles evaluation carried out by Tita et al. The intervention targeted two rival gangs operating out of the same area. Criminal activity was substantially reduced among the two gangs over a six-month pre-post period. Slightly larger reductions in these crimes were evident among four nontargeted, rival gangs in surrounding areas during the same time period. Part of the explanation for the diffusion effects may rest with fewer feuds between the targeted and nontargeted gangs. The authors also speculated that diffusion effects may have been influenced by social ties among the targeted and rival gangs; this seemed to be especially the case for gang crimes involving guns. More recently, a rigorous quasi-experimental evaluation examined the main and spillover gun violence reduction impacts of a reconstituted Boston Ceasefire program implemented during the mid-2000s. Similar to the 1990s program, the post-2007 version of Boston Ceasefire attempted to create spillover deterrent effects onto other gangs that were socially connected to targeted gangs through rivalries and alliances. As Ceasefire interventions were completed on targeted gangs, the working group directly communicated to their rivals and allies that they would be next if these groups decided to retaliate against treated rival gangs or continue shootings in support of treated allied gangs. These messages were delivered to members of socially connected gangs via individual meetings with gang members under probation supervision and through direct street conversations with gang members by Boston Police officers, probation officers, and gang outreach workers.
Although these socially connected gangs were not directly subjected to the full Ceasefire treatment, the focused deterrence strategy was designed to reduce their gun violence behaviors via knowledge of what happened to their rivals and allies. As such, these socially connected gangs can be described as vicariously experiencing the Ceasefire treatment. The main effects evaluation reported that total shootings involving directly treated Ceasefire gangs were reduced by a statistically significant 31% relative to total shootings involving matched untreated comparison gangs. The spillover effects evaluation reported that total shootings involving vicariously treated Ceasefire gangs were reduced by a statistically significant 24% relative to total shootings involving comparison gangs. In some respects, the result showing an indirect or spillover effect of the Ceasefire intervention on gun violence represents a more complete test of deterrence theory than does the result showing a direct effect because, as emphasized in Figure 1, the direct impact of Ceasefire is actually composed of two distinct effects: selective incapacitation and special deterrence. Interventions that target violent gang members for prosecution and incarceration can achieve gang-level gun violence reductions simply by taking the most dangerous and prolific offenders from the targeted gang out of circulation. They can also achieve gun violence reductions by motivating punished offenders to cease offending or, more likely, to resort to nonviolent crimes that draw less attention from law enforcement. However, these two effects are hopelessly confounded, and it appears impossible for any empirical test to untangle them. From the standpoint of gun violence suppression, of course, the distinction between selective incapacitation and special deterrence is irrelevant; it matters only whether the intervention increases public safety.
From the standpoint of theory, on the other hand, the distinction is of paramount importance. Ceasefire is, after all, touted as a focused deterrence intervention as opposed to a selective incapacitation intervention. Consequently, because of the empirical ambiguity outlined above, any test of the deterrence efficacy of Ceasefire must, by definition, be evaluated through the spillover effects of the program. --- SUMMARY AND CONCLUSION The ultimate target of focused deterrence gun violence reduction strategies is the self-sustaining dynamic of retaliation that characterizes many ongoing gang conflicts. Focused deterrence operations are designed not to eliminate gangs or stop every aspect of gang activity, but to control and deter gang-involved gun violence. The communication of the antiviolence message, coupled with meaningful examples of the consequences brought to bear on gangs that break the rules, seeks to weaken or eliminate the "kill or be killed" norm as individuals recognize that their enemies will be operating under the new rules as well. The social service component of focused deterrence strategies serves as an independent good and also helps to remove excuses used by offenders to explain their offending. Social service providers present an alternative to illegal behavior by offering relevant jobs and social services. The availability of these services invalidates excuses that offenders' violent behaviors are the result of a lack of legitimate opportunities for employment, or other problems, in their neighborhoods. Focused deterrence strategies are a recent addition to the existing scholarly literature on violent gun injury control and prevention strategies. The available scientific evidence suggests that these new approaches to violence prevention and control generate gun violence reductions.
The positive outcomes of the existing body of evaluations indicate that additional randomized experimental evaluations, however difficult and costly, are warranted. Although the evaluation evidence needs to be strengthened and the theoretical underpinnings of the approach need refinement, jurisdictions suffering from serious gun violence problems should add focused deterrence strategies to their existing portfolio of prevention and control interventions. --- DISCLOSURE STATEMENT The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
Focused deterrence strategies are a relatively new addition to a growing portfolio of evidence-based violent gun injury prevention practices available to policy makers and practitioners. These strategies seek to change offender behavior by understanding the underlying violence-producing dynamics and conditions that sustain recurring violent gun injury problems and by implementing a blended strategy of law enforcement, community mobilization, and social service actions. Consistent with documented public health practice, the focused deterrence approach identifies underlying risk factors and causes of recurring violent gun injury problems, develops tailored responses to these underlying conditions, and measures the impact of implemented interventions. This article reviews the practice, theoretical principles, and evaluation evidence on focused deterrence strategies. Although more rigorous randomized studies are needed, the available empirical evidence suggests that these strategies generate noteworthy gun violence reduction impacts and should be part of a broader portfolio of violence prevention strategies available to policy makers and practitioners.
Background Since the beginning of the Syrian crisis in 2011, uprisings and civil unrest have resulted in widespread displacement both within and outside of Syria. Population movements are linked to violence, with the largest displaced populations coming from governorates where the greatest violence has occurred. Displacement is a survival strategy of endangered and deprived populations, but there have also been instances of deliberate and forced displacement by both the Government and opposition forces [1] and indications that all parties are using displacement as a tool for demographic change to create geographic areas with more homogenous populations [2]. The Syrian Government has been judged as failing in its obligations to protect its population from and during forced displacement [3]. Following a dramatic increase in 2013, the number of internally displaced populations has continued to grow, with many IDPs having moved multiple times because a single move has not protected them as battle lines constantly change and a breakdown of basic services spreads across the country. In addition to the massive scale of internal displacement, the flow of refugees into neighboring countries is also substantial and threatens to escalate tensions elsewhere in the region. The United Nations Refugee Agency registered more than 3.2 million Syrian refugees as of October 2014, with Lebanon, Turkey, and Jordan, respectively, hosting the largest refugee populations [4]. Indications are that the conflict, displacement, and humanitarian crisis in Syria, where large areas of the country are controlled by rebel groups, will persist and possibly escalate in the near future as the international coalition continues airstrikes against the Islamic State. The inability to accurately assess the status, size, and location of affected populations in Syria hampers humanitarian planning and provision of life-saving assistance [5].
The aim of this study is to characterize internal displacement in Syria, including trends in both time and place, and to provide insights on the association between displacement and selected measures of household well-being and humanitarian needs. --- Methods This paper presents findings from two complementary methods that provide different types of evidence on displacement in Syria. The first method, a desk review of displaced population estimates and movements, provides a retrospective analysis of national trends in displacement from March 2011 through June 2014. The second method, analysis of findings from a 2014 needs assessment by displacement status, provides insight into the displaced population and the association between displacement and humanitarian needs. A more detailed description of the needs assessment methodology and general findings is presented in Doocy et al. [6]. --- Desk review The desk review sought to determine the monthly number of IDPs by governorate and to characterize displacement trends over the course of the conflict. The scope was limited to publicly available websites, including those of international organizations and United Nations agencies, organizations involved in the humanitarian response, donors, and other sources such as academic institutions, think tanks, advocacy groups, and news organizations. Peer-reviewed journal publications were not included in the desk review because it was anticipated that few, if any, studies with primary data focusing on internal displacement in Syria would be identified; furthermore, the time delay associated with peer-review publication would render any existing studies outdated for the purposes of estimating current displacement. Publications from January 2011 onward were included, covering a several-month period prior to the start of the conflict in March 2011.
All identified sources were evaluated as potential sources of information for estimating IDP figures, locations, and flows, and for developing a situational timeframe that could inform the development of IDP estimates and their progression over time. The key information sources identified with regular reporting are presented in Table 1; a total of 159 documents from eight sources were included, in addition to 15 other references that were not part of routine reporting. Detailed information on IDP estimates and locations was extracted, including the type of information included in the document, the breakdown of information by governorate, the data collection and/or reporting time frame, and the data source type. Governorate-level monthly IDP estimates were developed by reviewing available data and assessing source quality and credibility. When multiple estimates for a given location and time were available, the estimate assessed to be the most robust was identified and an explanatory note for the decision was provided. Sources were evaluated for quality and credibility based upon the sponsoring organization, a clear description of methodology, the length of time between the estimation period and publication, and the frequency of reporting. Sources were considered robust if they offered regular reporting of IDPs, including specification of the source and/or methodology, with disaggregated data available. Where there was more than one credible estimate, a range of values or an average of multiple values was provided. When there were no credible estimates, imputed values from proximate and/or similar areas or modeled estimates were used. The final estimates of IDPs by governorate and year/month were presented with a central, or mid-range, estimate, as well as with low-range and high-range estimates, consistent with standard demographic estimation practice.
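The estimate-combination rules described above (select the most robust source; report a range or average when multiple credible estimates exist; present low, central, and high values) can be sketched as follows. The source names and IDP figures are invented for illustration and are not the study's actual data.

```python
# Hypothetical monthly IDP estimates for one governorate from three sources --
# invented figures, not the desk review's actual values.
source_estimates = {
    "source_A": 220_000,
    "source_B": 260_000,
    "source_C": 300_000,
}

# When more than one credible estimate exists, the paper reports a central
# (mid-range) value alongside low- and high-range bounds.
values = sorted(source_estimates.values())
low, high = values[0], values[-1]
central = sum(values) / len(values)  # average of the credible estimates

print(f"low={low}, central={central:.0f}, high={high}")
```

In the actual review, a single most-robust estimate was preferred when one source was clearly stronger; averaging and ranges, as sketched here, applied only when several estimates were judged equally credible.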
--- Needs assessment Between April and June 2014, International Orthodox Christian Charities (IOCC), an international nongovernmental organization, and the Greek Orthodox Patriarchate of Antioch and All the East (GOPA) conducted a needs assessment of 3869 Syrian households affected by the crisis with the objective of gaining a better understanding of humanitarian needs and assistance priorities. Given that no recent and accurate nationwide estimates of the displaced population or the population in need of assistance were available, planning a representative sample was exceptionally difficult; furthermore, security and access issues limited the ability to attain the desired geographic coverage. The assessment included 36 neighborhoods in 19 districts in nine governorates. Included neighborhoods met the following criteria: 1) no recent needs assessment from other organizations was available; and 2) large numbers of displaced or otherwise conflict-affected families perceived as vulnerable, poor, or underserved with humanitarian aid were present [per the judgment of IOCC/GOPA staff implementing assistance programs]. Neighborhoods were excluded if they met any of the following criteria: 1) the assessment could present a security threat to interviewers or respondents; 2) significant humanitarian assistance was being received; or 3) the neighborhood was perceived as affluent [per the judgment of IOCC/GOPA staff; there were no explicit criteria used, and the assessment was subjective]. The assessment was intended to sample different types of households in the community; in each location, the planned sample was 30 households. Religion was not a consideration in community or household selection. Eligible households included those that were displaced; host families of those displaced; returnees; and those otherwise directly affected by the conflict.
For areas with large numbers of families registered with IOCC/GOPA, including those receiving and not receiving humanitarian assistance, a list-based sampling approach was used in which households were randomly selected using interval sampling. For areas with few or no registered households, local community leaders were asked to refer the survey team to underserved families; where possible, multiple sources of referral were sought in each community. Lists were combined and cross-checked to prioritize families listed multiple times; interval sampling was then used to identify the remaining sample of households from the list. Data were collected using a structured multi-sectoral paper questionnaire developed by IOCC/GOPA based on the information needed to inform humanitarian assistance programming. The questionnaire was adapted from Sphere assessments, piloted in Syria, and then revised based on the pilot findings. --- Results --- Source appraisal A comprehensive review of information on displacement in Syria identified the Assessment Capacities Project (ACAPS) Regional Analysis for Syria and its predecessor, the Disaster Needs Analysis, as the most in-depth sources with regular reporting; across the reviewed documents, figures reported by ACAPS were determined to be the most consistently reliable. Overall, given the high levels of conflict and insecurity, the limited humanitarian access, and the highly dynamic rates and volume of internal displacement, it is not surprising that IDP estimates are somewhat imprecise and do not present a highly nuanced picture at the aggregate level. That said, the ACAPS analysis, both in its Disaster Needs Analysis and the later SNAP, provides a consistent source of robust analysis of the various primary sources of data on internal and external displacement over several years. A number of additional sources were identified that, while not intended to provide regular updates on displacement, offer a more comprehensive and nuanced portrait of the context of IDP estimation and of internal displacement itself throughout the country.
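The list-based sampling approach described in the Methods (combining referral lists, prioritizing households named by multiple sources, then interval sampling the remainder) might be sketched as follows. The household IDs, referral lists, and target sample size are hypothetical.

```python
import random

# Hypothetical household IDs referred by two community sources.
referrals = [
    ["hh01", "hh02", "hh03", "hh04", "hh05", "hh06"],
    ["hh03", "hh05", "hh07", "hh08"],
]

# Cross-check lists: households named by multiple sources are prioritized,
# mirroring the assessment's approach; the rest are sampled at a fixed interval.
counts = {}
for source in referrals:
    for hh in source:
        counts[hh] = counts.get(hh, 0) + 1

prioritized = [hh for hh, c in counts.items() if c > 1]
remaining = [hh for hh, c in counts.items() if c == 1]

# Systematic (interval) sampling of the remaining list; assumes the number of
# prioritized households is smaller than the target sample size.
target = 5
n_more = target - len(prioritized)
step = max(1, len(remaining) // n_more)  # sampling interval
start = random.randrange(step)           # random start for systematic sampling
sampled = remaining[start::step][:n_more]

sample = prioritized + sampled
print(sample)
```

The interval (`step`) plays the same role as the skip in classic systematic sampling: every `step`-th household is taken from a random starting point, spreading the sample across the list.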
The most robust of these sources were two thematic reports released after the desk review's June 2014 cutoff: one a compilation of Syrian governorate profiles compiled by OCHA, and the other an analysis of internal displacement performed by the Internal Displacement Monitoring Centre [12,13]. --- Estimates and trends in internal displacement Estimates of conflict-affected IDPs are based primarily on ACAPS analysis, which in turn derives from collation and analysis of various primary sources of data on internal displacement, including OCHA Humanitarian Bulletins, the MoLA, SARC, and J-RANS. There are periods of several months, including July-November 2012, May-August 2013, October-December 2013, and January-October 2014, for which updated estimates were not regularly released. Figure 2 presents trends in the total number of IDPs in Syria over time with high and low monthly IDP estimates, drawn largely from ACAPS analysis, both for the sake of consistency and because ACAPS provides the most robust and transparent approach of the available sources. By October 2013, ACAPS was reporting only a single estimate of 6.5 million IDPs, without ranges or other sub-breakdowns. Following the release of the Syria Integrated Needs Assessment in December 2013, the February 2014 ACAPS Regional Analysis Syria reported an updated estimate of 7.6 million IDPs. ACAPS cites this figure as the higher and, by their estimation, the more accurate of the OCHA and SINA IDP estimates [14]. In October 2014, however, ACAPS returned to the previous estimate of 6.5 million IDPs, citing only the estimate provided in OCHA's August 2014 governorate profiles report [12]. No updated displacement estimates have been reported since August 2014. Maps of IDP populations are also presented at the governorate level to provide insight into geographic patterns of displacement and change over time. Displacement, using national estimates available for different time periods, is shown in Fig.
3 to illustrate geographic shifts in the displaced population over the course of the conflict. Displacement is presented both as the absolute number of IDPs within each governorate and as a proportion of the 2011 pre-conflict population in each governorate [19]. The most recently reported absolute number of IDPs by governorate is presented in Table 2. Reported displacement in the Syrian conflict remained at several hundred thousand for the first few months of the conflict. A large increase in displacement was first reported in July 2012, when the total number of IDPs was estimated at 1.35 million. Displacement increased again in the latter half of 2012 and was estimated at 1.6 million in November 2012. At this time, the largest displaced populations in terms of absolute numbers were in Aleppo, Homs, and Rif Damascus, each of which had an IDP population of between 200,000 and 400,000. In terms of the relative burden of IDPs, these same three governorates also had the greatest proportions displaced, at 7 % in Aleppo, 14 % in Homs, and 16 % in Rif Damascus. In total, six governorates had displaced populations exceeding 100,000; IDP populations in Al-Hasakah, Ar-Raqqa, and Idleb were reported as between 100,000 and 200,000. By February 2013, national IDP estimates had increased considerably to 2.4 million, and IDP populations exceeded 100,000 in 9 of the 14 governorates; Idleb and Homs had the largest displaced populations, both of which exceeded 500,000. With respect to the burden of IDPs relative to the pre-conflict population, the governorates with the largest proportions displaced included Idleb. In the most recent estimates, in terms of relative burden, described as the proportion of the pre-conflict population that are IDPs, the governorates with the highest burden of displacement were Tartous and Rif Damascus; Dara'a was least affected in terms of both the absolute and proportionate size of the displaced population.
Assessing displaced populations as a proportion of the pre-conflict population rather than as a crude number may provide a more nuanced understanding of the demographic context in which displacement is occurring. Although Aleppo reported the highest crude number of IDPs in October 2014, when the displaced population is expressed as a percentage of the 2011 pre-conflict population, it has a comparatively low burden of displacement at 30 %. In contrast, Tartous, with a displaced population accounting for 47 % of the pre-conflict population, has an exceptionally high burden of displacement despite the smaller absolute size of its displaced population. --- Needs assessment A majority of households included in the needs assessment were displaced. Displaced households were categorized into two groups: those displaced outside their governorate and those displaced within their governorate; these proportions varied substantially by governorate. Differences in adjusted and unadjusted figures were observed for the variable summarizing displacement from outside or within the governorate; the difference in unadjusted and adjusted proportions is due to the large weight given to Aleppo. Areas with high levels of conflict such as Aleppo, Dara'a, and Homs tended to have larger numbers of households displaced from within the governorate. In contrast, IDPs in As-Sweida and Tartous, governorates that have seen lower levels of conflict, were more likely to be displaced from outside the governorate, suggesting that, as anticipated, populations are moving to areas perceived to be more secure. The vast majority of displaced households, 72.3 %, had been displaced for more than a year; 13.7 % had been displaced for between six months and a year and 13.9 % for less than six months. The highest proportions of newly displaced households, defined as having been displaced within the 3 months preceding the survey, were in Latakia and As-Sweida.
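The relative-burden calculation described above is simple arithmetic: IDPs divided by the pre-conflict population. The sketch below reproduces the Aleppo-versus-Tartous contrast; the absolute population figures are invented for illustration (the excerpt reports only the resulting shares, roughly 30 % and 47 %).

```python
# Burden of displacement as a share of the pre-conflict population.
def displacement_burden(idps: int, pre_conflict_pop: int) -> float:
    """Displaced population as a percentage of the 2011 pre-conflict population."""
    return 100.0 * idps / pre_conflict_pop

# Hypothetical absolute figures: a large IDP count can still be a comparatively
# low burden in a populous governorate...
aleppo = displacement_burden(idps=1_500_000, pre_conflict_pop=5_000_000)
# ...while a smaller IDP count can be a much higher relative burden.
tartous = displacement_burden(idps=376_000, pre_conflict_pop=800_000)
print(f"Aleppo: {aleppo:.0f}%  Tartous: {tartous:.0f}%")
```

This is why the paper argues that proportional burden, not the crude IDP count, better reflects the demographic pressure on a host governorate.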
In contrast, the lowest proportions of newly displaced households were found in Aleppo and Damascus, presumably because intense fighting in these areas is forcing households to move elsewhere. Nearly half of displaced households reported being displaced once; a sizeable proportion were displaced twice, and 25.5 % were displaced three or more times. The number of times a household was displaced varied by governorate, with households in the highly conflict-affected governorates of Dara'a and Aleppo reporting being displaced more times. Statistically significant differences in the frequency of displacement were also observed between populations displaced within their governorate and those displaced from other governorates. Households displaced within their governorate were significantly more likely to have moved multiple times than those displaced from other governorates. Nearly half of households displaced from outside their governorate were displaced one time, compared to 38.1 % of those displaced within their governorate. Only 5.9 % of households displaced from outside their governorate were displaced three or more times, compared to 18.6 % of those displaced within their governorate. Differences in selected sector-specific indicators were analyzed by displacement status and are presented in Table 3. No significant differences were observed between displaced and non-displaced households with respect to living conditions, with the exception of crowding, where displaced households were significantly less likely to have ≥3 people per sleeping room. Displaced households were also significantly more likely than non-displaced households to be food insecure (4.8 % among the non-displaced, p < 0.001) and to have household members requiring follow-up or specialized medical care. Priority unmet needs as perceived by respondents are summarized by sector and displacement status in Fig. 5.
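Group comparisons like the one above (5.9 % vs. 18.6 % of households displaced three or more times) rest on standard tests for a difference between two proportions. A minimal sketch follows; the group denominators are hypothetical, since the assessment's actual group sizes are not given in this excerpt, and the paper does not state which specific test was used.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical denominators chosen so the rates match the reported 5.9% vs 18.6%.
z, p = two_proportion_z(x1=59, n1=1000, x2=186, n2=1000)
print(f"z = {z:.2f}, p < 0.001: {p < 0.001}")
```

With group sizes anywhere near this magnitude, a 5.9-point versus 18.6-point gap is far beyond chance, consistent with the significant difference the assessment reports.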
A household reporting any specific need within a sector as one of its top five priorities for aid was classified as having an unmet need within that sector. With the exception of education, no significant differences were observed in priority unmet needs between displaced and non-displaced populations. Education was prioritized as an unmet need by 29.9 % of non-displaced households, as compared to 22.5 % of displaced households, despite lower enrollment rates among displaced households. --- Discussion The importance of incorporating current and accurate data into humanitarian assistance planning is evident; however, the challenges of effectively enumerating displaced populations often impede such efforts [20]. Attempts to enumerate or estimate IDPs may be clouded by political interests, fundraising, and intra-organizational relationships and often lack continuity and consistency [21]. In 2011, the peaceful anti-government protests that spread across the nation over several weeks were met with violence from the governing regime. From March 2011 to March 2012, internal displacement was viewed as "temporary and sparse," characterized by people fleeing conflict hot spots, moving temporarily to surrounding areas or nearby cities, and then returning home after protests and violence subsided [22,23]. By July 2012, the International Committee of the Red Cross and Red Crescent Societies declared that the threshold for an armed civil conflict had been met, and displacement, both internal and external, escalated to a new level [24,25]. The wave of displacement beginning in the second half of 2012 was characterized by the introduction of makeshift IDP camps, first along the Turkish border and later spreading across the country into southern governorates [13]. The scale of displacement continued to increase rapidly in the first half of 2013, with estimates of the displaced population exceeding 4 million by May 2013 [18].
In August 2013, as reports circulated of the use of chemical weapons in the suburbs of Damascus, the crisis entered an even more intensive phase. By September 2013 the Syrian refugee population exceeded 2 million and IDP estimates, while hampered by lack of access due to the deteriorating security situation in many areas, climbed to 6.5 million, with the majority of new displacement occurring in Homs, Idleb, Aleppo, and the northeastern parts of Syria [17]. Mass population movement in northern Syria was seen in late 2013 and early 2014 following continuous aerial bombardment, most notably in eastern Aleppo. In-fighting among opposition forces escalated in 2014 and, though centered in Al-Hasakah and Aleppo, clashes expanded into other governorates causing many that were already displaced to again flee areas previously considered to be safe [14,18]. Evidence from the needs assessment indicates the displaced population is not highly mobile, with most households reporting being displaced only one or two times . Areas with high levels of conflict such as Aleppo, Dara'a, and Homs tended to have larger numbers of households displaced from within the governorate. In contrast, IDPs in As-Sweida and Tartous, governorates that have seen lower levels of conflict, were more likely to be displaced from outside the governorate suggesting that, as anticipated, populations are moving to areas perceived to be more secure. The highest proportions of newly displaced households, defined as having been displaced within 3 months preceding the assessment, were in Latakia and As-Sweida . In contrast, the lowest proportions of newly displaced households were found in Aleppo , presumably because intense fighting in these areas is forcing households to move elsewhere. 
Unmet needs were relatively similar between displaced and non-displaced households; however, the 'non-displaced' population included in the needs assessment was highly selective and represents an especially vulnerable sub-group of the non-displaced population. Recent conflicts in the region, such as those in Iraq and the Gaza Strip, provide insight into the toll of sanctions and protracted conflict of this level on civilians [26,27]. While humanitarian assistance funding is scarce throughout the region, this is only part of the problem. The impact of sanctions, damaged infrastructure, and volatile security concerns on basic service provision for those in Syria is immense, often preventing civilians from accessing assistance even when supplies are available [27][28][29]. Attention to long-term planning and reform of existing aid delivery systems, including strengthening local capacity and infrastructure wherever possible, is essential for meeting the needs of IDPs in Syria in the years to come and, in the longer term, for the transition to post-conflict reconstruction [30]. --- Limitations Restricted access by the international community to those inside Syria makes accurate estimation of IDP populations a challenge. Primary IDP population estimates identified in this review draw from formal registration systems established by NGOs, UN agencies, and various organizations providing humanitarian assistance in the country. However, Syria's division into areas run by the government, areas led by various armed groups, and areas still contested makes countrywide monitoring of displacement difficult. Consequently, few primary sources of displacement data are available. As such, the key limitation of the desk review is its reliance on one secondary source and few primary sources of data. Findings from the needs assessment are a strong indication of the widespread unmet needs in Syria among both displaced and non-displaced populations.
However, they likely underrepresent the severity of the crisis and the extent of actual humanitarian needs. The limited number of interviews conducted in certain highly affected areas, such as Aleppo and Homs, and the inability to access areas not under government control and close to the fighting lines, which are likely to have less access to humanitarian assistance and other basic services, are important limitations of the needs assessment. --- Conclusions Displacement often corresponds to conflict levels; however, the direction of this relationship is not uniformly supported by governorate-level analysis. A number of governorates reporting a high proportion of the displaced population also have higher levels of conflict mortality and ongoing violence. It is important to note that differences in displacement within governorates may be related to the specific locations of conflict and displacement. For example, while displacement and conflict are reportedly high in Homs, the majority of IDPs may be living in more remote areas of the governorate while violence is focused in the city of Homs; as such, the relationship observed between displacement and conflict at the governorate level may not be mirrored on a smaller scale between cities. Governorate-level IDP estimates are also influenced by limited access to certain areas, unsubstantiated estimates, and substantial discrepancies in reporting between multiple sources. Figures from some governorates, notably Latakia and Tartous, support the hypothesis that lower levels of conflict are associated with greater numbers of IDPs as populations are pulled toward safer areas. Conversely, Quneitra and As-Sweida, which also have lower levels of violence and conflict mortality, do not share this burden and host small IDP populations, both as a proportion of the pre-conflict population and in absolute numbers.
As the conflict in Syria continues into its fourth year, the frontlines have expanded and security throughout the country, while ever-changing, continues to decline. Violence has spread to areas previously considered safe, leading many families already displaced to relocate multiple times. Secondary displacement is not uniformly reported across sources, and it is not always clear how individual IDP figures account for multiple displacements in their estimates. Additional details about displacement, including whether displaced individuals originated within the current governorate or outside of it, would further assist in understanding migration trends and in humanitarian assistance planning. While levels of unmet need are high in both displaced and non-displaced populations, the scale of the conflict-affected population and the capacity to provide humanitarian assistance necessitate targeting strategies that include both displacement and other vulnerability criteria to ensure that the needs of both displaced and non-displaced populations are met. Programming strategies specific to the length of displacement are essential, as newly displaced populations are likely to have very different needs than those displaced in their current location for an extended period; these needs will likely vary by location and over time as the conflict evolves. --- Abbreviations ACAPS: Assessment Capacities Project; ECHO: European Commission Humanitarian Aid Office; IDMC: Internal Displacement Monitoring Centre; IDP: internally displaced person; ICRC: International Committee of the Red Cross; IS: Islamic State; J-RANS: Joint Rapid Assessment of Syria; MoLa: Syrian Ministry of Local Administration; OCHA: United Nations Office for the Coordination of Humanitarian Affairs; SARC: Syrian Arab Red Crescent; SINA: Syria Integrated Needs Assessment; SNAP: Syria Needs Analysis Project; UN: United Nations; UNHCR: the United Nations refugee agency; UNICEF: the United Nations Children's Fund. 
--- Competing interests The authors declare that they have no competing interests. Authors' contributions SD and EL led the preparation of the manuscript; SD and WCR designed the desk review; EL conducted the desk review; TD led the data analysis; the IOCC/GOPA team led survey implementation and data collection; WCR participated in critical review. All authors read and approved the final manuscript.
Background: Since the start of the Syrian crisis in 2011, civil unrest and armed conflict in the country have resulted in a rapidly increasing number of people displaced both within and outside of Syria. Those displaced face immense challenges in meeting their basic needs. This study sought to characterize internal displacement in Syria, including trends in both time and place, and to provide insights on the association between displacement and selected measures of household well-being and humanitarian needs. Methods: This study presents findings from two complementary methods: a desk review of displaced population estimates and movements, and a needs assessment of 3930 Syrian households affected by the crisis. The desk review provides a retrospective analysis of national trends in displacement from March 2011 through June 2014. Analysis of the needs assessment findings by displacement status provides insight into the displaced population and the association between displacement and humanitarian needs. Results: Findings indicate that while displacement often corresponds to conflict levels, such trends were not uniformly observed in governorate-level analysis. Governorate-level IDP estimates do not provide information on a scale detailed enough to adequately plan humanitarian assistance. Furthermore, such estimates are often influenced by obstructed access to certain areas, unsubstantiated reports, and substantial discrepancies in reporting. Secondary displacement is not consistently reported across sources, nor are additional details about displacement, including whether displaced individuals originated within the current governorate or outside of it. More than half (56.4 %) of households reported being displaced more than once, with a majority displaced for more than one year (73.3 %). 
Some differences between displaced and non-displaced populations were observed in residence crowding, food consumption, health access, and education. Conclusions: Differences in reported living conditions and key health, nutrition, and education indicators between displaced and non-displaced populations indicate a need to better understand migration trends in order to inform the planning and provision of life-saving humanitarian assistance.
Background: More in-depth evidence about the complex relationships between risk and protective factors and mental health among adolescents has been warranted. Thus, the aim of the study was to examine the direct and indirect effects of experiencing pressure, bullying, and low social support on depressive symptoms and self-directed violence in adolescence. --- Methods: A cross-sectional study was conducted in 2022 among 15 823 Norwegian adolescents, aged 13-19 years. Structural equation modelling was used to assess the relationships between socioeconomic status, pressure, bullying, social support, depressive symptoms, self-harm and suicide thoughts. --- Results: Poor family economy and low parental education predicted high pressure, bullying, low parental support and depressive symptoms in males and females. Further, high pressure predicted depressive symptoms among males and females, and self-harm and suicide thoughts among females, but not males. Bullying predicted depressive symptoms, self-harm, and suicide thoughts among males and females. Low parental support predicted bullying, depressive symptoms, self-harm and suicide thoughts among males and females, and high pressure among females, but not males. Low teacher support predicted high pressure and depressive symptoms, whereas low friend support predicted bullying, depressive symptoms and suicide thoughts among males and females, and self-harm among males, but not females. Results also showed that depressive symptoms predicted self-harm and suicide thoughts among males and females. Finally, pressure, bullying and depressive symptoms were the main mediators by which family economy, parental education, friend support, teacher support and parental support predicted self-harm and suicide thoughts. 
--- Conclusions: Low socioeconomic status, pressure, bullying and low social support were important predictors of depressive symptoms and self-directed violence among Norwegian adolescents, through both direct and indirect mechanisms. --- Key messages: These results provide increased knowledge about how multiple risk and protective factors across domains impact depressive symptoms, self-harm and suicide thoughts among adolescents. This study highlights the importance of policies aiming at reducing economic and social inequalities, as they may also improve youth mental health. --- Background: A pressing public health issue exacerbated by the COVID-19 pandemic is domestic violence. The lockdowns intensified its major risk factors, such as financial strain, substance use and low social support, and also confined victims with their abusers. Cross-sectional studies confirm most of these associations, but longitudinal evidence is lacking. We investigate the association of domestic violence victimisation with these factors across the stages of the pandemic internationally, and the characteristics linked with first victimisation during the pandemic. --- Methods: We used four waves of an online survey of the adult general population of 14 countries in five continents in May 2020-May 2022 (N = 6051
Health facilities are important contact points for victims of violence against women and girls (VAWG) in need of care. In Albania, relevant data about health sector utilization remain scarce. This is a first attempt to quantify missed opportunities for survivors of VAWG in the health system. We used a sample of 151 victims of VAWG, residents of the capital, Tirana, retrieved from the dataset of the Counseling Line for Women and Girls, a non-governmental agency. All cases were victims of physical violence during 2018-2022. For each case in the sample, health records for the same period were traced in the medical registries of Tirana hospitals and primary health care (PHC) facilities. Hospital records were traced for all 151 cases, while PHC records could be completed only for the cases with a national ID. No case of VAWG was formally registered or reported by health services in Tirana in the 5 years prior to the survey. Nonetheless, 51.5% of survivors had used PHC services and 20.4% had used hospital services during the same period. 8.6% had been at the emergency service of the Trauma Hospital with acute symptoms of physical body harm, and 12.5% had received obstetric-gynecologic care. The average number of PHC visits among the women using the services was 3.3, ranging from 1 to 15 visits over 5 years. The most common reasons for the visits at PHC were chronic conditions (28.7%), followed by infectious diseases (22%) and mental illness (16.5%). Health sector opportunities to provide specific treatment and support to victims of VAWG are missed at multiple levels in Albania. 
While the piloting of integrated health and social care at selected PHC facilities in the country provides good momentum for responding to some of these unmet needs, a more comprehensive agenda for the health sector should address physicians' failure to identify and deal with VAWG as a health problem.
Introduction The world has experienced enormous health improvement in the last century, particularly in its latter half. Despite the overall improvement, however, we also have to acknowledge that developing countries benefited unequally from these health gains, with many countries continuing to have high mortality rates; in some parts of the world, the burden of ill health in the form of infectious and parasitic diseases is still prevalent. Communicable diseases are largely avoidable, as is the mortality they cause, but unequal access to healthcare and preventive remedies within a country can lead to a notable number of deaths resulting from lack of access to effective treatment [1]. Developing countries, particularly those in the middle range of GNP, are currently facing a double burden of malnutrition at both extreme ends of the same continuum: undernutrition and obesity [2]. Both undernutrition and obesity have wide-ranging health consequences in all age groups. Figure 1 shows a few selected developing countries with the double burden of malnutrition. As shown in Figure 1, many countries in Central and Latin America have a prevalence of overweight above 30% of their population, particularly Colombia, Chile, Peru, Brazil, Costa Rica, and Cuba. The graph also depicts an increasing trend in both underweight and overweight in most countries in Latin America and Africa. This problem is not confined to Latin America or Africa, but is also a common trend in Southeast Asia. Despite gloomy conditions in terms of global health, the world will at the same time see rapid growth of cities and income in the near future. In 1900 only 10% of the world's population lived in cities; today the proportion has increased to nearly 50% [3]. 
According to United Nations estimates, almost all of the world's population growth between 2000 and 2030 will be concentrated in urban areas of developing countries; if the present trend continues, 60% of the developing world is expected to be urban by 2030. At the same time, income per person in developing countries is projected to grow at an annual rate of 3.4% between 2010 and 2015, twice the rate registered in the 1990s. Obesity is defined as excess body fat [4]. Overweight, on the other hand, means that body weight is above the ideal or standard weight for height. A person may be overweight but not necessarily overfat; this is common among athletes such as football players [5]. However, a person who is grossly overweight will normally also be overfat. The World Health Organization defines obesity as a body mass index (BMI) equal to or greater than 30, and overweight as a BMI between 25.0 and 29.9 [6]. At the physiological level, obesity can be described as a condition of abnormal or excessive fat accumulation in adipose tissue to the extent that health may be impaired [7]. The usual scientific explanation for obesity is an imbalance between energy intake and energy expenditure: when intake is greater than expenditure, excess fat accumulates. However, understanding the physiological basis alone is not adequate; obesity has become a pandemic, with a trend towards global obesity, or 'globosity' [8]. In western countries the prevalence of obesity is beyond control despite the knowledge and research they have accumulated [9,10]. Being obese is associated with increased blood pressure, elevated total cholesterol, abnormal lipoprotein ratios, hyperinsulinemia, and type 2 diabetes [11]. The most prevalent and immediate consequence of obesity, however, may be its negative impact on quality of life [12]. 
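The WHO cut-offs just described translate directly into code. The following is a minimal sketch; the function names are illustrative, and the 18.5 lower bound for normal weight is the standard WHO value rather than a figure stated in the text:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2


def who_category(bmi_value: float) -> str:
    """Map a BMI value to the WHO categories used in this study.

    BMI >= 30 is obese and 25.0-29.9 is overweight, per the text;
    the 18.5 underweight threshold is the standard WHO value (an assumption here).
    """
    if bmi_value >= 30.0:
        return "obese"
    if bmi_value >= 25.0:
        return "overweight"
    if bmi_value >= 18.5:
        return "normal"
    return "underweight"
```

For instance, a person weighing 80 kg at 1.60 m has a BMI of 31.25 and would fall in the obese category, matching the study's inclusion criterion of BMI above 25.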
Unfavorable psychological factors, lower self-ratings of health, and worse health-related behavior can be found in overweight and obese individuals. Obese individuals are more likely to be dissatisfied with their body shape and size [13,14]. Weight stigma increases vulnerability to depression, low self-esteem, poor body image, maladaptive eating behaviors and exercise avoidance [15]. Thinness is a beauty ideal in both Europe and the US, so being overweight or obese may contribute to body dissatisfaction and low self-esteem that increase the risk of depression [16]. Some obese people report social anxiety, whereby they are embarrassed to go out because they may not 'fit' into a chair in a restaurant or an airplane, for example. Being obese reduces their self-esteem, and the effect on their social life leaves them isolated and vulnerable [17]. This study attempted to assess self-perception and quality of life among housewives in rural households in the State of Kelantan, Malaysia, and at the same time solicited people's perceptions of obesity based on their cultural and socioeconomic context. --- Methods Population Sample - Respondents of this study were selected by cluster sampling from a list of rural villages within sub-districts selected by random sampling from the 8 sub-districts in the District of Bachok in the State of Kelantan, Malaysia. Included in the study were female housewives aged 20 years and over with a body mass index above 25. Other inclusion criteria were being healthy and not suffering from any serious diseases, being non-pregnant, and giving written consent to be interviewed and to have body measurements taken. Excluded were those aged below 20, with a body mass index below 25, or suffering from serious illnesses or psychiatric problems; also excluded were pregnant women and those who did not consent to participate in the study. The study was approved by the Research Ethical Committee of Universiti Sains Malaysia. 
The purpose and nature of the study were explained to all participants, who gave their written informed consent before participation. The study was done in full accordance with the ethical provisions of the World Medical Association Declaration of Helsinki. Sample Size - The sample size for this study was 421 housewives. The primary data were collected using questionnaire, interview, and focus group methods, with the researcher conducting a field survey among selected groups of respondents in different communities. The questionnaire focused on eating habits, body image, quality of life and socio-demography. The focus group discussions touched on the globalization of food consumption, lifestyles, and socio-cultural perceptions of obesity. Quality of Life Assessment - The assessment of quality of life among overweight and obese respondents used the ORWELL 97. This questionnaire has been translated into Bahasa Malaysia. Data Analysis - Data entry and analysis were performed using the SPSS for Windows software. The analysis consisted of descriptive and inferential statistics to describe the underlying factors and predicting variables in modifying body weight among rural housewives in Malaysia. The results also cover the quality of life of respondents in relation to overweight and obesity. --- Results A total of 421 respondents, all female housewives from 8 sub-districts in the District of Bachok, participated in the study. The ages of respondents were mostly within the range of 20-59 years, with the majority from the 40-59 age group and a mean age of 45.01 ± 9.01 years. In terms of marital status, 86.9% were married and the rest were either widowed or divorced. Household size and number of children are also shown in Table 1, with means of 6.00 ± 2.48 and 5.3 ± 3.0 people, respectively. More than 64% of the respondents had secondary education, while less than 10% did not have any form of formal education. 
As housewives, most respondents did not have personal income, while in terms of household income the majority were in the bracket below RM1000 per month. About 75% of the respondents spent less than RM500 per month on food for the household; the mean monthly expenditure on food was RM400.62. The respondents were asked about their self-perception of health and physical activity; the findings are shown in Table 3, where 66.7% considered themselves very healthy or healthy. Almost all respondents planned to lose weight. The respondents were also asked about their priorities in life; Table 3 also lists the ranking of priorities by respondents. The number one priority in Table 3 is to be physically healthy, followed by having a happy family, self-happiness, being wealthy, being emotionally healthy, modest living, sanity, and higher education. The respondents' current spouses/partners, expectations, and preferred sexual partners in relation to body weight are all shown in Table 3. More than 66% have spouses or partners of normal weight and only 18% have obese partners. More than 70% of respondents expected their current partners to maintain their current weight, and about 20% expected them to lose weight. Regarding sexual partners, more than 95% preferred sexual partners of normal weight. Table 4 reports the respondents' responses on what obese and thin persons represent. More than 55% said that obesity symbolizes happiness, 19.4% said it reflects sickness, 16.1% thought it reflects laziness, and 5.5% said it results from a lack of control in food consumption. Regarding thinness, 42.2% thought thin people were not happy, 22.7% said thinness was due to fear of eating, 19.8% thought thin people may be sick, and 9.6% said thinness reflects a weak person. Perceptions of what defines a beautiful woman are presented in Table 4. 
Most respondents rated behavior and personality as the most important indicator, followed by facial beauty and the shape of the body. In defining a handsome male, behavior and personality were also rated highest, followed by body shape and facial attractiveness. Table 4 also presents the perceptions of respondents with respect to a beautiful body or shape. For females, thinness or slenderness was considered the most important attribute, followed by height; for males, a beautiful body was defined as being tall, followed by being thin and being muscular. On body self-perception, 90.5% were not satisfied with their current body shape, the main reason being that they perceived themselves as obese or overweight. A self-reported measure of obesity-related quality of life was administered to the respondents to assess whether their weight affected their quality of life [18]. The ORWELL 97 consists of 18 items; for each item the respondent scores, on a 4-point Likert scale, the occurrence and severity of the symptom and the subjective relevance of the symptom-related impairment in the respondent's own life. The score of each item is calculated as the product of occurrence and relevance, and the total ORWELL 97 score is obtained as the sum of the scores of the individual items. Higher ORWELL 97 scores mean a lower quality of life. The ORWELL 97 scores for the entire dataset are shown in Table 5, with a mean total score of 47.7 ± 35.2, a mean ORWELL 97-O of 25.3 ± 16.3, and a mean ORWELL 97-R of 22.4 ± 18.9. --- Discussion Understanding community views and perceptions regarding health and obesity is essential to design and achieve successful health promotion strategies. The actions people take to maintain their health depend on how they perceive the threat of the disease. 
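The ORWELL 97 scoring rule described above (item score = occurrence × relevance; total = sum over the 18 items, with higher totals meaning lower quality of life) can be sketched as follows. This is a minimal illustration, not the official scoring software; the 0-3 coding of the 4-point Likert scale and the function name are assumptions:

```python
def orwell97_total(items):
    """Total ORWELL 97 score from (occurrence, relevance) rating pairs.

    Each item of the questionnaire contributes occurrence * relevance;
    the total is the sum over all items. Ratings are assumed to be coded
    0-3 (a 4-point Likert scale); a full questionnaire has 18 pairs.
    """
    total = 0
    for occurrence, relevance in items:
        if not (0 <= occurrence <= 3 and 0 <= relevance <= 3):
            raise ValueError("Likert ratings assumed to lie in the range 0-3")
        total += occurrence * relevance
    return total
```

Under this coding, the maximum possible total for 18 items is 18 × 3 × 3 = 162, which makes the reported mean of 47.7 a moderate score, consistent with the authors' interpretation.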
In other words, when people perceive that they are susceptible to a disease and are likely to suffer serious consequences from it, they tend to take action to prevent it. This study aimed to explore community perceptions of obesity and obesity-related quality of life among overweight and obese housewives in rural areas of Bachok District, Kelantan, Malaysia. The results of the survey show a common trend in people's perceptions of health, dietary practices and obesity. Even though more than 66 percent of the respondents perceived themselves as healthy or very healthy, 96.2% said they planned to lose weight; in other words, although all were overweight, some still considered themselves healthy. This result was unexpected, as overweight and obese respondents are more likely to report poorer health than those of normal weight [19], given that studies have demonstrated that there is no healthy pattern of increased weight [20]. The high proportion of obese and overweight rural housewives in Bachok reporting good health could be explained by their low socioeconomic status. Indeed, a negative association between high education and poor self-reported health was found in a recent study involving women in St. Petersburg, Estonia and Finland [19]. In St. Petersburg, unlike the other two areas, housewives rather than employed women less often had poor perceived health. Housewives in Bachok had low socioeconomic status: most had personal and household incomes below the current minimum basic wage of RM900 in Peninsular Malaysia, and education levels below higher education. A quarter of the respondents had spouses who were overweight or obese. Thus, although the respondents themselves were all overweight or obese, about two thirds of them had spouses of normal weight. 
The results on body self-perception were expected, because the respondents selected were all overweight or obese. It is interesting to note that even though a full 90.5% of the women were not satisfied with their body shape, a high percentage of respondents perceived that obesity symbolizes happiness, seemingly reflecting a view that it is alright to be obese and that only happy people have a good appetite. Likewise, thinness symbolized people who are not happy and those who fear or resist eating, and thus avoid eating or lack appetite; thin people were also perceived as sick and weak. Happiness here is perceived as an obesogenic factor, as it is tied to comfort eating and weight gain. This finding corroborates a recent study [21] which reported that happier people are more likely to overeat than unhappy individuals. On the other hand, a substantial proportion perceived obese people as sick and lazy; people can be sick as a result of imbalances in body metabolism or of overindulgence in food. Lack of self-control is also seen as one of the characteristics of obese people; lack of control here can mean the inability to resist food and eating temptations, or an overall lack of self-discipline. In terms of life priorities, the greatest proportion chose physical health as the number one priority, with having a happy family second, self-happiness or self-contentment third, and being rich fourth. These results show the close relationship between being healthy and having a happy family, including personal happiness. The results on the perception of beauty show how important the character or behavior of a person is in society, and how powerful an influence it has in determining acceptance by society at large. 
This may be unique to Malaysia, where a person's worth lies in his or her behavior: one is evaluated on how one conducts oneself within the norms expected by one's society. This is also not surprising because the housewives are from one of the most culturally conservative and prudish States in Malaysia, where attractiveness and a person's worth are socially based on character rather than on body shape and facial looks, as in western societies. Nevertheless, when it comes to perceptions of the ideal body size, respondent preferences were highest for the thin figure. This paradox could be linked to the nutritional and cultural transition accompanying globalization and the rapid growth of the Malaysian economy, with the concomitant acculturation to western societies. Thinness is indisputably the beauty ideal striven for in modern western societies, because of the socially constructed idea that physical attractiveness is one of a woman's most important assets. This study suggests that the values associated with self-perceptions of health, thinness and obesity could be influenced by socio-cultural conditions. The relationship between obesity and quality of life is not always direct, because of the various domains or components of quality of life measures. For this study, obesity-related well-being was used as the instrument for assessing the quality of life of respondents [12,18]. Past studies have reported that obese individuals had a poorer physical quality of life than normal-weight individuals [22,23], a condition also related to impaired physical well-being among obese individuals. Thus, the impact of weight on physical and psychological well-being is a very important area that needs further research. The total ORWELL 97 score is comparable to the mean total score of the population studied by Mannucci in Italy, which was 47.9. 
However, the scores for both ORWELL 97-O and ORWELL 97-R were lower than those of the Italian population. According to the interpretation of ORWELL scores, a lower score means a better quality of life. These results also differed from the total ORWELL 97 findings from Indonesia, the Philippines, and Thailand [24], which may mean that overweight and obese respondents in Bachok have a better quality of life than their counterparts in those countries. --- Conclusion This study surveyed the perceptions of a rural housewife population regarding health, obesity and the impact of weight on quality of life. The results indicated that perceptions of obesity did not differ very much between respondents; in fact, there were many similarities in their perceptions of health, quality of life, personal health and satisfaction with their own bodies. However, their quality of life was within the normal or moderate level based on the ORWELL 97 assessment. Even though most of the respondents were aware of their body weight and indicated an intention to lose weight, they also reported themselves as healthy or very healthy, suggesting that public health messages intended for rural housewives need to be tailored to the health-related consequences of fatness. This is a preliminary study, and its results are very encouraging; they challenge the researchers to go into more depth to untangle the links between nutrition, socio-cultural behaviors and health consequences, particularly obesity. It is hoped that further research can provide more comprehensive findings regarding the factors and variables at play in accelerating or slowing down changes in dietary consumption and physical activity. --- Competing interests The authors declare no competing financial, professional or personal interests that might have influenced the performance or presentation of the work described in this manuscript. ---
Introduction: Obesity was in the past perceived to be a problem of the rich, but recent studies have reported that obesity is a worldwide problem and that rural populations are no less affected. Self-perceived health and weight appropriateness is an important component of weight-loss and eating behaviors and may be mediated by local, social and cultural patterning. Together with quality of life assessment, it should therefore be an important focal point for the design and implementation of clinical and public health policies. Methods: The present study was carried out to assess the self-perception of weight appropriateness as well as the quality of life of overweight and obese individuals among the rural population, particularly housewives. A total of 421 respondents participated in the study, of whom 36.6% were in the overweight and 63.4% in the obese categories. Results: The analysis of the survey revealed that self-perceptions regarding obesity among respondents show common similarities, particularly in self-reported health, dietary habits, and concepts of beauty and of a beautiful body. Character and behavior are highly regarded in evaluating a person's self-worth in society. The results on quality of life using the ORWELL 97 instrument show that the quality of life of respondents was moderate. Most of the respondents were aware of their body weight and indicated an intention to lose weight but also reported themselves as healthy or very healthy. The results of the survey indicated that perceptions of obesity did not differ very much between respondents; in fact, there were many similarities in their perceptions of health, quality of life, personal health and satisfaction with their own bodies. However, their quality of life was within the normal or moderate level based on the ORWELL 97 assessment. 
Even though most of the respondents were aware of their body weight and indicated an intention to lose weight they also reported themselves as healthy or very healthy, suggesting that public health messages intended for rural housewives need to be more tailored to health-related consequences of fatness.
Background Emerging, "game-changing" technologies create new interaction paradigms, usage situations, contexts, and intentions, and allow us to tackle challenges that were previously considered unsolvable. On the other hand, novel technologies and applications such as head-mounted displays for everyday assistance, deep neural networks for classification of all kinds of data, or self-driving vehicles for increased comfort and safety might create new threats, raise new concerns, and increase social tension between users and non-users. While some of these technologies and interactions have become more perceptible to others, other technologies might be very discreet but cause discomfort and affect the social climate through their presence or availability. --- Social acceptability issues may arise with emerging technologies in various contexts. Some examples are: Virtual Reality has become available and mobile, but social concerns might make it difficult to use VR with others around. Assistive devices need to balance the trade-off between being recognized as such, to increase social acceptability, and being unobtrusive, to reduce stigmata. A user's experience of interacting with an interface not only comprises her actual personal experience, but is also compounded by other people's perceptions: whether a device is considered "cool" or "weird" might influence impression management, and thus affect her willingness to use it, even when unwatched. Despite being highly useful and usable, some devices might also reveal information the user does not want to reveal, which might result in privacy breaches, stigmata, or the display of interactions to bystanders [8]. In public spaces, interactions with an interface may affect or even intrude on the social sphere of others, causing discomfort and social tension. 
In light of these issues, we believe that social aspects of technology usage need to be re-thought as one of HCI's quality characteristics, as the spread of information and communication technologies into all aspects of our lives has opened up many new trap doors to social acceptance, or non-acceptance, respectively. This workshop is intended to foster critical re-thinking of the social aspects of the adoption of novel, interactive technologies, often subsumed under the terms "social acceptance" and "social acceptability". While these terms have been frequently used in the field of HCI, they have only been sparsely defined, and there are no agreed-upon metrics to measure their effects. However, we believe that in the context of emerging technologies and their dissemination into all facets of public and personal life there is a need to discuss how social acceptability issues shall be dealt with in HCI research: does an interaction or a technology have to be specifically designed for social acceptance, or will acceptance come naturally over time if the interface is accepted by 'everyone else'? Should tech companies hire "Social Acceptance Advocates"? What about engaging in technology-driven research resulting in products that might not become socially acceptable in a lifetime? We speculate that social acceptability might not be a simple, binary decision between "acceptable" and "unacceptable": decisions are also contextual, may be temporary, and may be influenced by media coverage or greater societal changes. For this reason, we believe it is high time to re-think and reconsider the notion of social acceptability in CHI in an interdisciplinary workshop with researchers and practitioners from academia and industry. The main goals of this workshop are three-fold. First, we explore how "social acceptance" and "social acceptability" are understood, encountered, and used in the CHI community and beyond. 
Second, we will gather method suggestions for how the social acceptability of an interactive system can be measured and evaluated in a comprehensive way. Third, we discuss what types of social acceptability research would be the most useful for those trying to design/develop for social acceptability. --- Existing Work In 1994 Nielsen named social acceptability as an essential part of system acceptability [18]. Despite this, HCI research in the past decades mainly focused on creating and improving what Nielsen termed practical acceptability, including, e.g., usability and utility. Also, early observations, e.g., Hosokawa's Walkman Effect [10], were purely descriptive and did not aim to design for social acceptability. Technology acceptance research has been extended to incorporate social factors, but research and resulting models were influenced by the technology positivism of that time: potential non-acceptance of technologies was not considered. It has, however, been taken up more recently in various areas of HCI: • Social acceptability of "performing" interactions in front of others has been investigated for mobile, gestural and on-body interfaces [1,16,21,23,24], speech interfaces [7], and public displays [19]. • Social acceptability of technology usage has been investigated for various contexts and situations [13] or by particular user groups, e.g., for accessibility [20,25] or in medical use cases [4,27]. • Ethical and social implications of particular classes of technologies were examined, e.g., for wearables [11], smart glasses [5], drones [26,14], lifelogging cameras [12] and CCTV [17], and discussed for ubiquitous computing in general [2]. • A further strand of research, e.g., by the University of Twente, covers intelligent personal assistants and human-robot interaction. 
--- Workshop Goals We aim for a highly interdisciplinary workshop, bringing together designers, researchers, and practitioners from different domains of CHI to generate a shared understanding of "social acceptance" and "social acceptability" and to discuss the implications of this for the CHI community. We aim to discuss which problems and challenges regarding social acceptance are being faced during research and design activities, along with solution strategies for mitigating risks of social non-acceptance of new HCI technologies and artifacts. We furthermore aim to initiate a discourse about which methods and metrics are suitable for comprehensively measuring the social acceptability of an interactive system. We believe CHI 2018 to be the ideal venue for this workshop, as CHI invites an interdisciplinary dialogue between designers, researchers, and practitioners, and has a long tradition of looking at social aspects of technology usage, e.g., at what is "cool" [22] or "embarrassing" [6]. --- Workshop Questions Questions to be discussed during the workshop include, for example: -Which emerging technologies and their characteristics are particularly challenging with regard to social acceptability? -How can we develop/design for social acceptability? -What role does social acceptability play in the overall perception of system quality or user experience? -Which factors affect social acceptability? What role do new interaction techniques play? -How would disappearing computers (cf. Ubiquitous Computing visions) affect acceptance? -Does social acceptability need to be designed for, or is it something that is naturally achieved over time once a market gets used to the technology? -Where has research in the CHI community succeeded or failed in designing for social acceptability? -How can aspects of social acceptance be measured in valid and useful ways? 
--- Expected Outcome The main objective of this workshop is to provide a definition and common ground of what "social acceptability" is for the CHI community. A related practical outcome is the collection of existing methods to evaluate "social acceptability", as well as the ideation of new methods, measures or perspectives that are missing in existing theories. We further expect the workshop to set the scene for discussing the relevance of the "social acceptability" of emerging technologies for the CHI community and to chart a future research agenda for its systematic study. --- Participants and Expected Interest Social acceptance is an element that often becomes apparent in user studies, whether it was purposefully studied or not. For this reason the workshop aims to include both those who study, tackle and work on social acceptability, and those who stumble across social acceptability issues when testing prototypes or deploying their products in the wild. Hence, to better incorporate diverse participation in the workshop we have decided to offer two submission formats: 1. position papers, to be presented as posters, and 2. full papers, to be presented orally. The call for participation will be distributed via mailing lists, social media and our institutes' websites. We believe that the social acceptability of emerging technologies is of direct interest to all designers, researchers and practitioners who design, study or use interactive systems. The workshop has ties to various areas in HCI, including mobile, wearable and ubiquitous computing; interaction in public spaces; on-body interfaces; intelligent personal assistants and HRI; interactive and provocative design; and social software. It would also invite attendees with more general interests, such as information ethics, social computing, or any psycho-social dynamics of HCI. 
--- Organizers The workshop will be organised by an interdisciplinary team of researchers from 5 different countries/universities. Shaun Kane is an assistant professor in the Department of Computer Science at the University of Colorado Boulder, where he directs the Superhuman Computing Lab. His research explores the design of mobile and wearable assistive technology, including how to empower end users to create and customise their own assistive devices. Susanne Boll is full professor for Media Informatics and Multimedia Systems at the University of Oldenburg. In 2012, she joined the board of OFFIS - Institute for Information Technology. Susanne Boll is a lead researcher in a number of international and national research projects in the field of intelligent user interfaces, and leads the Human-Machine Cooperation Competence Cluster, which drives the activities of the OFFIS research institute in this field. She has co-organized several international events, is a member of several editorial boards, and has been a member of more than 100 Technical Program Committees. --- Pre-Workshop Plans Starting from December 2017 we will recruit a program committee to review submissions and decide on acceptance. Prior to CHI, participants will be asked to complete a survey on their understanding of "social acceptance" and "social acceptability", relevant measures and metrics, and their experience with acceptable systems. Following a "snowballing" principle, the participants will be encouraged to recruit at least 8 additional participants each. Results of the survey will be presented in the workshop's opening talk. --- Workshop Structure The workshop is planned as a 1-day workshop, structured as follows: Introduction and Ice Breaker: Introductory presentation to outline the workshop motivation and goals, summing up the results of the pre-workshop survey, followed by an ice breaking activity. 
Speed Dating: Following the "speed dating" procedure, participants will discuss their perspectives on social acceptance in HCI, and related issues they might have encountered during their research activities. Session 1: Participants present the results of their research in 7 minutes each. Session 2: Participants' presentations; identical format to session 1. Activities for the workshop's remainder will be discussed and agreed upon. Posters: Poster presentations, sharing experiences with socially acceptable interfaces. Group Session 1: Participants will divide into groups based on interest and experience. Each group will target one particular interaction paradigm or interface and redesign it in either a more acceptable or a totally unacceptable way, facilitating discussion of the factors that influence the social acceptability of a system. Group Session 2: Participants will come together in different groups and discuss how social acceptability is or could be measured and evaluated. A list of existing methods and examples suggested by the participants will be prepared based on the pre-workshop online survey. Discussions: Participants will be invited to present and discuss their findings. Key research questions, implications for the CHI community and future directions will be discussed and summed up in a poster. Wrap-up and Closing Remarks: Workshop results and remaining open questions will be wrapped up, and options for follow-up activities will be discussed. --- Post-Workshop Plans We will invite the participants to submit extended versions of their workshop papers to be included in a journal special issue. Outcomes of the method collection will be provided as an overview on the workshop's website and in a joint survey publication. Where possible, questionnaires, metrics and tools will be made available open-source via GitHub. --- Call for Participation What does social acceptance mean with respect to modern HCI? 
How can we design for social acceptability, and how can we evaluate it? Where has research in the CHI community succeeded or failed in designing for social acceptability? The concepts of technology acceptance and social acceptability are central to the long development of a human-centric understanding of interactive technology. However, considering the variety of modern ICT, the early definitions and theories related to the social and societal aspects of technology acceptance seem outdated and narrow. We invite academics and practitioners to discuss how social acceptance and acceptability are understood nowadays. We invite submissions of position papers (2 pages in SIGCHI Extended Abstracts format, to be presented as posters) or full papers (4 pages in SIGCHI Extended Abstracts format, to be presented orally). Possible contributions include, but are not limited to: Experiences, case studies, and lessons learned from designing socially acceptable interactive systems. Methodological contributions: conceptualizations, evaluation measures, design considerations, etc. Design/system contributions: interactive systems that provide socially acceptable qualities, provocative designs, or breaching experiments. User studies about social aspects of technology acceptance. The workshop participants will be selected based on the submissions' relevance to the workshop topic and their potential to engender insightful discussion at the workshop. For more information and to submit your contributions, please visit: https://www.socialacceptabilityworkshop.uol.de/
A central viewpoint for understanding the human aspects of interactive systems is the concept of technology acceptance. Actual or imagined disapproval from other people can have a major impact on how information technological innovations are received, but HCI lacks comprehensive, up-to-date, and actionable articulations of "social acceptability". The spread of information and communication technologies (ICT) into all aspects of our lives appears to have dramatically increased the range and scale of potential issues with social acceptance. This workshop brings together academics and practitioners to discuss what social acceptance and acceptability mean in the context of various emerging technologies and modern human-computer interaction. We aim to bring the concept of social acceptability in line with the current technology landscape, as well as to identify relevant research steps for making it more useful, actionable and researchable with well-operationalized metrics.
Background Slums are home to a significant and growing proportion of the world's population. It is estimated that over one billion people will soon live in such areas worldwide, with almost all of these in low- and middle-income countries [1,2]. The conditions that often define a slum, including a lack of access to clean water and sanitation, and of safe and durable housing, are also key risk factors for disease and poor welfare. Despite the increase in population and the significant health risks they face, slum residents remain an understudied population [2]. The NIHR Global Health Research Unit on Improving Health in Slums aims to examine the health care access and use, as well as to measure the health status, of slum residents in seven slum sites in four countries: Nigeria, Kenya, Pakistan and Bangladesh. A primary objective of the project is to complete a large-scale spatially-referenced survey of approximately 1000 households at each site. However, there is little guidance or consensus methodology on the conduct of such household surveys in complex urban areas like these, which frequently lack legal recognition, official censuses, or maps. Household surveys are a major research tool used to capture data about a population's socio-economic status, health status and behaviour, and other key characteristics. In order to conduct a survey, a sampling frame generally needs to be specified, i.e. the population from which the households are to be sampled. A complete sampling frame should list all possible households in the population of interest to ensure that the sampling method used can generate an unbiased sample. With a growing interest in spatial variation, geo-spatial analysis, and the consequences of spatial confounding, an increasing number of household surveys are spatially referenced, i.e. the exact location of the household is recorded, and samples are designed to be spatially representative as opposed to completely random. 
As an example, the Demographic and Health Surveys (DHS), a set of standardized, nationally representative surveys, are mostly spatially referenced [3]. The DHS and other similar surveys often use national censuses and the recorded population living in small enumeration areas or census tracts as a sampling frame. However, this may not be a sound basis for a sampling frame, particularly in the slum context, for a number of reasons: the information from a census may no longer be accurate with respect to where people reside if a significant amount of time has passed (for example, population turnover in two Nairobi slums was found to be approximately 25-30% of the population annually [4]); censuses can fail to cover people who are transient or who live in informal or makeshift accommodation; and censuses may not contain accurate location information, or census tracts may not be fine-grained enough to accurately represent the population distribution in very densely populated areas. The slum context presents difficulties for conducting a valid household survey, including the aforementioned issues but also complex social and governmental relations that can complicate access. This may partially account for the disproportionately low level of research in these areas. In this article we propose a method to conduct a multi-site, spatially-referenced household survey in slum settings, which we illustrate with the specific example of the Slum Health Project. The study sites are seven slums across five cities in four countries: Lagos, Nigeria; Ibadan, Nigeria; Nairobi, Kenya; Karachi, Pakistan; and Dhaka, Bangladesh. We do not focus on the specific content of the surveys and the analysis of data generated from them; rather, the aim of this protocol is to describe a generalisable method to produce a reliable and valid household survey in a slum setting, which can be replicated in new, similar settings. 
This includes accessing slum sites, constructing a spatially-referenced sampling frame, spatially-regulated sampling, and field data collection, management, and storage. The proposed method is intended to be relatively low-cost and sustainable, so that local residents may be equipped with the knowledge to continue to update maps, which in turn may be used to facilitate future research as well as for political recognition and planning. --- Methods/design Both 'household' and 'slum' have multiple technical definitions, and different agencies define 'slum' in different ways. For example, UN Habitat specifies that a slum area is "any specific place, whether a whole city, or a neighbourhood, [...] if half or more of all households lack improved water, improved sanitation, sufficient living area, durable housing, secure tenure, or combinations thereof [5]," while UNESCO uses: "A contiguous settlement where the inhabitants are characterised as having inadequate housing and basic services [6]." In our context, the study sites are well-known 'slum' areas whose boundaries are defined by the communities themselves or agreed among relevant stakeholders. Regardless of the definition, though, our method was designed to be applicable to any complex, irregular, or unplanned urban environment. For this article, a necessary condition for a group of people to be defined as a "household" was living in the same housing unit or connected premises. Other criteria, such as having common cooking arrangements, are frequently used, but are not required by this method. The aims of the method described in this paper are to produce a valid spatially-regulated sample and to conduct a survey using this sample. Our sampling frame within each study site is therefore the complete set of geo-located household locations. 
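Concretely, each entry in such a sampling frame ties a household to a mapped structure and to a location (each household is later located at the centroid of its structure). A minimal sketch in Python; the field names, identifiers, and coordinates are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Household:
    household_id: str  # e.g. structure ID plus a per-structure counter
    structure_id: str  # unique ID marked indelibly on the structure
    lat: float         # household location: the structure's centroid
    lon: float

def centroid(vertices):
    """Average of a structure footprint's vertices; a simple stand-in
    for whatever centroid the mapping software computes."""
    lats = [v[0] for v in vertices]
    lons = [v[1] for v in vertices]
    return sum(lats) / len(lats), sum(lons) / len(lons)

# Two households identified in a single mapped structure:
footprint = [(6.4550, 3.3840), (6.4550, 3.3850), (6.4560, 3.3850), (6.4560, 3.3840)]
lat, lon = centroid(footprint)
frame = [
    Household("S001-H1", "S001", lat, lon),
    Household("S001-H2", "S001", lat, lon),
]
```

The complete sampling frame is then just this list extended over every surveyed dwelling structure in the site.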
We describe firstly how access to slum sites might be obtained, secondly how a sampling frame is generated, thirdly how to sample from this sampling frame, and fourthly the process of collecting, managing and storing data from slum sites. --- Access to slum sites Negotiating and obtaining access to conduct research in slum sites, which are often physically challenging and socially complex informal environments, requires an in-depth understanding of the political and social structures at local, community, and national levels. Slums are heterogeneous, and the political and social structures within them vary between countries, cities, and even within the same city where more than one slum is present. The first step is a stakeholder mapping and engagement phase for each slum site. This is required in order to negotiate and obtain access, both from local authorities and local community leaders. The identification of all the relevant stakeholders should be undertaken by a local research team with knowledge of the national and local policy-drivers/makers as well as the political and social structures of the slum communities and local governments. There is not necessarily an optimal method for conducting the stakeholder mapping and engagement. A focus-group approach to stakeholder mapping can be cost-effective, rapid, and easily adaptable to a wide range of contexts. However, it may not always be suitable in practice, as it is not always easy to bring together busy people. Similarly, political sensitivities or social hierarchies may affect group dynamics and 'who' speaks. Therefore, a combination of focus groups and one-on-one engagements may be more practical and appropriate. The mapping and engagement exercise must consider each stakeholder's interests and influences based upon their agenda, power-base, credibility, and the consequences of the research for them, to ensure successful engagement. 
Understanding the sociology and political economy of slums is a key objective of stakeholder engagement exercises; however, that aspect of the research is beyond the scope of this article, and here we focus on the issue of engaging stakeholders to negotiate access and site entry. Table 1 provides some examples of access negotiations in the Slum Health Project. In all sites, we met with local community leaders and government officials, as well as NGOs and different community-based groups. The relevant government authority was the first point of contact, followed by local community leaders. A "snowball" approach was taken, so that any additional stakeholders identified in the meetings were also engaged. Once access was negotiated, key stakeholders were kept apprised of all research activities, including times and dates when field workers would be present. --- Constructing a sampling frame In order to generate a spatially-regulated sample, the sampling frame must list all households in the area of interest and their precise locations. In high-income country settings, listings of households and their addresses are well-maintained, along with accurate detailed maps permitting the enumeration and geo-location of each household. Frequently, neither maps nor household listings exist for slums, as the population is often not legally resident on the land, the structures that would be shown on maps are temporary and changeable, and there is little incentive for the state or private enterprise to produce maps. As a result there is a lack of information, formal or otherwise, about the location and function of structures and where households reside. Therefore, both a detailed map of all structures and a listing of households linked to locations on that map are required. Each country in the project formed a local mapping team, which included research staff and local community members, who were trained locally on each of the tasks described below. 
Figure 1 shows a simplified flow diagram of the processes used to generate the sampling frame. --- Generation of digital map data from satellite imagery In this project, the slum boundaries were defined in collaboration with the local research team and slum community leaders. Official administrative or electoral boundaries can be used, but these are often incorrect or out of date, particularly given the dynamic nature of the slum. Optical satellite images covering the study sites were procured from Airbus Intelligence at a resolution of ~30 cm; note that the resolution of freely available satellite imagery such as LandSat is insufficient for identifying the relevant features. --- Table 1 Illustrative examples of negotiation of access to slum sites --- Kenya The process for obtaining access to the two slum sites in Nairobi, Kenya firstly required engagement with the Nairobi City County Health Management Team to inform them of the planned research. A research protocol and ethical clearance letter from the nationally accredited Ethical Review Board was submitted. Following review, a research authorization letter was issued by the research committee, copied to the relevant sub-county authorities. Pre-requisite authorization for the project was also obtained from the National Commission for Science, Technology and Innovation. Subsequent meetings were held with the respective sub-county HMTs, who made a recommendation to work with community health assistants who were conversant with the different health service providers in the area. In addition, the local research team engaged with the local government chiefs based within the slum sites, who arranged for meetings with Community Advisory Committees (CACs). The Community Advisory Committees, comprising community leaders and representatives, were briefed on the project objectives and given the opportunity to air concerns regarding upcoming project activities. Access to the slum sites was granted during these meetings with CACs. 
Finally, an inception meeting was held in Nairobi to bring together county government officials, representatives from community advisory teams, and NGO representatives from each slum site, in order to explain the research project in more detail and how they could be involved as the research progressed. --- Nigeria Obtaining access to the three slum sites located in Ibadan and Lagos firstly required permission from the Governments of Oyo and Lagos States and notification of the chairpersons of the three relevant Local Government Areas (LGAs). Once this permission was granted and advocacy visits had been made to the LGA chairpersons, researchers met with the local traditional leaders' council of each of the communities. The site in Lagos had one traditional chief, while one site in Ibadan had a committee of several local chiefs, with one selected by them as spokesperson. The remaining site in Ibadan had two local chiefs, one for the indigenous Yoruba community and another for the sizeable migrant Hausa ethnic group resident there. The study was explained to each traditional chief-in-council, and their cooperation for the data collection exercises to be carried out in the communities was sought. Researchers also met with health practitioners operating within the study slum sites, including traditional healers, patent medicine vendors, clinic matrons and proprietors of health facilities, to ensure they were aware of the study, willing to provide information, and welcoming of researcher involvement. Lastly, in order to gain access to information about health facilities in the areas of study, permission was obtained from the State Ministry of Health and the Medical Officer of Health in each of the LGAs. An online mapping platform was set up using the Humanitarian OpenStreetMap Team (HOT) Tasking Manager, which is a free online Geoweb infrastructure for coordinating remote participatory mapping, i.e. the generation of map data from satellite imagery by a varied team in multiple locations. 
The HOT Tasking Manager subdivides the area of interest into smaller grids, each referred to as a "Task", that can be selected by a participant and mapped. Local project teams were first trained before recruiting additional participants, including slum residents, OpenStreetMap communities, and other project team members. Once the digital maps are completed and checked against the satellite imagery, they need to be validated through "ground-truthing", i.e. comparing the mapped features with observations on the ground. Each task is validated by an experienced mapper. The generated data are uploaded onto the OpenStreetMap online database [11]. --- Onsite participatory mapping The onsite participatory mapping is the "ground-truthing" stage: the accuracy of the digital map produced from the online mapping is checked in the field. This stage involves a number of steps. First, roads and footpaths are tracked in the study sites with handheld GPS devices to confirm their locations as mapped from the satellite imagery and produced in the digital map. Second, each structure's geometry is verified: if it is incorrect, the changes are drawn on printed versions of the digital maps in the field, which are scanned and overlaid with the digital maps to make any corrections using the FieldPapers.org service. Third, each structure is surveyed using the digital data collection tools to generate a unique identifier, which is also marked indelibly on the structure for future identification, and to determine its function. --- Identifying households Where structures are identified as dwellings, each household as defined is recorded and identified by the name of the head of household or family name. High population turnover and lack of tenure or rental contracts can result in households departing without notice. 
For the "Slum Health Project", any household that has not been observed by neighbours for a period of 3 months or more is no longer considered 'resident'. The spatial locations of the identified households are then linked to the relevant structure by the structure's unique identifier. The location of the household is specified as the structure's centroid. Once all structures are surveyed the sampling frame is complete. --- Sampling method As slums typically exhibit very substantial spatial heterogeneity, it is desirable that the sampled locations span the whole of the site. A geometrically simple way to achieve this is to sample the households at, or as close as possible to, the points of a regular lattice overlaid on the mapped site. However, this has the disadvantage of being biased in favour of sampling relatively isolated households. A completely random sample removes the bias but also results in uneven spatial coverage of the site. These considerations led Chipeta et al. [12] to suggest using an inhibitory sampling design, in which sampled locations are chosen at random subject to the spatially regulating constraint that no two sampled locations can be less than a specified distance d apart. The packing density of an inhibitory design is the fraction of the site area occupied by discs of diameter d centred on each sampled location. The maximum achievable packing density depends on the spatial arrangement of the available locations, here households, but in high-density settings a value of around 0.4 produces a highly regulated sample. Inhibitory designs are generally efficient for capturing spatial variation on the scale of the whole site, but can neither capture small-scale spatial variation nor distinguish it from non-spatial variation amongst the individuals who populate the sampled households. For this reason, Chipeta et al. [12] recommended tempering an inhibitory design by including a number of close pairs, i.e. 
augmenting an inhibitory design with a number of sampled locations, each one of which is located less than a specified distance e from the closest point of the inhibitory design, with e < d. In this context "close pairs" are taken to be households residing in the same structure. The number of close pairs and the value of d will be determined based on data from pilots conducted at each study site of between 20 and 30 households, purposively selected to maximise spatial variation. Sample size in this context is often based on pragmatic considerations, including resourcing and time. The "effective sample size" of a spatially correlated sample, i.e. the equivalently sized uncorrelated sample that provides the same information on a statistic of interest, is dependent on the degree of spatial correlation and the location of samples, among other things. Griffith [13], for example, provides a conceptual framework for considering the effective sample size of spatially correlated samples. --- Data collection methods Data collection methods in a slum context are dictated by similar considerations as in any other context: data quality, data security (including compliance with legislation that came into force in the European Union in May 2018 [14]), ease of use, and costs. Based on the above criteria we recommend the use of digital tablet devices over paper-based forms, as they reduce the risk of transcription error, protect data security through encryption and by not requiring the transport and storage of multiple paper forms, and reduce costs by not requiring extensive data entry. For the Slum Health Project, digital tablet devices were purchased for all field workers and locked with a password. Field workers were required to sign agreements to use the tablets responsibly, and all tablets are signed in and out. In terms of software, a number of both proprietary and open-source options are available, including Open Data Kit, RedCap, and SurveyCTO. 
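The inhibitory-plus-close-pairs design described above can be sketched in Python. This is a minimal illustration rather than Chipeta et al.'s algorithm verbatim: it draws a greedy random sample subject to the minimum-distance constraint d, then adds locations within a distance e < d of the primary sample (in the project itself, close pairs are simply households sharing a structure, i.e. at the same coordinates). Coordinates are assumed to be in metres.

```python
import math
import random

def inhibitory_sample(households, n, d, n_close, e, seed=0):
    """Spatially regulated sample: up to n 'primary' households chosen at
    random such that no two are less than d apart, then up to n_close
    extra households, each within e (< d) of some primary location."""
    rng = random.Random(seed)
    pool = list(households)
    rng.shuffle(pool)

    primary = []
    for p in pool:
        if all(math.dist(p, q) >= d for q in primary):
            primary.append(p)
            if len(primary) == n:
                break

    close = []
    for p in (h for h in pool if h not in primary):
        if any(math.dist(p, q) < e for q in primary):
            close.append(p)
            if len(close) == n_close:
                break
    return primary, close

# Rough guide for choosing d: packing density = n * pi * (d / 2)**2 / site_area,
# with a value of around 0.4 giving a highly regulated sample in dense settings.
households = [(10.0 * i, 10.0 * j) for i in range(10) for j in range(10)]
primary, close = inhibitory_sample(households, n=8, d=20.0, n_close=4, e=15.0, seed=1)
```

On this toy 10 x 10 grid of households at 10 m spacing, the call returns 8 primary locations that are pairwise at least 20 m apart, plus 4 close-pair locations each within 15 m of a primary location.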
We opted for the open-source Open Data Kit suite of software, which will improve sustainability [15]. Importantly, this software permits offline data collection, encrypts forms automatically, and uploads all submissions when the device is connected to the internet. Form programming was completed using xlsform [16]. These tools permit complex survey designs and skip structures, can restrict responses to reduce errors, and collect locations, signatures, and images as required. A data aggregation server was set up using a cloud server provider that permits full control of the server and of the location of data storage, to which access was strictly limited. Access to the server was secured using password-protected 256-bit SSH keys. A second data storage server was set up at the University of Warwick to permit access to the data for project members and to constitute a backup. The data collection process is as follows: 1. Field workers conduct the interviews, completing the forms on the tablet devices. The responses are checked by a field supervisor for any potential errors. Additional quality control steps include spot checks by field supervisors, i.e. returning to a sampled house and re-asking a subset of questions, and sit-ins on interviews by supervisors. If any potential errors are identified, the field worker returns to confirm responses; otherwise the form is finalised. The software encrypts finalised forms using AES-256 encryption, following which the forms are no longer accessible; they are submitted automatically to the server when the device is online and are then deleted from the tablet. 2. Encrypted data are stored on the data aggregation server. When requested, the data are decrypted, the unique identifiers of submissions are extracted and checked against a list of previous submissions, and, if there are new submissions, the data are processed and then re-encrypted using AES-256 encryption with a separate set of country-specific SSH keys.
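The identifier check in step 2 can be sketched as follows; the `instanceID` field name is an assumption standing in for whatever unique submission identifier the forms carry, and the decryption, re-encryption, and transfer steps are omitted:

```python
def keep_new_submissions(decrypted_forms, seen_ids):
    """Return only the forms whose unique identifier has not been
    submitted before, updating the running set of seen identifiers
    in place (a minimal sketch, not the project's pipeline code)."""
    new_forms = []
    for form in decrypted_forms:
        uid = form["instanceID"]  # assumed name of the unique submission id field
        if uid not in seen_ids:
            seen_ids.add(uid)
            new_forms.append(form)
    return new_forms

# A duplicate upload of submission "a1" is silently dropped.
forms = [{"instanceID": "a1", "household": 1},
         {"instanceID": "a2", "household": 2},
         {"instanceID": "a1", "household": 1}]
seen = set()
fresh = keep_new_submissions(forms, seen)
```

Re-encryption of `fresh` with the country-specific keys and transfer via SFTP would then follow.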
The data are then submitted to the data storage server via SFTP. Any unencrypted data are deleted. 3. Data are stored encrypted on the data storage server until required. Quality checks are conducted. On completion of data collection, the data will be 'cleaned' and merged into one final data set. --- Discussion Conducting valid and reliable surveys in slum areas is a necessary but difficult undertaking. With few exceptions, such as the Map Kibera project [17] or the Nairobi Urban Health Demographic Surveillance System [18], both in Nairobi, Kenya, maps and censuses of slums are out-of-date or non-existent. However, a reliable sampling frame is necessary to select representative samples. Otherwise, it is likely that the most transient residents, or those not officially recognised as resident, who are also likely to have the highest levels of poverty and ill-health, will be missed. In this article we have detailed a method to create a spatially-referenced sampling frame consisting of a census of all households in a slum, and thence a spatially-regulated representative sample. We discussed this in the context of a project investigating health care use and access in four countries. The method involves first generating a map using satellite imagery and then verifying and ground-truthing this map. Once all structures are identified, their use is determined, and all households resident in each structure, if any, are identified. Households are located on the basis of the structure in which they reside. A spatially regulated sampling method is used. Digital data collection methods are also outlined. There is no reliable means to validate these methods, since no 'gold standard' or even comparator data exist for these settings in general. However, each component of the overall method has a strong precedent. Participatory mapping using satellite imagery to generate maps quickly, cheaply, and reliably is widely used across many fields.
For example, humanitarian response teams frequently use this technique for planning in areas affected by natural disasters [19,20]. Ground-truthing to improve maps is also widely conducted and is considered necessary for valid maps. Ground-truthing is an on-the-spot data gathering activity to verify what has already been collected in the past or remotely, or to collect additional information. An example in our context is the Map Kibera project in Kenya, where local residents mapped one of the largest slums in Africa to produce points of interest throughout the slum, which eventually led to the GroundTruth Initiative [17,21]. The advantage of these tools is that they are simple to use, and the OpenStreetMap software is open source and online. Only a computer or smartphone is required for data input. This enables the continued maintenance of the maps generated in this project by local communities, providing an additional benefit of the work. The growth of computing power has led to an increased ability to estimate complex statistical models that take account of spatial variation [22]. Observations are likely to be correlated with one another by virtue of their proximity, and not taking this into account may lead to biased estimates of population statistics or intervention effects. The spatial variation is also of interest in its own right for modelling disease prevalence and incidence and their relation to the environment [22]. Both of these types of analyses are particularly relevant in complex urban areas like slums, which exhibit a high degree of spatial heterogeneity. Spatial referencing is thus highly important and is recommended for future work. Obtaining access to slums is a key part of the methodology. Understanding the political and social contexts and obtaining buy-in to the work from local stakeholders is key to the success of any research.
For example, there is evidence to suggest that people may misrepresent themselves if they believe doing so will result in benefits to their community [23]. Community engagement is therefore required to access the slums and ensure the reliability of the results, as well as to build trust, improve communication, encourage feedback, and identify and respond to community need [24]. At the same time, the mapping, ground-truthing and survey processes we describe are "powerful tools in and of themselves for community engagement" [24]. We acknowledge there may be weaknesses to the methods discussed here. Populations living in slums can be highly mobile. Similarly, makeshift structures are liable to be demolished and rebuilt. This may render the maps incorrect over a relatively short time scale. The validity and usefulness of these methods are therefore closely entwined with their sustainability: only by providing the community with the training and tools to update and maintain their maps can it be ensured that they will remain accurate after any project funding has ended. This article discusses methods for conducting a spatially-referenced household survey in slum areas, which are applicable to other complex urban settings. Much of the method that has been proposed mirrors that of household surveys in other urban areas; in general, however, previous household surveys have been conducted where reliable sampling frames exist. This method builds on recent programme-specific efforts to map slum-based households: conducting surveys in slums requires flexibility and contextualisation in light of the mapping and research history of the site of interest and key stakeholders. Slum surveys remain rare despite the large and growing population of slum dwellers, who also carry the highest burden of disease and poverty. These methods provide a key example for future work in this area. ---
Background: Household surveys are a key epidemiological, medical, and social research method. In poor urban environments, such as slums, censuses can often be out-of-date or fail to record transient residents, maps may be incomplete, and access to sites can be limited, all of which prohibit obtaining an accurate sampling frame. This article describes a method to conduct a survey in slum settings in the context of the NIHR Global Health Research Unit on Improving Health in Slums project. Methods: We identify four key steps: obtaining site access, generation of a sampling frame, sampling, and field data collection. Stakeholder identification and engagement is required to negotiate access. A spatially-referenced sampling frame can be generated by: remote participatory mapping from satellite imagery; local participatory mapping and ground-truthing; and identification of all residents of each structure. We propose to use a spatially-regulated sampling method to ensure spatial coverage across the site. Finally, data collection using tablet devices and open-source software can be conducted using the generated sample and maps. Discussion: Slums are home to a growing population who face some of the highest burdens of disease yet who remain relatively understudied. Difficulties conducting surveys in these locations may explain this disparity. We propose a generalisable, scientifically valid method that is sustainable and ensures community engagement.
Introduction Colorectal cancer (CRC) is the third most common cancer and the second most common cause of cancer death worldwide. Early detection through screening is crucial in reducing mortality, morbidity, and treatment costs associated with CRC. Since CRC is a preventable cancer that meets the criteria for appropriate screening, it is essential to increase screening rates, particularly in high-risk groups. The United States Preventive Services Task Force recommends screening for CRC in all adults aged 50 to 75 years, either annually by stool-based tests with high sensitivity (fecal occult blood testing [FOBT]/fecal immunochemical testing [FIT]) or at longer intervals by direct visualization tests such as colonoscopy. --- Materials and Methods The systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) as a guide. The identification of relevant studies began with systematic searches using PubMed and Ovid Medline. The search covered the period up to July 2022 and employed keywords such as "colorectal cancer screening," "South Asian," and "Asian Indians." The following MeSH terms were used: "colorectal neoplasm," "early detection of cancer," and "mass screening." The search yielded 75 articles: 33 from PubMed and 42 from Ovid Medline. Inclusion criteria included English-language articles published between 2000 and 2022, focusing on the South Asian population, and reporting barriers, facilitators, or interventions for CRC screening. Exclusion criteria included duplicate articles and those that did not meet the inclusion criteria. After removing duplicates, 49 articles were included for further screening for eligibility. Three more articles were identified through other methods as follows: one abstract was found through a Google search on "Barriers to Colorectal Cancer Screening in South Asians," and two articles were identified from citation searching. Two investigators initially screened the articles to determine their relevance to the study question and purpose.
Each article was subsequently presented to the other reviewer for further assessment of appropriateness for inclusion or exclusion (Figure 1: the Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram). The principal investigator remained available to address any disagreements that might arise between the reviewers. After evaluation, 32 articles were deemed eligible for inclusion in the study. The data collected included participant demographics, screening rates, barriers, facilitators, and interventions.
--- Results
--- CRC Screening
The findings of several studies revealed that South Asians had lower CRC screening rates than other populations. Glenn et al. reported heterogeneity among CRC screening rates in South Asian subgroups: Indian, Pakistani, Bangladeshi, and Sri Lankan.
--- Knowledge of CRC and CRC Screening/Awareness
Poor knowledge of CRC and CRC screening/awareness was commonly associated with lower CRC screening rates among South Asians. Recently, Patel et al. reported that a lack of information and knowledge regarding CRC and CRC screening is the most frequently cited cultural challenge factor. In another study by Manne et al., the level of awareness of CRC screening among individuals in the South Asian population was relatively low: 54.6% had never heard of FOBT, 73.9% had never heard of FSIG, 31% had never heard of colonoscopy, and 29.5% had never heard of any of these tests. Compared with white British adults, South Asians in the United Kingdom showed a pronounced lack of knowledge about CRC screening.
--- Health Care Factors
Health care factors that can influence CRC screening include physician recommendations, frequency of physician visits, and insurance coverage. A lack of physician recommendation was reported as a barrier to CRC screening in three studies and one abstract. In Kazi et al.'s study, the participants' top reason for not receiving CRC screenings was a lack of communication from physicians regarding the necessity of screening. In contrast, five studies reported physician recommendation as a facilitator of CRC screening. Patel et al. revealed that physician referrals were the most significant facilitator for CRC screening. Participants of both genders mentioned that physicians play a key role in encouraging CRC screening uptake. Furthermore, Choi et al. found that health professionals' recommendations were the only factor that significantly interacted with ethnicity in relation to facilitating CRC screening. Lofters et al. identified physician characteristics that acted as barriers to CRC screening, such as being South Asian-trained or Caribbean/Latin American-trained, and shorter time in independent practice. However, in the same study, physicians who were Canadian graduates, were Eastern European-trained, or had more experience contributed to higher CRC screening rates. In addition, the frequency of physician visits was positively associated with CRC screening. A lack of insurance coverage was cited as a barrier to CRC screening in four studies. Conversely, having insurance was identified as a facilitator for CRC screening in six studies.
--- Cultural and Psychological Factors
Four studies cited psychological factors as barriers to CRC screening. These psychological barriers include anxiety, fear, embarrassment, and the perceived shame associated with CRC screening and diagnosis. Almost half of the participants in one study reported shame/discomfort and embarrassment associated with CRC and CRC screening in their communities. One participant in the study thought the cause of CRC "may be dirty things, dirty people," and another participant stated, "a lot of people feel ashamed… for this reason they don't do it." However, despite these barriers, the benefits of CRC screening are widely acknowledged. Cultural and religious aspects have also been identified as factors influencing CRC screening. For example, fatalism was cited in three studies as a barrier to CRC screening, with participants stating that they had no control over cancer or when they were going to die. In Kazi et al.'s study, female participants expressed concerns about the cultural and religious stigma of stool tests, emphasizing that stool is unclean and dirty. Kazi et al. also reported that immigrants who do not have a culture of annual exams in their native countries hold the same attitudes in the USA. Ivey et al. found that some female participants hold cultural preconceptions that chronic diseases are more prevalent in men.
--- Sociodemographic Factors
The most common sociodemographic factors identified were gender, length of stay, education, language, and socioeconomic status.
--- Gender
Three studies showed that South Asian women are less likely than men to undergo CRC screening. This was attributed in one study to modesty and feelings of shyness regarding the exposure of private body parts to a male physician. However, Wong et al. found that males were significantly less likely to ever receive FOBT CRC screenings but were more likely to be up to date on colonoscopies. Negative attitudes toward CRC screening were also found to be more prevalent among men. The odds of undergoing CRC screening were shown to be higher among females in one study, but higher among males in another.
--- Length of Stay
The length of stay in the United States for South Asian immigrants affects their likelihood of undergoing CRC screening. Shorter lengths of stay were associated with lower CRC screening rates among South Asian immigrants in the United States. Rastogi et al. found that those who had lived in the US for less than 5 years had an odds ratio of 0.44 for up-to-date colonoscopy. Wong et al. showed that South Asian immigrants who had lived in the US for less than 15 years had an odds ratio of 0.56 for ever having FOBT screening for CRC. Manne et al. also reported that living in the US for less than 5 years was connected to decreased CRC screening uptake. Mukherjea et al. showed non-adherence to CRC screening to be higher in immigrants with less than 40% of their life spent in the US. Living in the United States for at least 5 years was linked to a ninefold increase in the likelihood of having an endoscopic screening test compared with living in the US for less than 5 years. An explanation suggested that it takes at least 5 years to obtain a permanent resident card, which allows eligibility for free or subsidized healthcare. Finally, non-adherence to CRC screening guidelines was lower among South Asians who had lived in the US for more than 60% of their life.
--- Education
Higher education was identified as a facilitator of CRC screening. Menon et al. also found that individuals who received sigmoidoscopy were more likely to have at least a high school education.
--- Language Barrier
Language is a key factor associated with CRC screening, especially in English-speaking countries (e.g., Mukherjea et al., 2022; Palmer et al., 2015; Rastogi et al., 2019). Thompson et al. found that patient-physician language discordance significantly decreased the odds of CRC screening. Non-adherence to CRC screening by USPSTF guidelines was lower among South Asians who spoke English at home.
--- Socioeconomic Status
Poverty level and employment status were identified as factors commonly associated with CRC screening rates. Unemployed or retired Asian Indians were less likely to get screened for CRC compared with their employed counterparts. Low income was described as a barrier to CRC screening in six studies. Menon et al. found that income was a predisposing predictor of endoscopic cancer screening. Moreover, Orbell et al. discussed that nonparticipation in FOBT uptake showed a linear association across the distribution of SES, with rates across five quintiles of 29.8%, 24.4%, 21.3%, 23%, and 18%. In the same study, self-efficacy and response cost were statistically significant direct predictors of participation; there were significant negative effects of age on self-efficacy and of gender on response cost, and a significant positive effect of gender on severity.
--- Interventions
In this review, ten articles related to interventions were examined. Six of the intervention studies were based on education. The implementation of various interventions for increasing CRC screening has been shown to have positive effects on screening behavior. Cullerton et al. stressed that culturally tailored educational programs improve attitudes and increase knowledge of CRC and intent toward CRC screening. In the study, prior to the session, only 25% of participants reported an intent to undergo FOBT screening, whereas after the session, this number increased to 49%. Manne et al.'s study included one-on-one sessions that were highly rated by participants. The intent-to-treat analysis showed a 30% uptake in CRC screening at the four-month follow-up, along with an increase in knowledge and a reduction in barriers to screening, such as worry about the screening process. Physician-led presentations on CRC screening were shown to have a positive impact on those who had not previously been screened, with 87% of participants expressing a high intent to get screened after the presentation. The distribution of culturally sensitive brochures has been shown to increase awareness of CRC screening among family and friends; in one study, those who took such brochures were willing to share the information with family and friends, and 86% of those who had not been screened agreed that they would be screened. So et al.
showed that a multifaceted intervention, consisting of a CRC screening presentation by a trained instructor, distribution of booklet information translated into different languages, and targeting younger family members to encourage their older family members to undergo CRC screening, resulted in higher FIT rates among participants in the intervention group compared to the control group. Community-based education events have also been effective in increasing knowledge and improving attitudes toward CRC screening. A follow-up conducted 6-12 months after the intervention showed that 78% of those who received the intervention had been screened in the last 12 months, whereas only 37% had been screened for CRC with any test prior to the intervention. In another study by Lofters et al., conducted with four primary care physicians, 60% of whose patients were South Asian, health ambassadors (HA) making phone calls to patients overdue for CRC screening at the offices of Physicians 1, 2, and 3 showed greater success in reaching a larger population compared to having HA provide education in the physicians' offices. However, at Physician 4's office, one-on-one education by HA showed that 65.2% of the patients spoken to were willing to be screened. Organized CRC screening programs have also been effective in increasing screening rates. Ghai et al. showed that almost all subgroups met an 80% target of being screened, including Asian Indian, Chinese, Vietnamese, Korean, Filipino, and Japanese populations. The program included mailed FIT kits to eligible health plan members aged 51-75 who were not up to date with CRC screening by colonoscopy or FSIG. These individuals were reminded through office/preventive health visits, mailed letters, and telephone messages. However, using electronic health record reminders was not shown to have an impact on CRC screening, although there was a 30% increase in the odds of cervical and breast cancer screening. Lofters et al.
explored the use of a geographic information system, including Local Indicators of Spatial Association (LISA) computed in the GeoDa software, in an ecological study (n = 76,314) to identify regions with a high population of South Asian individuals and low levels of CRC screening. This method was able to identify high-risk areas consisting of multiple neighboring census areas with low screening rates and large SA populations. --- Discussion This systematic review examines the barriers and facilitators to CRC screening among South Asian immigrants. While overall CRC screening has increased in recent years, rising from 64.7% in 2016 to 68.6% in 2018 for ages 50-75 years, South Asians have notably lower rates, ranging from 52% to 61.2%. These rates are the lowest among ethnic groups, including non-Hispanic white, non-Hispanic black, Hispanic, and other Asian groups. According to the literature, the major barriers to CRC screening among South Asian immigrants include poor knowledge/awareness of CRC and CRC screening, lack of physician recommendation, psychological factors, cultural/religious factors, and sociodemographic factors. In contrast, the greatest facilitators for increased screening are physician recommendation, frequency of physician visits, and having health insurance that covers CRC screening. A key aspect of interpreting the above data is recognizing that Asians, including Asian Americans, do not represent a homogenous group. Predictors of CRC therefore differ among Asian subgroups, including Filipinos, Chinese, and Asian Indians. The heterogeneity among Asian subgroups is also reflected in the disparities in CRC screening rates, where South Asians often have the lowest rates. Moreover, the South Asian population comprises heterogeneous subgroups, and subgroup screening rates differ by religion. Therefore, further research is necessary to better understand the screening patterns of the Asian population, including cultural and structural barriers to receiving care.
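The Local Indicators of Spatial Association approach used by Lofters et al. can be illustrated with a toy local Moran's I calculation. The screening rates and the row-standardised contiguity matrix below are invented for illustration; their actual analysis was performed in GeoDa:

```python
def local_morans_i(values, weights):
    """Local Moran's I for each area: I_i = z_i * sum_j(w_ij * z_j) / m2,
    where z are mean-centred values and m2 = sum(z^2) / n.
    `weights` is a row-standardised spatial weights matrix (list of lists)."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    m2 = sum(zi * zi for zi in z) / n
    return [z[i] * sum(weights[i][j] * z[j] for j in range(n)) / m2
            for i in range(n)]

# Five hypothetical areas: screening rates (%). Areas 0-2 are mutual
# neighbours with low rates; areas 3-4 are mutual neighbours with high rates.
rates = [40.0, 42.0, 41.0, 65.0, 63.0]
W = [
    [0, 0.5, 0.5, 0, 0],
    [0.5, 0, 0.5, 0, 0],
    [0.5, 0.5, 0, 0, 0],
    [0, 0, 0, 0, 1.0],
    [0, 0, 0, 1.0, 0],
]
I = local_morans_i(rates, W)
# All five toy areas form positive spatial clusters (every I_i > 0).
```

Areas with positive I_i and below-average rates (here, areas 0-2) are the "low-low" clusters that such an analysis would flag as priority targets for screening initiatives.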
In light of the significant gap in CRC screening rates in the South Asian population, the review recommends three areas of improvement: increased education, increased frequency of recommendations, and strategic planning of interventions. --- Targeting Underserved Regions Most at Risk Identifying the underserved regions most at risk is critical to intervention success: the lower the existing screening prevalence in an area, the greater the opportunity for an intervention to increase uptake. Although many studies examined CRC screening rates in South Asians, a few employed innovative approaches. For example, Sy et al. examined CRC screening prevalence utilizing Medical Expenditure Panel Survey national data, and Lofters et al. used population-level data to identify underserved areas. Lofters et al. also highlighted the need to focus on both patients and physicians, as physicians of certain demographics and educational backgrounds tend to screen their own patients at lower rates. However, their analysis was limited to a particular region. --- Increased Education About CRC Screening A lack of knowledge is a major barrier to CRC screening in SA communities. This lack of knowledge stems from various factors, such as a belief that screening is unnecessary or a lack of awareness about the disease. It is exacerbated by the fact that preventative health is not emphasized in the South Asian community, which is particularly problematic for a disease like CRC that often presents at later stages. Addressing this core issue remains challenging, and there is no clear consensus on the best educational medium. Both passive and active approaches can be successful. For example, resources such as brochures tailored to South Asian culture have been shown to be beneficial. These resources ultimately enable patients to learn on their own time without fear of judgment.
Active engagement, such as support groups, religious activities, or community events, can also have a positive impact on education. South Asian women, in particular, found that open group discussions were essential for discussing CRC screening, possibly because of the stigma surrounding the topic. --- Emphasis on Cultural Sensitivity The literature indicates that community/religious leaders should be involved in the process of promoting CRC screening, regardless of the medium used. Several articles emphasized the importance of utilizing a culturally sensitive approach. The reasoning appears to be that community/religious leaders are often attuned to the inner workings of their communities at the population level, and their respect and influence can significantly sway their constituents to undergo screening. As previously noted, understanding the gender composition of groups is also critical. Gender-specific focus groups, or matching patients with physicians based on gender, can increase patients' receptiveness to hearing about CRC screening. The effectiveness of patient resources is contingent on patients' willingness to receive information and their level of comfort with the medium used. --- Limitations A limitation of this review is the small number of studies on CRC screening among South Asian immigrants that met the inclusion criteria. Additionally, several studies treated South Asians as a homogenous study group, which may obscure the heterogeneity of this population. Caution should be exercised when interpreting results generalized across Asian groups. Furthermore, Asian Indian participants were overrepresented in most studies, which could mean the findings underrepresent other South Asian groups, such as Pakistani, Bangladeshi, and Nepalese populations. Further research is required to identify differences in barriers among subgroups of the South Asian population and to evaluate the efficacy of interventions.
Another limitation is that the majority of studies on CRC screening are US-based, which makes generalizing barriers and facilitators to CRC screening for South Asian immigrants worldwide challenging. In conclusion, the prevalence of CRC among South Asian immigrants in the United States is comparable to that of the general US population, and this holds true for South Asian immigrants in Canada and the United Kingdom compared to the general populations of those countries. However, South Asians have among the lowest CRC screening rates across all ethnicities in the US. Given that screening is critical for the early detection and treatment of CRC, healthcare providers and community leaders must assist South Asian immigrants in overcoming screening barriers. Poor understanding of CRC and CRC screening, lack of physician recommendation, cultural/religious factors, psychological factors, language barriers, lower income, and female gender are the barriers identified in this review. Physician recommendation, regular physician visits, and having health insurance are facilitators to getting screened. To increase CRC screening among South Asian immigrants, it is essential to help them obtain health insurance. Furthermore, healthcare providers, educators, and community leaders must offer culturally sensitive education to South Asian immigrants on CRC and strongly encourage them to undergo screening. --- Author Contribution Statement All the authors contributed to the preparation of the final manuscript. --- Conflict of Interest The authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter discussed in this manuscript.
Objective: The purpose of this systematic review is to broaden our knowledge of colorectal cancer (CRC) screening in South Asian immigrants living in Canada, Hong Kong, the United Kingdom, the United States, and Australia by determining the barriers and facilitators and examining interventions for CRC screening. Methods: A literature search of PubMed, Ovid Medline, and Google was conducted using South Asian, Asian Indians, cancer screening, colorectal neoplasm, early detection of cancer, and mass screening as search terms. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Only research articles written in English from 2000 to July 2022 were collected. Inclusion criteria included all English-language articles, the South Asian population, and either reporting barriers, facilitators, interventions, or recommendations for CRC screening. Exclusion criteria included all articles that did not meet inclusion criteria or were duplicates. A total of 32 articles were deemed eligible for inclusion and were retrieved for further analysis. The countries of origin in the articles reviewed included Canada, Hong Kong, the United Kingdom, the United States, and Australia. Results: In general, the studies indicated that South Asians have low CRC screening rates. The most common barriers reported were poor knowledge/awareness of CRC and CRC screening, lack of physician recommendation, psychological factors (e.g., fear, anxiety, and shame), cultural/religious factors, and sociodemographic factors (language barrier, lower income, and female gender). The most important facilitator reported was the physician's recommendation. Six intervention studies of either education or organized screening programs were shown to have a positive influence by increasing knowledge and improving attitudes toward CRC screening. 
Conclusion: Of the limited number of studies identified, the population categorized as South Asians was largely heterogeneous, including a diversity of ethnicities. Although the rates of CRC among South Asians were relatively low, there remain many cultural barriers to the awareness of and screening for CRC in this population. Further research in this population is needed to better identify the factors related to CRC in individuals of South Asian ethnicity. Recommending CRC screening by physicians and mid-level providers and educating patients with culturally sensitive programs and materials are important to increase knowledge and awareness of CRC and CRC screening.
to obtain their needs from more distant stores compared to households relying on public transportation. Thus, while the number of stores nearby and the physical distance could provide indications of levels of access during normal times, they are inadequate for examining disparities in accessibility during the preparedness stage. Due to disruptions in the road network 21 and power outages, as well as supply chain impacts, the time span for returning to pre-event conditions can range from a few weeks to more than a year. The extended period of disruption can significantly impact sub-populations who are unable to access facilities due to increasing trip distance and duration. Most of the current literature lacks a nuanced approach that accurately measures accessibility to grocery stores given the unique nature of the disaster context and that accounts for both physical and social factors. Previous literature defines access based on the number of available stores 6,[22][23][24][25][26] or a combination of the number of stores and distance from a specified area 21,27,28 . However, these measures are primarily based on physical distance and only partially characterize access to facilities. Approaches focused on physical distance, as used in the majority of current studies, are not reliable measures of accessibility during disasters, as disruption influences the availability of stores and the duration of trips. To begin, a shorter distance from a household to a grocery store may imply better accessibility since the household does not have to travel as far. However, examining distance as an isolated variable limits the holistic perspective needed to understand equitable accessibility. For instance, households living in more affected areas may have a higher number of closed stores due to storm damage, which decreases the availability of stores. Flooding events also exacerbate traffic congestion, which significantly increases total trip duration.
Although a household is normally a set distance from certain grocery stores, traffic congestion from closed roads or even construction repairs on damaged roads could significantly increase the travel time. Though these households may have similar distances to grocery stores, they may still spend more time reaching a grocery store to meet their needs. Failure to account for these multiple dimensions of access limits the characterization of access to facilities both during normal times and during disasters 6,23,24 . Thus, this research addresses the unique challenges brought on by disruption in order to properly capture the multiple dimensions of access. In contrast to previous approaches, this research uses a combination of access indicators to better understand the characteristics of access. A more nuanced characterization of access, based on measures related to population-facility network interactions, reveals a more accurate picture of access. A multitude of physical and social factors in the disaster context impact equitable access to grocery stores. The research specifically focuses on the inaccessibility of grocery stores through multiple access indicators while accounting for the disproportionate impact on socially vulnerable populations. While the extant literature recognizes the importance of access to facilities for community resilience in disasters [29][30][31] , there are limited empirical and observational insights into the impacts of disasters on access to grocery stores 5,32 . To address this knowledge gap, the research presents an innovative method that harnesses and analyzes location-based data to examine equitable access to grocery stores during different phases of a disaster.
In this study, we constructed and analyzed the population-facility network to examine access disparities to grocery stores in the context of Hurricane Harvey in 2017. In reviewing the literature on accessibility to critical facilities and the functionality of critical systems, we characterized access to facilities based on three distinct but complementary dimensions: redundancy in access, rapidity of access, and proximity of access. Redundancy is based on the number of unique stores visited, that is, the number of available stores open to consumers. Greater availability of critical facilities can lead to increased potential accessibility to important resources and needed services 22,23,26 . Rapidity of access is based on the duration of trips to stores, which accounts for traffic delays and road disruption. Quicker trips can represent the functionality and restoration of the system [33][34][35] . Proximity of access is based on distance to stores visited. Shorter distances could mean that consumers do not have to travel as far to reach important resources 27,28 . Accordingly, this research aims to answer the following research questions: What are the characteristics of access to grocery stores for different sub-populations at different stages of disasters? What is the extent of disparity in access among different sociodemographic groups in different disaster phases? What are the contributing factors to unequal access to grocery stores, and to what extent are facility distribution inequities, such as food deserts, indicative of access disparities? To address these research questions, we defined three distinct indicators for examining access and quantified these indicators based on location-based data related to people's visits to grocery stores in Harris County, Texas. --- Materials and methods We examined disparities in access to grocery stores based on the structure and attributes of population-facility networks.
The three steps of this methodology were: specifying distinct indicators for examining different dimensions of access, analyzing variations in the access indicators among different sub-populations across three phases of the disaster, and evaluating the factors influencing access disparity. First, we utilized node-level and link-level access metrics to capture different aspects of access in population-facility networks: unweighted degree based on the number of visited grocery stores, weighted degree based on travel time, and weighted degree based on trip distance. Second, we evaluated fluctuations in the patterns of each measure across different spatial areas to find variations in access at different stages of the disaster. Third, we specified disparities in access to grocery stores in the face of a disaster and evaluated factors contributing to such disparities. --- Study context. This study examined access to grocery stores in the context of Hurricane Harvey, which made landfall in Harris County, Texas, in August 2017. The storm dropped more than 60 inches of rain, triggering immense flooding and causing severe infrastructure disruptions 36,37 . The Harris County area is prone to flooding and is frequently impacted by service disruptions caused by flooding events, such as transportation disruptions and power outages. The direct flooding impact on grocery stores and the loss of access due to disruptions in infrastructure systems, such as road inundations and power outages, caused significant disturbance to people's access to grocery stores. Furthermore, Harris County, within which the city of Houston is located, is a metropolitan area encompassing populations with a diverse range of sociodemographic characteristics, providing a representative testbed for examining equitable access to grocery stores in the face of natural hazards. --- Data.
Data sources collected and analyzed in this study include location-based mobility data from StreetLight Data, points-of-interest (POI) visit data from SafeGraph, sociodemographic information from a 5-year estimate of the American Community Survey of the U.S. Census Bureau, Federal Emergency Management Agency (FEMA) flooding data, and the Food Access Research Atlas developed by the United States Department of Agriculture (USDA). These data sources were aggregated at the census-tract level to construct the population-facility network models and to calculate access indicators and subsequent analyses. A detailed description of these data sources is provided below. Location-based mobility data. The mobility data are provided by StreetLight Data, a commercial platform that provides origin-destination (O-D) analysis data. Our analysis aggregated anonymized data from cell phones and GPS devices to create travel metrics, such as duration and distance 38 . The O-D network of visits to grocery stores in Harris County was examined in this study from August 1 through September 30, 2017. Figure 2a shows the network of trips from traffic analysis zones (TAZs) to grocery stores during the second week of August 2017. These data incorporate trips using different modes of transportation, including personal cars and public transit. By analyzing more than 40 billion anonymized location records across the United States in a month and enriching the analysis with other sources, such as digital road network and parcel data 39 , StreetLight Data is capable of reaching approximately a 23% penetration rate 40 , covering distinct census divisions in North America. Facility location data. POI data from SafeGraph were used to identify the location of facilities in this study. SafeGraph obtains location data by partnering with several location-based mobile applications. The data include basic information such as names, geographical coordinates, addresses, and North American Industry Classification System codes 41 .
In this study, the top facility categories related to grocery stores, specialty food stores, and general merchandise stores were considered as the grocery stores in Harris County. Then we manually filtered the facilities to ensure that these POIs are stores used by the Harris County residents to obtain grocery needs. Figure 2b shows the distribution of these POIs in Harris County. Sociodemographic characteristics. Sociodemographic characteristics of census tracts were determined by collecting data from the demographic characteristics estimate over the 2015-2019 period of the American Community Survey. The sociodemographic characteristics of the TAZs were determined from their overlap with census tracts. Figure 2c shows the income level of the TAZs using the census tract data from the 5 year estimate of the American Community Survey. Sociodemographic characteristics, such as socioeconomic status, household composition, minority status, and transportation from the census data, were used to examine grocery store access disparity among different sub-populations. Flooding data. The Harvey Flood Depth Grid/Flood data from FEMA were adopted in this study to identify the flooded areas in examining the impacts of flooding on access 42 . The dataset was developed using the gage points from the National Weather Service and the terrain data from USGS. In this study, the flood data were examined to determine the flooding extent in each TAZ. Then those areas with a flooding extent greater than 6.5% of the area, which is the 75th-percentile flooding level in Harris County, were marked as flooded to further examine the relationship between flooding status and disrupted access to grocery stores. Food access. The Food Access Research Atlas developed by USDA was used to identify areas labeled as having poor access to supermarkets in Harris County. 
According to the definition provided by the atlas 43 , a census tract is considered to have poor access to grocery stores if at least 500 residents, or at least 33% of residents, live more than 1 mile (in urban areas) or more than 10 miles (in rural areas) from the closest supermarket, supercenter, or large grocery store. This information was used to interpret the findings and to understand the extent to which poor access to grocery stores contributes to the variation of access in the face of a disaster. --- Method. The analyzed data are an aggregation of the trip information from the users in home TAZs to the visited grocery stores. Based on the duration and time that a user spends in a particular location, StreetLight determines the home TAZ of the user. As an example of the aggregation, StreetLight first determines that there are 100 users residing in home TAZ A. Each user has a unique, one-way trip, which is used to calculate the duration and distance between the home TAZ and the visited grocery store. An aggregated outcome could be that 40 users from home TAZ A visited Grocery Store A and 60 users from TAZ A visited Grocery Store B, where each user has a distinct duration and distance for their trip. First, we constructed the population-facility network models of the study area. Population nodes are TAZs, facility nodes are individual POIs, and the links represent trips. Three separate network models were created: one with unweighted links, one with links weighted by trip duration, and one with links weighted by trip distance. Accordingly, network-based metrics were used for examining the spatial and temporal patterns of access to grocery stores. Then, the variations in access to grocery stores during the different disaster phases were analyzed to characterize access to grocery stores, to examine access disparities, and to identify the factors contributing to such disparities. --- Access indicators. In this study, we used network metrics to quantify three distinct access indicators.
These metrics, derived from the population-facility network models of visits to grocery stores, comprise topological and structural properties of the network of grocery visits. The three access metrics are: the unweighted degree of TAZs; the weighted degree of TAZs based on trip duration; and the weighted degree of TAZs based on distance to stores. These metrics capture complementary characteristics of access and are indicators of different access dimensions. The unweighted degree of TAZs captures redundancy in access to grocery stores; the weighted degree based on duration captures rapidity of access; and the weighted degree based on distance captures proximity. Table 1 summarizes the access indicators, dimensions of access, and the equations for measuring them based on the population-facility spatial networks. In these equations for calculating metrics for each TAZ $i$, $a_{ij}$ are the elements of the adjacency matrix, and $di\_w_{ij}$ and $du\_w_{ij}$ are the link weights based on the distance and duration of the trips from the TAZs, respectively, while $k_i = \sum_j a_{ij}$ represents the total number of trips. The access indicators capture properties of access in different phases of the disaster. The number of unique visits to POIs informs about available options, which could provide redundancy in the face of disasters. The other access indicators, which capture the rapidity and proximity access dimensions, are influenced by human activities and infrastructure conditions. Table 2 summarizes the interpretation of these access indicators during the normal period, preparation, and impact/short-term recovery period. The normal period establishes the baseline access indicators. However, the preparation and impact/short-term recovery interpretations focus on the variation in access compared to the baseline period. This comparison with the baseline period provides insights regarding the effects of disturbance on residents' access to stores during the preparation and impact/short-term recovery period.
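Concretely, the three indicators reduce to degree computations on the TAZ-to-store links. The sketch below is a minimal Python illustration, not the authors' implementation; the TAZ and store names and the trip values are hypothetical.

```python
# Hypothetical one-way trips from a home TAZ: (home_taz, store, duration_sec, distance_mi).
trips = [
    ("TAZ_A", "Store_A", 600.0, 2.0),
    ("TAZ_A", "Store_A", 660.0, 2.0),
    ("TAZ_A", "Store_B", 900.0, 4.0),
]

def access_indicators(trips, taz):
    """Redundancy (unique stores visited), rapidity (average trip duration),
    and proximity (average trip distance) for a single TAZ."""
    stores = set()
    k = 0             # k_i: total number of trips from the TAZ
    dur_sum = 0.0     # accumulates the duration weights (du_w)
    dist_sum = 0.0    # accumulates the distance weights (di_w)
    for home, store, dur, dist in trips:
        if home != taz:
            continue
        stores.add(store)
        k += 1
        dur_sum += dur
        dist_sum += dist
    return len(stores), dur_sum / k, dist_sum / k

redundancy, rapidity, proximity = access_indicators(trips, "TAZ_A")
```

Here the unweighted degree counts unique stores, while the two weighted degrees are trip-weighted averages of duration and distance, matching the $\sum_j du\_w_{ij}/k_i$ and $\sum_j di\_w_{ij}/k_i$ forms when each trip is a link.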
Examining spatial and temporal patterns of variation in access. The examination of access to grocery stores based on the adopted access indicators was conducted across three disaster phases: prior to the event, during the period from the formation of Hurricane Harvey until just before landfall, and during the landfall and recovery. The spatial and temporal patterns of impact and short-term recovery are evaluated to discover disparate access during different disaster phases. Unsupervised learning approaches for time-series clustering then characterize patterns of impact and short-term recovery on the access indicators. Finally, we performed statistical analysis to evaluate the influence of different factors on residents' access to grocery stores at different stages of the disaster. The analysis steps are explained in the remainder of this section. Constructing the population-facility spatial network. First, we constructed the population-facility network of commutes from TAZs to grocery stores. This TAZ-POI network is a directed and bipartite network mapped based on geographic coordinates 44 . The constructed network represents trips from TAZs to POIs as links with weights based on distance and duration metrics. The three access indicators are determined based on daily aggregated trip numbers on each link between TAZ and POI pairs. Calculating the percentage change of access indicators. The percentage change of each access indicator is calculated by comparing the daily values with a defined baseline. The baseline period for each indicator at a TAZ is calculated considering each day as a unit, as a weekly pattern was observed in the data. The baseline period includes three weeks to define the weekdays' baseline values. Then the percentage change for each TAZ was calculated based on the following equation:

$$Pc_{i,d} = \frac{M_{i,d} - B_{i,d}}{B_{i,d}}$$

where $Pc_{i,d}$ is the percentage change in the access indicator at TAZ $i$ on date $d$, $M_{i,d}$ is the access indicator, and $B_{i,d}$ is the calculated baseline value for the weekday corresponding to date $d$. The resulting time series is then used for examining the variations in access to grocery stores across different phases. Table 1 gives the equations for these indicators: rapidity is measured by the average weighted degree of duration, $\sum_j du\_w_{ij} / k_i$ (average trip duration in seconds), and proximity by the average weighted degree of distance, $\sum_j di\_w_{ij} / k_i$ (average trip distance in miles), where $k_i = \sum_j a_{ij}$. Table 2 interprets each indicator by phase: increased redundancy means more visits than baseline, reflecting higher preparation levels before landfall or greater recovery activity afterward; greater duration means lower rapidity due to longer commutes or traffic congestion, and a lower rapidity than baseline indicates a greater impact on access; greater distance means lower proximity and thus greater effort to obtain supplies, and a lower proximity than baseline likewise indicates a greater impact on access. Dealing with missing data. To deal with missing data on certain dates for the three indicators, we first filtered out those TAZs which did not have data for 5 consecutive days. Then, the Kalman imputation method was applied to the data to deal with the missing data in the time series. The ImputeTS package in R was used to implement the algorithm on the univariate time series for all the access indicators 45 . The Kalman filter method uses structural time-series ideas, where the system is outlined by a well-defined model with unknown parameters 46 . The maximum likelihood approach was implemented to determine the time-dependent model parameters 47 .
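The percentage-change computation $Pc_{i,d} = (M_{i,d} - B_{i,d}) / B_{i,d}$ with a weekday-matched baseline can be sketched as follows; this is a minimal illustration, and all the numbers are hypothetical rather than taken from the study's data.

```python
# Percentage change of an access indicator against a weekday-matched baseline,
# Pc = (M - B) / B. Baseline values per weekday are the average over the three
# baseline weeks, reflecting the weekly pattern observed in the data.

def weekday_baseline(baseline_weeks):
    """baseline_weeks: list of weekly lists, each with 7 daily values (Mon..Sun)."""
    return [sum(week[d] for week in baseline_weeks) / len(baseline_weeks) for d in range(7)]

def percent_change(value, weekday, baseline):
    b = baseline[weekday]
    return (value - b) / b

weeks = [
    [10, 10, 10, 10, 10, 12, 12],
    [10, 10, 10, 10, 10, 12, 12],
    [10, 10, 10, 10, 10, 12, 12],
]
base = weekday_baseline(weeks)
pc = percent_change(15, 0, base)  # a Monday with 15 visits vs. a weekday baseline of 10
```

Comparing each day to the baseline for the same weekday avoids mistaking the ordinary weekday/weekend cycle for a disaster effect.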
--- Time-series clustering of TAZs based on their access indicators. We implemented time-series clustering algorithms on the access indicators to characterize and examine the spatial and temporal variations. First, a 3-day moving average was applied to the time-series data of each access indicator to extract the trend and limit noise. Then, partitional clustering using dynamic time warping (DTW) distance, as implemented in dtwclust in R 48 , was used to perform the time-series clustering. Partitional procedures were treated as optimization problems that maximize the inter-cluster distance and minimize the intra-cluster distance 49 . DTW is a widely used approach for defining distance in time-series data 50 . In this study, we implemented a multivariable time-series clustering approach for classifying the TAZs. The 3-day moving average time series of the access indicators for each TAZ were used to identify the TAZ clusters. Then, two cluster validity indices, COP and modified Davies-Bouldin, were used to determine the number of clusters 51 . --- Results The calculated access indicators for the TAZs were used to examine access at three disaster phases: the pre-disaster condition, during preparation, and during the disturbance and recovery of the affected areas. The analysis covers the period between August 1 and September 15, 2017. The pre-disaster period covers the period immediately prior to landfall, before any disturbance had occurred to residents' access to grocery stores. This period captures access to grocery stores during normal conditions. Following the formation of Hurricane Harvey and the issuance of the hurricane watch on August 23, preparation activities were initiated, and some grocery stores were out of stock due to increased demand. Finally, Hurricane Harvey made landfall in Harris County on August 25, 2017, causing several road inundations and disrupting access to grocery stores, which affected Harris County residents' access. Pre-disaster normal period.
To understand the key characteristics of access under normal conditions, we analyzed the period before the Harvey landfall. Dynamic clustering was used to identify the distinct clusters of TAZs with similar access characteristics for each indicator. The results show the patterns of access to grocery stores for the different TAZs in Harris County. Figure 3 depicts the identified clusters for the three access indicators together with the maps of these clusters. These plots also show a spatial clustering pattern, which suggests that proximate TAZs have similar access characteristics. The number of visited stores indicator, which captures the redundancy dimension of access, shows a spatial clustering pattern in some areas. The TAZs in the east and south of Harris County show a lower number of visited stores, which translates to a low redundancy of access to grocery stores. The difference in the redundancy of access to grocery stores could be related to the number of nearby stores, the size of the stores, and households' lifestyles and shopping behaviors. On the one hand, the more stores available in the proximity of a TAZ, the more visits would take place during normal conditions. On the other hand, the size of stores could also affect the frequency of visits, as larger grocery stores with a larger, more varied inventory could meet the diverse needs of households; one visit would be adequate to satisfy the weekly needs for groceries and other supplies. Furthermore, some households rely on coupons or have lifestyles that require them to follow certain shopping behaviors that affect the number and type of grocery stores they visit. Therefore, the properties of the facilities, their distribution, and the lifestyle characteristics of the households could influence the access redundancy indicator.
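The dynamic clustering used here rests on two building blocks described in the Methods: a 3-day moving average to smooth each indicator and the DTW distance between the smoothed series. The study used dtwclust in R; purely as an illustration, a pure-Python sketch of both:

```python
def moving_average(series, window=3):
    """Trailing moving average; early points average over the shorter available window."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series,
    using the standard DP recurrence with absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Unlike Euclidean distance, DTW aligns series that follow the same trajectory with a time lag, which is why it suits comparing TAZs whose access dipped and recovered at slightly different dates.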
Proximity and rapidity indicators also show the presence of some spatial clusters, which suggests that TAZs have access indicators similar to those of their neighboring TAZs. Examining the three access indicators together, however, shows little overlap among the clusters for the three indicators. This result shows the spatial heterogeneity of TAZs in terms of their three access indicators. Figure 3d shows the aggregated map and clusters of TAZs based on the three indicators. In this map, the high and low clusters of the access indicators are used for the characterization of access during normal times. The most frequently occurring categories are those with a low number of visits, low duration, and low distance, followed by those areas with a high number of visits and low duration and distance. Both of these clusters indicate good access. These categories with good rapidity and proximity access are more concentrated in high-income areas west and southwest of downtown. The large TAZs in the north are the next most frequently occurring clusters, with high levels of duration and distance and a low number of visits, which indicates poor access in terms of proximity, rapidity, and redundancy. The associations of the sociodemographic characteristics of the TAZs, as well as the facility distribution characteristics, with the access indicators were examined to evaluate factors influencing spatial variations of access patterns. The results show that the identified clusters have distinguishable demographic and facility distribution characteristics. Table 3 shows the results of comparing the sociodemographic and facility distribution characteristics related to all access indicators for the different clusters. ANOVA and chi-square tests were implemented to test whether differences between the clusters are statistically significant at the 0.05 significance level.
The results for the number of visited POIs indicator show that cluster 1, which had the highest number of unique stores visited, had a lower socioeconomic status. These groups of people have diminished access to big supermarkets and are more likely to buy their needed supplies using coupons from different stores to save money. In addition, cluster 1 has more stores in its TAZs and also a lower chance of being located in a food desert compared to other clusters. The availability of stores also partly explains the greater number of unique grocery stores visited by residents of TAZs in this cluster. The results related to the proximity and rapidity access dimensions do not show a clear disparity with respect to the sociodemographic characteristics. In fact, the results show that TAZs with a better socioeconomic status have lower proximity and rapidity access dimensions. This pattern could be due to the fact that these TAZs are more likely located in residential areas, and their residents have a greater capability to commute longer distances to obtain their needed supplies from specific stores. In addition, these clusters are located in areas with a lower number of stores and a higher chance of being located in a food desert. Preparation. The percentage change in access indicators in comparison with the defined baselines was examined to assess the disparities in access to grocery stores in the preparation phase. In the time span between the identification of Hurricane Harvey and its landfall in Harris County, residents attempted to store supplies, such as food and bottled water, to protect their households against hardship. Since there was limited mandatory evacuation ordered by public officials, most residents sheltered in place; thus, storage of food and bottled water was critical to riding out the storm.
Many grocery supplies were out of stock due to the high demand and the shelter-in-place protective behavior of the residents, which disturbed the access patterns to grocery stores. We analyzed the percentage change in the access indicators to identify disparities in access during the preparation phase. The percentage change of the access indicators on the day before the hurricane landfall was chosen to examine disparities in access patterns. The examination of the daily patterns of access indicators showed that their greatest percentage change occurred on this date, and residents showed a high level of activity in obtaining grocery supplies for their households. To explain the variations in the access indicators during the preparation period, we first examined the patterns in the pre-disaster level for the three indicators together with income level. The examination of the association of these two factors with the percentage change in the indicators reveals a statistically significant relationship between the access patterns in the normal condition and the percentage change of these indicators during the preparation stage. Results show that those with lower accessibility based on the three indicators, meaning fewer unique POIs visited, longer duration, and longer distance, had a higher percentage change during the disaster when compared to normal conditions. This means that those who visited a small number of POIs greatly increased their number of unique visits. A similar pattern exists in the rapidity dimension, as the TAZs with a longer trip duration in the normal condition showed a lower increase in the percentage change. The increase in trip distance also shows that those TAZs which had a shorter distance in the normal condition had a greater increase in the preparation period.
Within the low, median, and high levels of accessibility in the normal condition, the low-income and high-income TAZs were compared with respect to their new levels of accessibility in the preparation stage. The results show disparate access to grocery stores during the preparation phase of Hurricane Harvey. Figure 4 shows the boxplots of the percentage changes in the access indicators. ANOVA and Tukey's tests show that there is a significant difference in the percentage change of the access indicators across income groups at the 0.05 significance level. Regarding the redundancy indicator, results show that the high-income group had a significantly higher increase in visits to unique stores compared to lower-income groups. This result indicates that high-income groups were able to improve their redundancy by visiting more unique stores during the preparedness stage to supply their households with adequate groceries before the hurricane made landfall in Harris County. Regarding the rapidity indicator, lower-income groups who already had long-duration trips in the normal condition had a disproportionate increase in their trip durations during the preparation period compared to high-income TAZs. Regarding the proximity indicator, high-income groups who initially had a low distance to grocery stores during the normal condition had the greatest increase in their distance during the preparation stage. This shows the capability of high-income groups to commute farther to obtain their supplies. However, the increase in the percentage change of the distance is not significantly different across the income groups for TAZs with low proximity in the normal condition. In the next step, we examined the effect of facility distribution factors, namely the number of POIs in a TAZ, on the access indicators.
First, the correlation between the number of POIs in a TAZ and the access indicators shows only a slight but significant association between the percentage increase in redundancy, or number of unique visits, and the number of POIs. TAZs with more POIs do not show a large increase in visits, as they already provide more options for residents. However, the associations between the number of POIs and the rapidity and proximity metrics are not significant, indicating that the change in these access indicators during the preparedness stage is not associated with the number of POIs. This result shows that, during the preparation stage and due to the surge in demand for grocery supplies, residents needed to take longer and farther trips to POIs outside their TAZs to obtain supplies, regardless of the number of grocery stores in their own proximity. Thus, residents who could not increase their access distance may not have been able to adequately prepare for the impending hazard. In the next step, we categorized TAZs into four categories based on income in conjunction with location in a food desert to understand the effect of these factors on the access indicators. Figure 5 shows boxplots of the percentage change in the access indicators; the ANOVA test suggests a significant difference in access across the four categories. The results show an interaction between income and food desert status in access to grocery stores. High-income TAZs located in a food desert show a higher number of visited POIs, while their trip duration and distance are not significantly higher than those of low-income TAZs in a food desert. Within food deserts, these high-income TAZs were able to visit more stores to meet their grocery needs without having to increase their trip distance and duration. Thus, being located in a food desert, while affecting people's access to grocery stores to some extent, does not have an equal impact on the access of different income groups. 
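A correlation check of this kind can be sketched with a Pearson test. The data below are synthetic, constructed to mimic a slight negative association between POI count and the percentage increase in unique visits; this is not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Hypothetical TAZ-level data: number of grocery POIs per TAZ and the
# percentage change in unique visits (redundancy) during preparation.
n_pois = rng.integers(1, 20, size=200)

# Build in a weak negative relationship plus noise, mirroring the finding
# that POI-rich TAZs saw a smaller increase in visits.
pct_change_visits = 50 - 1.5 * n_pois + rng.normal(0, 15, size=200)

r, p = pearsonr(n_pois, pct_change_visits)
print(f"r={r:.2f}, p={p:.3g}")   # slight but significant negative association
```

The same call applied to the rapidity and proximity changes would, per the reported results, return non-significant p-values.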
Meanwhile, high-income TAZs not in food deserts were also able to access farther POIs, meaning high-income groups could access grocery stores both within and beyond their immediate proximity. Integrating food desert status with the sociodemographic information further reveals the access inequalities. This result also shows that being located in a food desert is not an adequate basis for evaluating residents' access to grocery stores during the preparation stage of disasters; access during this stage is more influenced by the dynamics of human protective actions, which are shaped by capabilities. Impact/short-term recovery. To understand the dynamic spatiotemporal patterns of impact and short-term recovery of access to grocery stores, we conducted multivariable temporal clustering on the three access indicators. The results indicate two clusters of access patterns among TAZs whose access was affected by flooding. Figure 6 shows the dynamic pattern of the access indicators for the identified clusters. TAZs in cluster 2, depicted in blue in Fig. 6a, have better access: they show a greater increase in the visited-POIs indicator but a smaller increase in the distance and duration indicators during the recovery period. In particular, the proximity indicator shows a smaller decrease in cluster 2 compared to cluster 1. These patterns suggest that while cluster 2 TAZs visited more stores, their trip duration and distance did not increase significantly compared to cluster 1. Cluster 2 TAZs thus had better access, as they could meet their needs by visiting more stores without significantly increasing their trip distance. Increased redundancy without decreased proximity is an indication of better access for cluster 2. 
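The multivariable temporal clustering step can be illustrated with a minimal stand-in: flatten each TAZ's multivariate daily series into one feature vector, standardize, and run a plain 2-means. This is not the authors' implementation (time-series clustering often uses dedicated distances such as dynamic time warping), and all data and dimensions below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily series (30 days) of the three access indicators for 60 TAZs,
# with two latent access behaviours built in for illustration; the study
# clustered observed indicators for flood-affected TAZs.
n_taz, n_days = 60, 30
group = np.repeat([0, 1], n_taz // 2)
level = np.where(group[:, None] == 0, 1.0, 2.0)   # two differing access levels
series = np.stack([level + rng.normal(0, 0.1, (n_taz, n_days))
                   for _ in range(3)], axis=2)    # shape: (TAZ, day, indicator)

# Multivariable clustering: flatten each TAZ's (day x indicator) matrix into
# one vector, then standardize each feature.
X = series.reshape(n_taz, -1)
X = (X - X.mean(0)) / X.std(0)

# Plain 2-means with deterministic initial centroids (first and last TAZ).
centroids = X[[0, -1]].copy()
for _ in range(20):
    dist = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (TAZ, k)
    labels = dist.argmin(1)
    centroids = np.stack([X[labels == j].mean(0) for j in (0, 1)])

print(np.bincount(labels))   # sizes of the two access-pattern clusters
```

Because clustering is done jointly over all three indicators, a cluster captures a combined pattern (e.g. rising redundancy with stable proximity) rather than the trajectory of any single metric.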
Figure 6d maps the identified clusters; the map suggests that the TAZs in each cluster form spatial cliques, which show distinct patterns while lying in proximity to the cliques of the other access cluster. To further examine factors affecting access patterns in each cluster, we examined sociodemographic characteristics and facility distribution characteristics, such as the number of stores and location in a food desert. Figure 7 compares the sociodemographic characteristics and facility distribution of the two clusters; the boxplots result from comparisons among the TAZs in each cluster. The results suggest that the TAZs in cluster 2, with better access, have higher income and a lower percentage of minorities. This indicates that in the aftermath of disasters, socially vulnerable populations have less access to grocery stores. In addition, although TAZs in cluster 2 had a higher chance of being in a food desert and a lower number of POIs within their boundaries, they had better access in terms of improved redundancy without decreased proximity. The distribution of income levels within food deserts in the two identified clusters was examined with a proportion test. The results show that in cluster 2, the proportion of high-income TAZs in food deserts is significantly higher than that of low-income TAZs in food deserts, with proportions of 0.53 and 0.25, respectively. No such distribution existed in cluster 1, with proportions of 0.57 and 0.47 for high-income and low-income TAZs, respectively. Thus, the high proportion of higher-income areas in food deserts allowed for better access in cluster 2 than in cluster 1, despite a higher level of being in a food desert. 
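A standard way to run the proportion comparison described here is a two-proportion z-test. The helper below implements the usual normal-approximation formula from scratch; the counts are hypothetical, chosen only to mirror the reported proportions (0.53 vs. 0.25), since the actual TAZ counts are not given in the text.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                     # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return z, p_value

# Hypothetical counts mirroring the reported cluster-2 proportions of high-
# vs low-income TAZs located in food deserts (0.53 vs 0.25).
z, p = two_proportion_z(x1=53, n1=100, x2=25, n2=100)
print(f"z={z:.2f}, p={p:.4f}")   # a clearly significant difference
```

Running the same test on counts mirroring cluster 1 (0.57 vs. 0.47) would yield a much smaller z and a non-significant p, consistent with the reported contrast between the clusters.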
These results suggest that the food desert designation is not adequate to capture residents' access to grocery stores in the aftermath of disasters, and more data-driven insights are needed to evaluate the access of different sociodemographic groups to critical facilities in times of disaster. Furthermore, a comparison of the extent of flooding in the TAZs does not show a significant difference between the two clusters. This insignificant difference shows that the diminishment of access to grocery stores goes beyond the extent of flooding; rather, sociodemographic and facility distribution characteristics explain the disparate access to grocery stores for the two groups. --- Discussion and concluding remarks In this study, we analyzed high-resolution location-based data to characterize and uncover disparities in access to grocery stores before, during, and after disaster events. We developed and implemented access indicators based on the population-facility network structure and analyzed their spatial and dynamic patterns to characterize access to grocery stores and reveal the presence of disparities at different stages of a disaster. The high-resolution location-based data enabled capturing the patterns of residents' trips to grocery stores and facilitated the characterization of access based on observational data. Current approaches that examine access through physical, distance-based metrics fail to provide adequate and reliable information about the properties of access during a disaster to enable decision-making for equitable access. The key findings of the research show the importance of characterizing access along the three dimensions of redundancy, proximity, and rapidity, since their patterns differ markedly between normal conditions and the disaster setting. 
Under normal conditions, areas of lower socioeconomic status have comparatively higher redundancy and slightly lower levels of proximity and rapidity compared to those of higher socioeconomic status. A possible explanation is that lower socioeconomic groups tend to have less access to large supermarkets that provide a wide variety of groceries and other supplies, making them more dependent on visiting multiple stores; this sub-population is also more dependent upon public transportation and less capable of making longer commutes. In addition, a combined examination of the dynamic patterns of the access indicators revealed that the disaster disproportionately exacerbated access disruptions for socially vulnerable groups in the context of the 2017 Hurricane Harvey in Harris County, Texas. In the preparation period prior to the disaster, higher-income areas had a greater increase in the number of unique visits and the distance traveled. This reveals their higher capability to take protective actions and seek essential resources for preparation in comparison to lower-income areas. This pattern shifts in the impact/short-term recovery stages. Areas with higher income and a lower percentage of racial minorities maintained their higher redundancy, meaning they were able to access more stores, similar to the preparation period; at the same time, these areas had a smaller decrease in distance and shorter trip durations compared to areas with lower income and larger minority populations. Thus, these groups could visit more stores without longer commutes. The examination of different dimensions of access in the context of disasters confirmed the inadequacy of physical measures of access, such as location in a food desert and the number of available stores in an area, for understanding access in disasters. 
The research findings show that while these factors are better able to explain variations in access during normal times, they fail to explain access characteristics during the different stages of a disaster. The multivariate analysis of access showed that areas with higher income and a lower percentage of racial minorities have better access to grocery stores. However, higher-income areas tended to have fewer stores in their area and a higher likelihood of being in food deserts. This could be because higher-income populations live in residential areas that are more separated from grocery stores; however, they have the capability to travel farther and visit more stores. These findings indicate that, while focusing on facility distribution characteristics is useful for understanding disparities under normal conditions and for purposes of urban development, the standard physical measures of access, such as location in a food desert and the number of available stores in an area, are not sufficient for understanding access to facilities in the context of disasters. Figure 7a-c show the unequal levels of income, percentage of minorities, and percentage of elderly in the two clusters; the proportion of TAZs in a food desert is higher in cluster 2, and there are more grocery stores in the red cluster. Lastly, the level of flooding is not statistically different between the two groups. --- Data availability The data that support the findings of this study are available from SafeGraph and Streetlight Data, but restrictions apply to the availability of these data, which were used under license for the current study. The data can be accessed upon request submitted to SafeGraph and Streetlight Data. Other data used in this study are all publicly available. --- Code availability The code that supports the findings of this study is available from the corresponding author upon request. 
Received: 27 April 2022; Accepted: 31 October 2022 --- Competing interests The authors declare no competing interests.
Natural hazards cause disruptions in access to critical facilities, such as grocery stores, impeding residents' ability to prepare for and cope with hardships during the disaster and recovery; however, disrupted access to critical facilities is not equal for all residents of a community. In this study, we examine disparate access to grocery stores in the context of the 2017 Hurricane Harvey in Harris County, Texas. We utilized high-resolution location-based datasets in implementing spatial network analysis and dynamic clustering techniques to uncover the overall disparate access to grocery stores for socially vulnerable populations during different phases of the disaster. Three access indicators are examined using network-centric measures: number of unique stores visited, average trip time to stores, and average distance to stores. These access indicators help us capture three dimensions of access: redundancy, rapidity, and proximity. The findings show the insufficiency of focusing merely on the distributional factors, such as location in a food desert and number of facilities, to capture the disparities in access, especially during the preparation and impact/short-term recovery periods. Furthermore, the characterization of access by considering combinations of access indicators reveals that flooding disproportionally affects socially vulnerable populations. High-income areas have better access during the preparation period as they are able to visit a greater number of stores and commute farther distances to obtain supplies. The conclusions of this study have important implications for urban development (facility distribution), emergency management, and resource allocation by identifying areas most vulnerable to disproportionate access impacts using more equity-focused and data-driven approaches. Natural hazards can disrupt access to healthcare, pharmacies, and grocery stores. 
Households with easier access to these critical facilities could achieve a higher level of preparation and short-term recovery effort 1,2 and thus be more capable of withstanding the adverse impacts of a disaster 3-5 . In particular, better access to grocery stores is critical for obtaining food and water during and in the aftermath of disasters; disrupted access to grocery stores could therefore adversely affect residents' well-being. Even in non-disruptive periods, not all residents enjoy equal access to grocery stores, which results in disparate impacts on residents' well-being in their day-to-day lives. For instance, areas with a dearth of retail establishments selling nutritious food, known as food deserts, are endemic to lower socioeconomic neighborhoods. However, such measures of food inaccessibility are rather limited for examining disparate impacts during disasters compared to normal conditions 6 . Access inequality among sub-populations could be further exacerbated during disasters due to disruptions in road networks 7,8 as well as residents' capabilities and lifestyle patterns 9,10 . Certain vulnerable areas face a greater impact from road closures and damage to stores, which may disproportionately disrupt access to grocery stores [11][12][13][14] . In addition, people of lower socioeconomic status usually have fewer resources and capabilities to find alternative sources of food and water to compensate for such disrupted access [15][16][17][18] . Indeed, the disaster setting can cultivate a supply-demand imbalance among different sub-populations due to protective actions of residents 3 and disruptions in infrastructure and supply chains 19 . In preparation for an impending disaster, people who choose to shelter in place tend to stockpile supplies in anticipation of several days of disruptions, with the adverse effect of depleting grocery store inventories 20 . 
This surge in demand for supplies leads grocery stores to run out of stock and forces people to visit multiple stores at farther distances. When faced with local shortages of supplies, households with personal vehicles have the means to travel further
Introduction The health of South Africans is an issue of great significance to politicians, those involved in health promotion efforts, and ordinary citizens. South African health research often tends to focus on illness and how best to manage national health problems such as HIV, TB, cardiovascular disease, obesity, and most recently, the Coronavirus pandemic. Less research has focused on how South Africans make sense of the concept of health and the discourses they draw on to shape how 'healthiness' is understood and experienced. These topics have implications for a range of health-related endeavours, including health-promoting interventions and the communication of health information. Discourses of health also play a significant role in the constitution of identity, which has both personal and political implications. First, these discourses shape how individuals may feel about themselves and others and, in turn, influence how they behave. Second, the reproduction of specific health discourses may, either purposefully or inadvertently, support or undermine broader societal structures. The broader project on which this paper is based sought to answer the questions: how do young South African adults discursively construct health, and what implications do these constructions have for the constitution of identity? This paper explores one of the dominant groups of discourses which participants drew on when making sense of what health means. By unpacking and interrogating underlying assumptions about what it means to be healthy, and why it is important, this paper aims to contribute to existing public health and health promotion literature by providing a critical perspective on the impacts of health discourse, both personally for individuals and more broadly in terms of how dominant discourses support or are upheld by existing social structures. 
This research is intended to encourage critical reflection on how health is implicitly constructed within public health research and health promotion endeavours. --- Theoretical framing In this article, discourses of balance and health will be discussed and the identity implications they have for individual subjects will be explored. We will be adopting a view of the self as constantly constructed and reconstructed within the boundaries of social norms, rather than as pre-existing and fixed. Burr highlights the role of social interactions with others in the identity construction process, especially the importance of linguistic interactions or conversations. Foucault's notions of 'subjectification', 'technologies of discipline' and 'technologies of the self' will be outlined below, as these concepts were used to frame the authors' understanding of how discourses play a role in the construction of the self as well as how these processes of self-constitution are influenced by power. Foucault describes three modes by which individuals become subjects, one of which seems particularly relevant for this analysis . Subjectification refers to the active process of forming oneself into a certain kind of subject. This is done through constituting the subject, recognising the self as a subject, and relating to the self as such. This paper aims to draw on this understanding of the constitution of subjects to explore the various ways in which individuals take up dominant discourses to construct for themselves certain subject positions. Another aspect of Foucault's work which is relevant to this article is his notion of 'technologies'. At different points in Foucault's work, he refers to different 'technologies', such as technologies of power, technologies of government and technologies of the self. Usually these technologies allow for the production of things, meanings, behaviours, or practices. 
Two of these technologies are of particular relevance to this paper: technologies of power and technologies of the self. In Discipline and Punish , Foucault discusses disciplinary techniques, an example of technologies of power, and examines the way that discipline, as a form of self-regulation, is encouraged by institutions and permeates modern societies. He discusses how individuals internalise pressures from external institutions and then sustain the power relation by regulating their behaviour in order to conform to societal norms. These disciplinary pressures function in a way that makes it unnecessary for institutions to coerce individuals into conforming, as these systems of control are internalised and enacted on the self. The individual thus plays both the role of the oppressor and the oppressed. These disciplinary practices are experienced as natural and originating from within the self rather than as being externally imposed . This process is not linear, so individuals are not passive recipients of cultural, social and economic norms. Instead, they interact with these ideas and adopt certain practices while rejecting others . Technologies of the self also involve the production of certain practices but instead of being centred around avoiding punishment, these practices tend to be more aspirational, aimed at producing more ideal subjects. Foucault defines technologies of the self as those which, 'permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves to attain a certain state of happiness, purity, wisdom, perfection, or immortality' . For example, a person's decision to purchase organic foods could be viewed as an effort to work on and improve the self in an attempt to embody perfect health. 
Certain discourses construct ideal ways of being and set out the guidelines or rules to abide by to achieve these ideals. By drawing on these concepts, this paper is able to highlight connections between discourse, social norms and ideals, and the construction of subjectivities. In addition, it uses these theoretical framings to explore how individuals play an active role in constituting their own identities and in reproducing and resisting different discourses. --- Background literature This section discusses a few key empirical studies which have explored the construction of identity through health-related practices and discourses. These studies were selected based on their particular focus on health improvement practices, specifically in relation to balance, and the construction of identity. The authors selected a limited number of studies whose findings were particularly relevant to those in the current paper, allowing them to be described in more detail. Previous empirical research relating to healthiness and the construction of identity has often focused on the performative aspects of food consumption. Performativity involves the repeated actions and rituals which have come to represent certain kinds of selves. As Manton and Poole argue, food consumption enables individuals to demonstrate their membership of specific groups or evidence certain personality traits; similarly, as Barthes and Bailey point out, it also enables individuals to distance themselves from groups with which they do not wish to be identified. Cairns and Johnston explore the process of constituting ideal identities through healthy food consumption among middle-class women exposed to the contradictory pressures of neoliberal capitalist societies. Neoliberal subjects are supposed to exhibit characteristics such as self-control and discipline while at the same time expressing their freedom through consumption. 
Cairns and Johnston argue that the women in their study navigated this tension through the use of what they term the 'do diet'. This refers to the practice of consuming primarily healthy food options. In this way, the women were able to engage in what is presented as empowering consumption while at the same time demonstrating their self-control by carefully selecting foods believed to enhance one's health and avoiding those seen as health-harming. This practice of healthy food consumption, however, required constant and spontaneous 'calibration' in order to avoid extremes of either consumption or self-control. If individuals engaged in too much consumption or were too self-disciplined in their food choices, this could have negative identity implications, placing them in categories such as self-indulgent, ignorant or fat, or a health fanatic, rigid or obsessive. 'The do-diet celebrates healthy food choices, while emphasizing the need for continual bodily discipline, allowing the seemingly contradictory neoliberal logics of continual consumption and corporeal control to co-exist'. Similarly, Luna, in her research with American runners, found that participants constructed a 'narrative of balance' in which they drew on ideas about the hard work required to maintain their physical fitness while at the same time emphasizing the 'ease of hard work', suggesting that this disciplined pursuit of physical fitness came easily to them or was fun. This meant that they were able to avoid negative 'health fanatic' associations while still constructing their associated privilege as deserved. Less research has focused on identity and health in a broader sense; however, Robertson's article discussing men's lay understanding of health more generally also noted an emphasis on a balanced approach to health which avoided various 'excesses'. He explains how gender plays an important role in how men talk about their health. 
To construct themselves as masculine, the men in his study were obliged to assign a low degree of importance to their health. This is related, in some ways, to the gendered social pressure placed on men to be 'risk takers' in order to constitute themselves as masculine. Robertson refers to this as the 'don't care/should care' dichotomy, in which men are obliged to take care of their health in accordance with the expectation to be good citizens; however, their hegemonic masculine identity is put at risk if they appear to be too invested in their health. --- Methods This study aimed to explore the discourses young South African adults from urban areas used to construct 'health' and the implications these discourses had for identity and broader structures. Data were collected through in-depth, semi-structured interviews with 20 South African young adults who ranged in age from 18 to 35. Participants were recruited using purposive and snowball sampling. This research was interested in exploring the concept of health and the discourses used to shape how healthiness is understood and experienced, and thus individuals who had an interest in personal health improvement were approached. This involved contacting fitness centres and those involved in other areas of health improvement, such as medical students and a wellness blogger. From these participants, snowball sampling was used to provide access to additional participants who were involved in various kinds of health improvement practices. The sample consisted of participants who identified themselves as either 'white', 'coloured', 'indian' or 'black'. The majority of the participants identified as 'white' and would be considered to be from middle-class social groupings. All participants were from urban areas in South Africa. All had received at least a secondary school education and many had, or were currently pursuing, tertiary qualifications. 
In addition, all the participants sampled described themselves as 'healthy' at the time of the interview, which shaped the kinds of health-related topics they identified as most pressing. This sample featured individuals who had access to certain privileges in relation to the majority of the South African population , however, a portion of the sample were currently living in low-income communities and did not necessarily have access to the same health improvement resources as those in higher-income settings. While researching individuals from marginalised groups is essential to adequately understand social problems, studying those who appear to be benefitting in some ways from the status quo is also important, as it allows an exploration of how systems of inequality are reproduced and why discourses which function to perpetuate these inequalities may seem appealing. This group of participants provided insight into how those with some resources and those who were more financially secure constructed the 'healthy self'. While the participants share some similarities, the group is too diverse to be representative of any specific subsection of the population. In general, discourse analytic research is less interested in producing generalizable findings. As Fiske explains, 'no utterance is representative of other utterances, though of course it shares structural features with them; a discourse analyst studies utterances in order to understand how the potential of the linguistic system can be activated when it intersects at its moment of use with a social system ' . What this paper aims to provide, therefore, is an exploration of some of the processes through which individuals can make use of available discourses to construct identity and to reproduce/resist broader social ideals. 
All interviews were conducted in English by the first author between 2015 and 2017 and took place in locations of the participants' choosing, including coffee shops, university classrooms, fitness centres and the researcher's or participants' homes. A social constructionist theoretical approach was taken, and the transcribed interviews were analysed using a Foucauldian method of discourse analysis based on Willig's six-step guidelines. A Foucauldian discourse analytic approach was selected as this method focuses on exploring the links between discourse, social structures and subjectivity. By examining the discourses used to construct health, and the broader social systems which uphold or are reproduced by these discourses, we are better able to make sense of the consequences of the health messages we take for granted. It is important to examine the intersections between discourse, social structures and subjectivity in order to develop strategies to communicate about health in ways that are less likely to have negative, unintended consequences, and also to facilitate the exploration of innovative approaches to challenging health inequities through discursive choices. Willig's approach to discourse analysis involves identifying the discursive object to be discussed, which in the case of this study was 'health'. All mentions, or notable omissions, of the discursive object are then considered in relation to how they fit into wider discourses present in society. The ways in which they function to allow for certain subject positions, certain subjective experiences and certain opportunities for action are also explored. This article will focus on one of the groups of discourses identified through the analysis: 'health as balance'. This paper will discuss how these discourses were used by participants to construct health and the implications this had for the constitution of identity. 
The use of 'health as balance' discourses are considered in relation to how they serve participants, as well as how they can simultaneously be problematic. 'Health as balance' discourses are also situated within broader discourses in order to show how their use interacts with political and economic structures beyond the individual. The intention is to provide a nuanced critique of how health is constructed by these participants, to better understand why certain ways of framing health are appealing, and how they both resist and are rooted in broader systems of inequality. Ethical clearance for conducting the study was obtained from the Rhodes University Institutional Review Board. Informed consent was obtained from each participant and all identifying factors have been changed in the discussion to ensure confidentiality. --- Results An underlying idealization of the notion of balance in relation to health was prevalent throughout the data. Participants were not asked directly about balance unless they first raised the idea. 'Health as balance' discourses were most commonly used in response to questions including "What do you think health means?" and "How do you try to stay healthy?" 'Health as balance' discourses were used to construct health in two main ways. First, health was constructed as a careful moderation of behaviours and emotions to avoid either of the health extremes and maintain one's amiability . Second, health was constructed as achieving a balance between all facets of one's life. Emotional, mental, spiritual and physical aspects all needed to be given sufficient attention and work . In other words, healthiness was constructed as involving a holistic approach to self-improvement. These two 'health as balance' discourses will be discussed below. 
--- Health as moderation Through the 'Health as moderation' discourse, health was constructed as the effective management of one's behaviour and emotions to avoid what were perceived to be excesses or extremes and to maintain amiability. Participants drew on this discourse to construct two undesirable health extremes and then to construct a health ideal that was situated between these extremes. This 'middle ground' was viewed as socially and morally preferable. --- Too healthy The first constructed extreme was the 'health fanatic': someone overly concerned with health and associated practices. The quotes below illustrate some of the problematic aspects of this kind of subject. April: I got to a point where I was like panicked 'cause I suddenly had to eat food that I didn't know what was in it and that's when I realised I've become so healthy that I can't live anymore, like I can't live in the real world. Richard: …the trend is ja the dominant one is that guys who are in top shape or girls who are in top shape are jerks ja, that's the trend. George: I do have a friend… You look at her, she's very healthy, she does look like somebody who takes very good care of herself. But I noticed that when we were in the train. At one point, she actually took a picture of someone who was behind me… and she sent it via whatsapp to me like: "look at that person who's behind you" and behind me was a lady who was actually, she had, you know she was a bit overweight-chubby. So that kind of gave me an impression that she's very healthy and stuff but see now, it's starting to have a negative impact on other people, you know? And it's also starting to change the way she looks at other people…. So I think being too healthy, it can come with some problems as well. The above quotes address the social component to health practices. 
George points out that people could be put off by his 'very healthy' friend's critical attitude towards others and Richard expresses an observation that, in general, men and women who are very fit are often 'jerks'. The expectation to maintain a certain level of healthiness and to bear the signifiers of this health exists alongside the social expectation to be sociable, easygoing, to participate in social customs and, 'to live in the real world'. These two expectations sometimes contradict one another and the restrictions or disciplinary techniques required to maintain an acceptable level of health, may prohibit individuals from engaging fully in certain social events, for example those where the consumption of 'unhealthy' foods is expected. Maggie describes individuals like this who are 'like "oh no that's not healthy I can't eat that"' as being 'very unapproachable'. Through 'health as moderation' discourses, participants constructed this undesirable super healthy 'health fanatic' who was obsessive, rigid, judgemental, and antisocial. --- Too unhealthy The second undesirable health extreme was the subject who was too unhealthy. Participants were seldom highly critical of people with health problems but would often emphasise the importance of working towards improved health. In a more extreme case, people who were too unhealthy were described as 'inherently lazy'. Not being healthy enough was also described as leading to a range of unpleasant experiences including feeling physically unwell, contracting diseases, emotional pain and low self-esteem, or living in a way that felt inadequate. Maggie: with unhealthiness brings it brings sadness, it brings this kind of like guilt and loathing, self-loathing, which added on to the fact that for example, will add on like loss. Like it's not good for you. Like loss and self-loathing-not a good combination at all. Maggie describes some of the consequences if one's health and body 'aren't in balance'. 
She talks about physical sickness and the impact on one's self-concept. She describes feelings of 'self-loathing' and 'guilt'. These kinds of emotions are usually the result of being ashamed of one's behaviour, or of who one is. Maggie's association between these kinds of emotions and unhealthy behaviours or states functions to moralise health. Within this construction, poor health is not only a physical problem; it also implies that someone has done something wrong and shameful. This idea, that healthiness is essential for self-acceptance, means that experiencing a sense of self-acceptance becomes temporary at best and, at worst, impossible. We see here how social norms around acceptable health behaviours are internalised and then experienced as originating from within the self. External judgement is unnecessary as the subject disciplines herself and regulates her own behaviour by evaluating herself and experiencing emotional distress if her behaviours are not in accordance with accepted ideals. --- Just right In opposition to these two undesirable health extremes, participants constructed an ideal healthy subject situated between these. To access this ideally healthy subject position, participants were required to adopt technologies of the self to successfully resist the pressure to be 'perfectly' healthy and the temptation to become too lax with one's health enhancing practices. In the following interaction, Maggie illustrates how she strikes the balance between the healthy and unhealthy extremes. She expresses a disapproval of an obsessive attitude towards health and favours one which is more light-hearted and carefree. However, she also suggests that a failure to live up to health standards does need to be addressed and corrected. Maggie: I think, well, in my group of friends we eat healthy, and we'll have the occasional cake and cupcake … which is great and that's good 'cause you can't 'live'.
But then there are the other friends who, when they eat something bad then they, "Oh I shouldn't have eaten that, I feel bad, I feel guilty, it's bad for my body" you know like that kind of thing. I have one friend…who because she's eaten like that meal or that bad meal she won't eat anything after that for the whole day or so. Really bad to skip meals at all in my opinion. I have others who are like "ah I shouldn't have eaten that oh well" you know what I mean? That's like the one friend that I have. --- Interviewer: Like the damage is done? Maggie: Exactly. Well not so 'wow' but like it's already done so it's in the past -exactly. So I'm not like, going to starve myself for the next four hours because of this one cheat you know. I'll go to gym tomorrow and try work it off -it's kind of like a balance. In the beginning of this interaction Maggie describes eating something like a cupcake, which is generally not considered a health promoting food, as 'great' and 'good' because enjoying food like that is part of 'living'. This suggests that too intense a focus on healthiness and perfect health rule following is not ideal as it is restrictive and boring. She also resists the interviewer's conceptualisation of eating a cake or cupcake as 'damage', thereby carefully avoiding the condemnation of occasionally indulging in so-called unhealthy foods. She then also critiques the corrective behaviour of her friend who 'skips meals' if she has eaten something 'unhealthy', arguing that such behaviour is 'really bad'. So it seems that unhealthy behaviours, when engaged in for the sake of 'living' and enjoyment, are acceptable, but when they are engaged in as a response to feelings of guilt, they become unhealthy. Despite Maggie's criticism of obsessive corrective health behaviours in response to simply enjoying oneself, she does describe a behaviour like eating a cupcake as a 'cheat'.
This is a very common term used in diet culture and illustrates the negative connotation given to the consumption of 'unhealthy', especially high calorie, foods which could lead to weight gain. The word cheat also suggests a purposefulness in relation to this behaviour. In this context, eating a cupcake is not a momentary lapse in self-discipline or a failure to comply with health standards. Instead it is a choice to 'live'. We see here how discourses legitimating the value of self-control are upheld even within 'health as balance' discourses, despite participants' intentions to resist the restrictiveness of these discourses. She then goes on to say that she would not do anything as radical and unhealthy as 'starve myself' as a result of the consumption of the illicit food, but she would need to go to the gym the next day and 'work it off'. So although she describes the incident of eating the cupcake as 'in the past', it stays with her for at least a day until she has corrected for and undone her 'cheat'. Through the 'health as moderation' discourse, certain practices are constructed as enabling the production of an ideally healthy, balanced self. In this example these technologies of the self involve a limited and controlled indulgence in behaviours which would generally be categorised as 'unhealthy' while also displaying the ability to engage in health enhancing practices such as exercising and limiting 'negative' emotional reactions such as anxiety or guilt. Through the use of these discourses Maggie can construct for herself a subject position of managed contradictions: the free, happy, self-confident self who can enjoy her life without restriction; the self-controlled, disciplined self who makes rational choices and takes responsibility for her decisions; and the emotionally restrained self, who will not overreact to a small mistake but will instead behave calmly and reasonably.
However, as a result of these competing discourses, a situation arises where one is prohibited from reacting emotionally to the experience of living with tremendous pressure to become a self who embodies seemingly inconsistent traits. The following section explores an alternate way in which the notion of balance was used to structure understandings of health. --- Health as holistic This discourse constructs healthiness as carefully maintaining a balance between all the facets of one's life. Here it was emphasised that physical health should not be prioritised at the expense of mental health and that it is important to make an effort to maintain one's wellbeing more broadly, rather than focusing exclusively on one component of health. Adele: Health means your overall wellbeing… not just your physical… I think to be healthy is a holistic thing so you can be physically healthy but it doesn't mean that you're mentally or spiritually healthy… So it's a daily process I think, keeping yourself healthy. Participants' phrases such as: 'health is a balance in your approach to nutrition and exercise as well as emotional aspects in your life', 'Health means your overall wellbeing not just your physical but also possibly your mental and spiritual' and 'it's a daily process' illustrate how all aspects of one's life should be carefully, constantly and 'holistically' managed to ensure a balance is achieved. When viewed this way, discourses of balance become a different manifestation of discourses that promote self-control. These discourses insist that the individual is exclusively responsible for monitoring their body and their behaviour and effectively disciplining themselves to ensure that they are abiding by the requirements expected of 'healthy citizens'. This, and the idea that it is necessary to maintain perfect self-control in order to be healthy, were often exactly what participants were challenging when drawing on 'health as balance' discourses.
However, the powerful moral value of discourses valorising self-control made it difficult for participants to completely reject these notions. Within 'health as holistic' discourses, we see their more subtle reproduction. Some of the difficulties experienced as a result of this internalisation of the responsibility for health within 'health as holistic' discourses are discussed below. --- George: Well hectic schedules for starters like uh being a student and all. Like for instance, there's this…assignment that I have to do…and I also have to write a review… and I also have to study a reading for an assignment tomorrow, and study for a test, and I have to do this [place] thing as well. So that'll take up most of my day so I have to do most of what I'd do now in the night. So ja you can imagine that and trying to squeeze it all in between eating healthy and exercising and having a meaningful conversation with a classmate or a friend or a call. So ja it's trying to find a balance... ja, I think that's what makes it difficult, that's the difficult part, balancing things out. Many of the participants made use of 'health as balance' discourses to invoke a sense of flexibility and ease. In the above quote, however, we see the participant struggling with the dedication and commitment required to live a balanced, healthy lifestyle. In this way, trying to achieve a balance between all the necessary aspects of health, in line with 'health as holistic' discourses, may lead to stress and anxiety. However, stress is often considered harmful to health, as shown by Amelia's quote below, and is yet another thing that should be managed in order to achieve balance. Amelia: I think stress is very unhealthy, it upsets your balance a lot. If the pursuit of a balanced, healthy lifestyle is experienced as difficult and sometimes stressful, then an inescapable cycle of 'anxiety and control' is likely to occur.
The responsibility for ideal health is internalised and, when this becomes challenging or impossible, a sense of anxiety may result. Because stress is viewed as a threat to one's health, the responsibility for managing this is also internalised and so the cycle continues. Adele: I'm not very good at it actually [managing stress], but I would say that probably exercising is the best way for me to control it, and just maybe socialising with friends, getting out of certain environments that maybe are stressful. In the above quote we see Adele internalising the responsibility for managing her stress through techniques of discipline, for example, 'exercising'. However, Adele also says that, 'exercise is important but sometimes that creates more stress because you creating time to go to the gym that you don't have …'. This illustrates the cycle of 'anxiety and control' which Crawford predicts: the pressure placed on individuals to live a holistically healthy life and how unattainable this goal has become. In other words, through participants' attempts to resist and dismantle the unrealistic ideal of perfect healthiness, balance has been elevated to an ideal that appears to be equally unattainable. --- Discussion Balance was one of the most frequently referred to ideas across all of the interviews. Balance was seen both as an ideal to aspire towards and itself a signifier of health. John Stuart Mill states that, 'there seems to be something singularly captivating in the word balance, as if, because anything is called a balance, it must, for that reason, be necessarily good'. Phillips discusses the idealisation of balance and how the notion of balance is used in a range of contexts to reassuringly refer to someone or something, creating a sense of order. Being, having or using balance, according to Phillips, is almost always used to denote a positive state. One notable exception he cites is in art, where balance is not uncritically desired.
The moral security of balance may be related to the kinds of political and social views which are seen to be unbalanced, often the 'isms'. The notion of balance Phillips describes that is most relevant to the current research is the understanding of balance as the avoidance of excess. Also relevant is the often mutual exclusivity of balance and passion. Phillips notes how challenging it is to maintain a sense of balance in relation to the things we really care about. The participants in this research constantly reiterated the importance and value of health while insisting that the avenue through which one should achieve a healthy ideal was by ensuring one's approach was always balanced. The effect of this tension between passion and balance meant that participants needed to ensure that their emotional reactions were always carefully managed. There were certain emotions which were discursively constructed as undesirable and to be avoided, especially anxiety, stress, guilt or shame. This reluctance to express these kinds of 'negative' emotions may be related to the challenge of maintaining a balanced approach to health when one is experiencing strong emotions. This is illustrated by the growing prevalence of 'keep calm' culture, where individuals are expected to moderate their emotions and avoid panic, anger and stress for the sake of their health. This is associated with the popularity of mindfulness and meditation practices and applications aimed at assisting individuals with managing their stress. Gill and Orgad's conceptualisation of the 'confidence cult' is also relevant to this discussion. They argue that, in a post-feminist context, insecurity is seen to be 'problematic, indeed toxic' and a lack of self-confidence has come to be viewed as unattractive. Expressing emotions that indicate some sort of dissatisfaction with oneself may be interpreted as a kind of insecurity or lack of confidence in oneself.
According to Gill and Orgad, individuals are now expected to take responsibility for their self-esteem and, at the very least, act confident at all times if they want to be worthy of love and success. 'Health as balance' discourses function to reinforce this idealisation of emotional restraint and perpetuate confidence culture by constructing emotions like guilt and shame as unhealthy. In addition to the emotions mentioned above, participants also needed to avoid too much enthusiasm or zeal in relation to pursuing health improvement. This temperance of emotions may be related to dominant gender norms. While both men and women in the study appeared to be attempting to reduce any strong emotional expressions about their experiences of trying to stay healthy, the gendered norms which align with this tendency are different, and this specific kind of emotional temperance did seem slightly more prevalent among the men in the study. Men are prohibited from expressing too much enthusiasm or investment in the pursuit of health ideals, as this would contradict acceptable notions of masculinity linked to stoicism and emotional restraint, as well as the social pressure to be 'risk takers'. Women, on the other hand, may attempt to distance themselves from historical critiques of femininity as overly emotional, too sensitive and caring too much, to access socially preferred feminine identities. Within these discourses, this careful emotional regulation functioned both as a technique of discipline and a technology of the self. Through this discourse, individuals could abide by the socially necessitated pursuit of health without compromising their personality ideals, while also ensuring that they complied with socially rewarded gender norms. We also see the interaction and mutual reinforcement of health and gender discourses: the 'health as moderation' discourse functions here to reproduce dominant gender ideals.
While engaging in this kind of active self-management of one's emotional reactions appeared, on the surface, to serve the participants by ensuring that they could enact socially approved subject positions, politically this may be problematic. If strong feelings are repressed and silenced through the constant self-management of emotions, resistance to oppressive discourses is made much more difficult and the status quo is allowed to persist. The discourses discussed in the results section highlighted the importance of carefully moderating one's behaviours in order to avoid either of the undesirable health extremes. Cairns and Johnston's concept of 'calibration' may be useful when trying to understand the way 'health as balance' discourses are experienced. They make use of this term when discussing middle-class women's relationships to food and define it as, 'a practice wherein women actively manage their relationship to the extremes of self-control and consumer indulgence in an effort to perform acceptable middle-class femininities'. Individuals made sure to carefully 'position themselves as conscientious, but not fanatical' to avoid being pathologised as 'health-obsessed', but also to avoid the opposite extreme of overindulgent or abject. The desire to distance oneself from the 'health fanatic' categorisation is especially relevant to the current study. According to Cairns and Johnston, the 'health-fanatic' is associated with 'overly perfect' performances of healthy femininity, the feminine subject who 'is too informed, and too controlling in her eating habits'. The careful management of identity required to avoid the two extremes illustrates the sharp borders of acceptable performances of health and demonstrates the connection between discourse and subjectivity.
Although Cairns and Johnston's paper focuses on middle-class femininities in a Western context, in this study 'health as balance' discourses were used in a similar way by both men and women from varying income backgrounds and in relation to both food and exercise. This notion of 'calibration' could still be applicable here and may illustrate the increasing reach of discourses which require a constant policing and moderating of behaviours and subjectivities. Although men and women, as well as those from different income groups, seemed to be calibrating their relationship to health, this kind of calibrating may perform different social functions for those located in different economic contexts or exposed to divergent gendered norms. 'Health as balance' discourses allowed for the alternate problematising and legitimating of discourses which require individuals to maintain perfect self-control. They enabled the participants to avoid some of the negative aspects of discourses which promote perfect self-control while still allowing access to some of the moralised benefits thereof. While this appears to benefit participants by facilitating the constitution of favourable subject positions, it also individualises the responsibility for health and upholds broader neoliberal discourses that obscure the social and structural determinants impacting on people's health. By concentrating the focus of health promotion on these exclusively individual activities, oppressive social structures go largely unquestioned and uncriticised. In addition, the construction of these desirable and undesirable ways of being healthy resulted in the moralisation of health, and being 'insufficiently healthy', or failing to exhibit an 'appropriate' emotional orientation towards health, was constructed as morally problematic.
This perspective legitimates a culture of victim-blaming where individuals who are not sufficiently healthy, as well as those who are emotionally distressed as a result of the pressure to maintain their health, are constructed as personally responsible for their difficulties. Individualising and moralising health serve political functions in that they legitimate a reduction in welfare provisions and obscure the need to address the social determinants of health. If individuals are personally responsible for their health, there is little incentive for costly welfare programmes or attempts to improve access to resources or address inequities. If health is moralised, then adherence to idealised health behaviours does not need to be externally policed as individuals internalise the responsibility to discipline themselves . Lastly, the 'health as holistic' discourse serves to construct a health ideal which requires the individual to successfully manage all aspects of one's life to ensure that they are all serving the goal of health. At previous points in history, holism encompassed a broader sense of connectedness and unity between individuals, communities and the environment . The more prevalent modern use of the word may have come about in opposition to more isolated, medicalized, treatment-focused approaches to health, which were sometimes deemed ineffective or even harmful. However, the neoliberal, individualized context in which the idea has recently become popular has shifted its focus from a tendency towards community and support, to a tool to encourage the permeation of internalized disciplinary techniques and self-adjustments through all aspects of an individual's life . --- Conclusion Discussions around health and balance were often closely linked to the idea of control with participants often using 'health as balance' discourses to alternately resist and affirm the demand for perfect self-control. 
By using 'health as balance' discourses, participants were able to construct a variety of idealised, sometimes contradictory, subject positions and to maintain a sense of self-esteem and social acceptability. These discourses are facilitated by ideologies of healthism, confidence culture and broader neoliberal discourses which include the idealisation of individual responsibility. 'Health as balance' discourses were often used to challenge the pressure to conform to unattainable standards of health behaviour, and to avoid negative emotions such as anxiety, stress, guilt and shame. However, they also functioned in ways that individualised the responsibility for health, moralised and depoliticised health, and increased the requirements for attaining idealised 'healthy' identities. As a result, they worked to produce stress, limit emotional expression and silence dissent. This study provides insight into how seemingly innocuous discourses can be used in ways which reproduce restrictive or oppressive ideologies. An awareness of, and critical reflection on, these kinds of adverse consequences, together with a sense of caution surrounding the use of behaviour change interventions which reinforce the individualisation and moralisation of health, is an important implication of this study for public health research and practice. This is particularly important in a South African context which is highly unequal and where health inequities are attributable to a range of social and structural determinants. Individualising and moralising health in this context, and obscuring the importance of these other significant factors, undermines efforts to achieve health equity and perpetuates injustice by fostering a culture of victim-blaming and individualism.
In addition, acknowledging the importance of the meanings attached to health, and the consequences these have for identity, experience and action, may be helpful for those attempting to promote health or provide the public with educational health resources, or in the development of health-related policies. This study was limited in that the sample was small, mostly middle-class and healthy. Focusing on participants who described themselves as healthy at the time of the interview likely had important implications for the ways in which they constructed health and illness. Future research exploring how these kinds of health discourses are taken up by those from rural areas or those who are living with a chronic illness, mental illness or disability would provide a more in-depth understanding of how these discourses function and how those in more vulnerable positions are affected by their prominence. --- Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. --- Authors and Affiliations Michelle De Jong
Social science research on health in South Africa tends to focus on illness and how to address health problems. Qualitative empirical research focussing on lay understandings and experiences of healthiness, or health discourses, in South Africa is fairly limited. This article addresses this gap by critically exploring how young South African adults used discourses of balance to make sense of what it means to be a healthy person and highlights the implications of these discourses for identity. Foucault's concepts of 'technologies of the self' and 'techniques of discipline' are discussed as a theoretical grounding for this paper. Data were collected from 20 in-depth semi-structured interviews and analysed using Foucauldian discourse analysis. This paper will specifically explore a key discourse identified through the analysis, 'health as balance', and two interrelated sub-discourses which fall within it. Through this discourse, healthiness was constructed as requiring a broad focus on improving all aspects of one's life ('health as holistic') and the avoidance of any behaviours or emotions which could be classified as extreme ('health as moderation'). Constant, careful management of the self, or 'calibration', functions both to perpetuate a cycle of 'anxiety and control' and to obscure ways in which health discourses can be harmful or problematic.
Introduction While much research about the ethics of information and communications technologies (ICTs) has not paid sufficient attention to the context into which ICT is introduced, treating ICT reception in all parts of the globe as more or less similar to how it would be received in a "Western" country, over the last decades a stream of research, intercultural information ethics (IIE), has emerged that does take cultural and national differences into account, arguing that "culture matters" [10]. What scholars within IIE show is that one cannot neglect culture (the values, beliefs, and practices of people) when discussing how ICT is received and translated into different cultural contexts, nor when creating a normative ethical basis for the acceptance or rejection of particular ICTs. Modern information technology brings people from all over the globe into contact in broader, deeper ways than many other elements of globalisation. Discussing culture in the context of information ethics is therefore crucial. However, we also believe that the full potential of IIE is not yet fulfilled. A recurring problem identified within the field is ethnocentrism, which despite years of scholarship remains "intransigent, if not predominant". More importantly, we will argue that the field relies on a too rigid understanding of culture, a concept that, despite its central status within IIE scholarship, has been argued to be neither well explained nor discussed in depth [39]. As will be discussed in the subsequent part, the dominant theories of understanding culture within IIE seem to be those of Edward T. Hall and Geert Hofstede, which argue that people who belong to a certain country tend to share certain characteristics, as if they had a collective programming of the mind distinguishing them from other groups of people [20].
Such frameworks could be seen as essentialist, since they hold that people belonging to a culture share some sort of essence (a way of being, thinking, and acting) that distinguishes them from others. In contrast, and much more marginalised within IIE, there are non-essentialist frameworks that do not argue that culture is related to essences, but rather discuss how the concept of culture is used, how it is mobilised, how and why one identifies as a part of a culture [8,28]. Such non-essentialist understandings are a good counterweight to the overarching understanding of culture, which overemphasises the commonalities and downplays the complexities of cultures [39]. Inspired by the non-essentialist claim that culture is something we do rather than something we are, and sparked by a fascination with the concept of suture (literally, the thread which stitches, or the act of stitching, a wound), we move beyond IIE and related fields to explore the notion of suture and if and how it can help us understand how we "do" culture, how we use the concept, how we mobilise culture in our everyday encounters and research. It is not farfetched to revisit the concept of suture since it has been used to make sense of the matter of identification. However, according to Žižek, although it was used as a defined concept within film studies, it has now lost its specific mooring in film studies and is used within the "deconstructionist jargon, functioning as a vague notion rather than a strict concept" ([48], p. 31).
With an interest in returning to its theorization in film studies, to see whether the concept can be used more specifically than just as 'closure', we survey three distinct renderings of suture: something which ties the viewer of a film to a character; something which ultimately fails and exposes the evil behind its conditions of production; and something which ties the viewer in fleeting encounters with different characters and their relationships in the film, recognising that the viewer will never fully identify with one character. Through these three interpretations of suture, we re-interpret how scholars in IIE and beyond have used cultural concepts, to better understand what we do when we meet strangers in cross-cultural encounters, such as cross-cultural research. If culture is seen as suture, then we can argue that (1) the use of culture sutures someone to a certain culture; (2) the use of culture is not successful, but rather exposes that the conception of culture one is tying someone to is stained by the conditions of its production; or (3) the use of culture ties someone to a culture while acknowledging that this suture is temporary, coloured by its conditions of production and by the relationships between this someone and others, and open to re-interpretation. We side with the third way of suturing with culture, arguing that its implication for research and practice is that we can perhaps not avoid suturing, but that each suture should be seen as an invitation to expose the conditions of its production and to de-suture-to eschew both a sutured, complete, fixed understanding of culture and a mere unveiling of the flawed and biased nature of cultural concepts. 
The form of this paper follows the same explorative and tentative nature as the third way of suturing, not fully suturing the concept of culture to the concept of suture, but rather playfully exploring what the consequences would be if culture is regarded as suture. The text was produced within a project comparing the ethics and sustainability of ICT in Japan and Sweden, and the paper is thus shaped by these, and probably other, conditions of production. Before we turn to the concept of suture, we will give a brief description of the field of IIE, to which we tentatively suture this study. --- Intercultural Information Ethics Although one cannot deny that studies on the ethics of ICTs are often conducted in a Western context or based on Western concepts, for around two decades a stream of research called IIE has been emerging. With associations and routes spanning UNESCO's First International Congress on Ethical, Legal and Societal Aspects of Digital Information, the Cultural Attitudes towards Technology and Communication conference, the International Center for Information Ethics, the ETHICOMP conference, and the CEPE Conference-Computer Ethics: Philosophical Enquiry, a discussion about the role of culture emerged at the end of the 1990s, related to the increasing penetration and spread of ICTs [4]. In Bielby's overview of the field, one of the early proponents, Rafael Capurro, is quoted at length. Capurro holds that there is a need for an IIE to critically discuss "the limits and richness of human morality and moral thinking in different societies, epochs and philosophic traditions as well as on their impact on today's social appropriation of information technology". In such a debate, it is important to keep an open mind and to maintain a patient and respectful dialogue between cultures. 
Finally, Capurro maintains that: "This is […] an incentive to enlighten our minds and lives with regard to the open space of thought and the groundless world we share, which allow us to remain in an endless process of intertwining society, nature and technology, looking for flexible norms that regulate rather than block such a process". Throughout the history of the field of IIE, we have seen a number of empirical examples of how local cultures resist, or translate and modify, ICTs. Effectively debunking the idea of the Internet creating a global village, Ess [9], drawing on Hjarvard [19], points out that rather than a global village, what will be created is a global metropolis with its different areas, connections, and movements. Within such a metropolis, there will be conflict and collisions. Ess [9] suggests: "more fruitful collusions can be documented in which persons and communities find ways to resist and reshape Western-based [computer mediated communications], so as to create new, often hybrid systems and surrounding social contexts of use that work to preserve and foster the cultural values and communicative preferences of a given culture". Empirical examples illuminating the culture- and context-dependency of attitudes towards ICTs, as well as of the use of ICT, are numerous. Apart from national studies of ICTs from all over the world, there is also a range of cross-national studies within the field, for example Murata's studies on attitudes towards Edward Snowden, which compared young people's attitudes towards privacy and state surveillance in eight countries [32,34,35], and Murayama et al. [37], which compared peer-to-peer software usage in Japan and Sweden. There is also a range of comparative studies based on the technology acceptance model or variations of it. For example, Murata et al. 
[33] examined how ethical awareness, innovativeness perceptions, and perceived risk influence the decision to become a cyborg or to use insideables, analysing whether cultures as different as those of Japan and Spain show different results. Furthermore, Fors and Lennerfors [14] and Majima et al. [29] studied the way sustainable ICT has been interpreted in Sweden and Japan respectively; a comparison of the two papers shows markedly different interpretations. For example, sustainable ICT was in Japan primarily constructed as an energy-saving technology, while in Sweden there were also discussions about ethical and environmentally sustainable manufacturing and disposal of ICT. This was not explained as a cultural difference, but rather as depending on the configuration of actors and networks in the different national contexts. When it comes to privacy, a right that has long been considered under potential threat from the development and use of ICT, various discussions have reflected the differences among cultures. Apart from these more descriptive studies of how various technologies are received in different cultural contexts, there are also discussions about normative foundations for an IIE. Given that global communication technologies, global currencies, and global networks bring people living in different geographical parts of the world into more direct contact with each other, how should we reason about the normative foundations of the ethics of technology? 
Although the pure positions of universalism-that there is a given set of universal values-and relativism-that what is right in one part of the world is wrong in another-are theoretically interesting, Ess [11], Hongladarom [23], and others are looking for a more pluralistic base, where one can find connections between different culturally situated normative structures that can together, complementing each other, create a normative platform for a global ethics. In such studies, it is pointed out that rather than just looking for differences, we can see similarities and complementarities, for example between virtue ethics and the ethics of junzi [9,24,25], or between phenomenological, feminist, and Watsuji Tetsuro's ethics [30]. Rather than obliterating cultural, historical, and traditional expression, we should seek an ethics that preserves these cultures while at the same time seeking ethical agreement in this global society-the pros hen [9,11]. In the above description of the field, we see a tendency towards separating the world into cultures with their respective cultural values that should be respected, and towards holding that, in constructing a normative base, we should be respectful of the ethical systems belonging to each culture. Furthermore, as quoted in the introduction, there seems to be a tendency within the field towards ethnocentrism, meaning that not only are cultures different, but the culture to which one belongs is superior [10]. While it is easy to criticise ethnocentrism, the division of the world into different cultures with their own common values and practices, separating them from others, is also problematic. Elin Palm [39] argues that the field of IIE suffers from a lack of reflection on the very core concept it is building on, namely the concept of culture. She points out that to undertake meaningful comparisons of cultures, a better understanding of the key concept of 'culture', as well as of 'cultural', is needed. 
A main argument in Palm's paper is that the concept of culture is treated too simplistically and too homogeneously in the field of IIE, assuming that there are large groups of people sharing practices, values, and beliefs. By doing so, the field can sometimes risk falling into stereotypical accounts of, for example, Western and Eastern perspectives, while reality is more complex. A more nuanced understanding of culture is needed. Still, as this overview of the field shows, the dominant notions of culture within IIE are essentialist. In contrast, we propose a conception of culture which is non-essentialist, arguing that what is most important is not what "culture" is but how people use "culture". So how is culture used in IIE scholarship and beyond? We believe that the notion of suture can help us understand how culture is used and why. --- Suture In everyday language, suture means "stitch", and is commonly used within medicine for the thread that holds bodily tissues together after surgery or injury, or as a verb for the act of stitching a wound. The concept was mentioned by Lacan, but not systematically used. For example, Bruce Fink describes how Lacan used the concept of suture when discussing the relationship between psychoanalysis and science [13]. Here, science is seen to "suture" the psychoanalytic subject, in other words, to exclude it from its field, neglecting it. Lacan's disciple Jacques-Alain Miller [31] developed the concept and argued that it was a fundamental part of Lacanian psychoanalysis. As explained by Žižek, suture for Miller designates the relationship between the signifying structure and the subject of the signifier, which one could interpret as the relationship between a symbolic, social system and a person/subject in the system. Alain Badiou [2], who is influenced by Lacan, used the concept to explain how philosophy has become sutured to its conditions of truth, namely politics, science, love, and art. 
The main point is that philosophy, according to Badiou, has become too stitched to these different domains, and needs to de-suture itself and maintain independence while still being related to them. Suture, for Badiou, is therefore a term which indicates that something is tied down or connected to something else in a way that does not unleash its full potential. Suture is "that reciprocal parasitism of philosophy and its conditions which periodically announces the weakening or abdication of thinking". Although there is an intricate and dense argumentation by Žižek [49] about the positions of Miller and Badiou on the concept of suture, from now on we mainly follow the discussion of the concept in film studies, where it has been used as a strict concept, going beyond the more general use of the concept as synonymous with 'closure'. This has provided three conceptions of suture, which will be explained in the following. --- Suture as Tying the Viewer to a Character In a pedagogic overview of the use of suture in film studies, Chaudhuri describes that suture is used in film studies to "describe the methods by which viewers are absorbed into the narrative and encouraged to identify with characters". The technique of shot/reverse-shot, explained below, is often seen as central to suture. Imagine that shot 1 is a view of the sea. The shot follows the 180-degree rule, which means that the camera can only capture the half circle in front of it. This first leads to a positive feeling of unbounded plenitude, but then arouses a sensation in the viewer: from what position or what gaze is the shot taken, "whose gaze controls what it sees"? The camera bears the traits of a powerful symbolic father, with absolute knowledge, self-sufficiency, and discursive power, which the viewer lacks. 
Therefore, shot 2 sutures this wound by locating a spectator in the other half of the field of vision, implying that the former shot was taken from the point of view of a figure in the narrative. Through this operation, Chaudhuri [6] describes, viewers of the film are 'stitched' or sutured into the subject-positions that the film assigns to them, which also makes them identify with the gaze of the person seeing. According to Chaudhuri, "The healing of narrative can only happen after the wound has been inflicted, and the more wounded we are, the more desperate we become for meaning and narrative". In this notion of suture, there is a sense of misrecognition, since the shot/reverse-shot technique binds the viewer into a subject position. In the same way as the viewer misrecognises and is sutured into the narrative of the film, Silverman argues, this is at work more generally. For Lacan, misrecognition is the basis of the ego. This is formed during the Mirror Stage, where a child identifies with an image outside itself, usually its reflection in the mirror. Silverman argues that the child has the sensation of 'Yes, it really is me!'. Chaudhuri explains: To describe the infant's jubilant identification with its mirror image, Lacan uses the term 'captation', evoking the infant's 'capture' and 'captivation' by the imaginary. Captation also occurs when the subject identifies with other external images-including cultural representations. There, too, it recognizes itself. The Mirror Stage thus forms part of the series of misrecognitions through which the ego is constituted. It signals that the ego, which we think of as the core of identity or bearer of reality, is actually illusory. We see this misrecognition, in which the viewer is tied to a certain subject position, or a person is tied to a truth-claim about culture, as essential to this first understanding of suture. 
--- Exposing the Gaze of the Objective Shot Žižek revisited the concept of suture in the 1990s and 2000s to argue for a renewed understanding of it. He claims that: suture is the exact opposite of the illusory, self-enclosed totality that successfully erases the decentred traces of its production process: suture means that, precisely, such self-enclosure is a priori impossible, that the excluded externality always leaves its traces within-or, to put it in standard Freudian terms, that there is no repression without the return of the repressed. A potential way to understand this is to look at a scene from Hitchcock's The Birds, which Žižek has discussed in a few YouTube videos. The birds have just attacked a gas station in Bodega Bay, California, and Melanie, the lead character, looks on as it catches fire. There is a series of standard exchanges between shots showing Melanie looking and shots of the fire spreading from a car until the whole gas station is on fire-a kind of standard procedure of suture. The next shot is then taken from above, a kind of God's view, which gives a calm, objective, clear sense of the situation. But the situation changes. There is an ominous sound from the birds, and then we see a bird entering the camera shot, and then one more, and it starts to seem as if the objective shot was taken from the point of view of an evil agent that, behind the seemingly objective shot, is pulling the strings. As Žižek says: "the shot which was taken as a neutral shot all of a sudden changes into an evil gaze, the gaze of the very birds attacking". This is what could be meant by sutures not being self-enclosed totalities. Rather than discussing the way in which the subject is sutured to a certain subject position, this second understanding concerns how the objective frame, the allegedly neutral shot, is itself sutured to a particular gaze, to certain conditions of production. 
--- Suture: Fleeting Identifications with Different Characters in the Film Not fully content with either of these interpretations of suture, Richard Rushton, in a recent paper, proposes a third understanding of suture, which offers a way out of both the deception of suture and the fact that suture never fully works, always leaving remainders [40]. He draws on political theory and Freud's ideas of joke work, and reaches an understanding of suture in cinema where the positions of identification-the sutures-shift and change throughout a film. The reason, he argues, is that rather than identifying with a specific character, the viewer identifies with a set of relationships between characters-in other words, what "characters do in relation to other characters, which decisions characters make or actions they perform in relation to other characters and to the story world more generally". Any suture will thus always be temporary and open to the possibility of change. He explains: "suture will fix the spectator at certain points in the unfolding of a film's story-suture is, after all, a matter of freezing or arresting the subject-but such fixing will never be definitive or totalizing. It will instead always be temporary and will adapt from moment to moment". From this point of view, suture takes on a more temporary and flexible character, where subjects are sutured but these sutures change over time. Inspired by these three formulations of suture, we will use them to interpret what we do when we use the concept of "culture" in IIE and beyond. In Sect. 4 we connect the first understanding of suture, discussed by Chaudhuri, to culture. In Sect. 5 we connect Žižek's understanding of suture to culture. In Sect. 6, we link an understanding of suture inspired by Rushton to culture. 
--- Suture as Cultural Misidentification In this section, we discuss the various truth-claims that are made with respect to cultural identity in research and practice. In our everyday experience, we often use the concept of culture to compare and contrast different people, thus making claims about differences between the peoples in question. Such statements are ubiquitous, but perhaps taken to their extreme in the grand-scale cultural frameworks used in cross-cultural research, where different cultures are discussed in terms of a set of parameters. Such truth-claims seem scientific, measurable, and essentialist. These frameworks are popular, not least in the field of IIE. In the context of Internet communication cultures, a special issue [12] discusses the merits and limitations of two such theoretical frameworks of culture, namely that of Hall and that of Hofstede. In his retrospective review of the CATaC conferences, Ess [10] explains how the theoretical resources of Hall and Hofstede have been a central point of reference. This is not to say that most agree with these frameworks, but they have become a common ground for discussing culture [10]. Furthermore, Palm [39] discusses the use of culture in IIE and Hofstede's framework alongside other frameworks which tend to essentialise cultural identity. Ess [10] and Palm [39] are good indicators that essentialism in the form of these frameworks is predominant within IIE. Before connecting these notions of culture to suture, we will briefly present the frameworks. Hall discusses whether cultures are high-context or low-context, whether they are more or less monochronic or polychronic in their concept of time, the speed at which information spreads in different cultures, and action chains-particularly how willing a member of a culture is to complete or break them [17]. 
Seen from this perspective, Hall identifies Sweden as a low-context culture, while Japan is seen as a high-context culture [16,17]. According to Hall, monochronic cultures include Germany, Canada, Switzerland, the United States, and Scandinavia, while Japan has a mixed relationship to time. In Japan, action chains can be broken, while Americans "are brought up with a strong drive to complete action chains". Hofstede et al. [20] developed a theoretical framework that consists of several dimensions: power distance, individualism/collectivism, uncertainty avoidance, masculinity/femininity, long-term orientation, and indulgence vs. restraint. Countries score higher or lower in these different categories. For example, there is more expectation and acceptance in Japan than in Sweden that power is distributed unequally. Sweden scores as a more individualistic country than Japan. Given Japan's score on masculinity and femininity, gender roles seem more distinct there than in Sweden. Although these frameworks seem to have some explanatory power, the special issue by Ess and Sudweeks [12], as well as Palm [39], pointed to several shortcomings. First, they presuppose the existence of a rather homogeneous national culture, while a nation can have a multitude of cultural expressions belonging to different social practices. As Bielby [4] argues, there can be great religious, cultural, and philosophical diversity within the same region. Furthermore, even within the same tradition, as Wong states with regard to Confucianism, there is a variety of sub-traditions such as Neo-Confucianism and New Confucianism, "and, the problem of complexity multiplies once we consider Chinese culture as a whole, which is constituted by Confucian, Daoist and Zen, and each has their own moral systems". In terms of the individualism vs. 
collectivism dimension, Teranishi [44] mentions a tradition of Japanese individualism which is somewhat different from the Western one, and Japan's collectivist culture has been questioned by many [15,43,47]. Second, these frameworks neglect the hybridisation that comes from technological mediation-namely, how communication over the Internet reshapes and changes culture. Although ICTs do not create a global village, there is much more possibility for hybridisation, new identities, and new cultural expressions than before. Furthermore, Hofstede's framework has often been critiqued methodologically. Looking beyond IIE, a similar discussion is taking place within the cross-cultural management literature, which features, apart from Hofstede's model, the essentialist models of culture by Trompenaars and Hampden-Turner and the GLOBE research. The critique raised against these frameworks within IIE also exists within cross-cultural management [38]. Although there seems to be persistent discontent with these frameworks and quite radical critique against them, they are still used in cross-cultural research. Holliday [21] argues that although the problems of essentialism are accepted and critiqued, 'the temptation to be essentialist is quite deeply rooted in a long-standing desire to "fix" the nature of culture and cultural difference'. In line with the first understanding of suture, the nothing is replaced by something. Similarly to the psychoanalytic processes initiated in the mirror stage, one is quick to identify oneself or others with a complete mirror image: "Yes, this description really fits me/him/her!" If one turns to the theory of suture, why do we suture in this way? An explanation would be that one is quickly looking to suture the uncertainty and anxiety caused by facing something or someone unknown-a stranger coming from another "culture". 
One is then, in this interpretation of suture, content with the way in which the cultural suture covers over the lack, although, on reflection, one perhaps knows that these frameworks only partially capture what culture is. Perhaps one knows that the theory is not correct, but one identifies with the fiction to get closure. Such suture can be enacted in any practice, by individuals having to deal with those who are different, or by researchers trying to make sense of culture. We propose that this is what underlies Holliday's mention of a long-standing desire to "fix" the nature of culture and cultural difference. This fixing is suture in the first sense of the term. --- The Evil in the Suture In the second understanding of culture as suture, following Žižek, we draw on the idea that the seemingly neutral and objective point of view is revealed as the point of view of some external, ominous agent. The procedure of this second kind of suture is thus not that individuals, such as researchers, make truth-claims about culture, but rather that the seemingly neutral and objective view provided by such truth-claims-particularly the most seemingly objective and neutral, such as Hofstede's-is suddenly reframed and becomes sutured to something evil. An example of this is Galit Ailon's critical reading of Hofstede's book Culture's Consequences. She stresses that many cross-cultural analyses such as Hofstede's are skewed and biased in favour of the West, and in many ways represent "the non-West in ways facilitative of Western cultural hegemony, political dominance, and sense of positional superiority". 
Although her critique is mainly directed at the international management literature, where she quotes Westwood stating that cross-cultural methods are "often reductionist, or else severely decontextualising, incorporating simplifying representational strategies that do violence to the inherent complexity of the social systems they pretend to represent", it is important to heed the warning in the context of IIE as well. Apart from devaluing and misrepresenting "the Rest", Hofstede's text, as well as others, according to Ailon, also overvalues the West: "Idealizing themselves, they construct self-images in a desired mold that, once mirrored, cracks under its own inconsistencies". Rather than suture as cultural misidentification, this understanding of suture ties truth-claims related to culture to their conditions of production. What is identified is how the theories, although seemingly objective, are not only potentially misrepresentative but also informed by a particular gaze that is not rendered explicit in the theories, but that can and should be exposed through critical readings. Perhaps, from the point of view of Žižek's understanding of suture, one could argue that it is the very objectivity of the cultural frameworks that creates an eerie feeling of something not being right, that there is some evil lurking in the background. --- Multiple, Repeated Sutures For Rushton [40], there needs to be a third interpretation of suture. The first interpretation ties the subject to a fixed position, while the second exposes the gaze that guides the suture. However, what Rushton looks for is a way out. A possible way out, which we develop inspired by Rushton's paper, is to see sutures as always temporary, always containing their conditions of production, and always serving as an invitation to de-suturing. 
This eschews both the first interpretation of suture, which holds that we as subjects are duped, and the second interpretation, in which we as subjects should expose and unveil the evil in its production. Rather, in this third form, we know that culture is a suture, and are not too surprised by it, but approach these attempts to cover over the lack as temporary and subject to re-interpretation. This could lead to a more playful and creative discussion about how we use culture, one in which we always expose the conditions of its production, suture again, and recognise that we constantly fail in describing culture or cultural differences. This is, as mentioned in the introduction, based on a non-essentialist idea of culture. Rather than discovering things that pertain to our cultural identities, cultural identities are "the names we give to the different ways we are positioned by, and position ourselves within, the narratives of the past". The positions, the "cuts" of identity, are always strategic and arbitrary, in a similar way to how the symbolic in Lacanian terms always fails to capture the whole of the Real. When doing cross-cultural research, we should thus not simply ask what is similar or different but "which elements of difference do people articulate as important and when? Which facts do they highlight? What values do they attribute to difference? What is the discursive impulse that underlies their preoccupation with it?". The idea of multiple sutures proposes that we constantly suture, and perhaps we cannot help but suture. However, we need to understand that all sutures are only temporary, and that they should be seen as an invitation to think again [41]. We need to understand that sutures are formed in relationships, as a result of the particular relationships and situations that we find ourselves in at the moment. In cross-cultural encounters, for example in research, there is a need for openness. 
Rather than contributing only to a sedimentation of knowledge, thinking about cultural difference, and thinking in general, is a way to attend to the moments when an idea, position, or identity begins to repeat itself and harden into dogma. With such a view of culture, another set of questions emerges: How is culture mobilised in different encounters and relationships on the micro-level? For example, why does culture or national identity become mobilised as a resource in a debate about ICT? How do we suture, and how can we adequately de-suture? How can our thinking about culture not harden into dogma, and instead remain flexible and open? When it comes to research about the cross-cultural, one needs to pay careful attention to the micro-processes that unfold. For example, it is likely that in a team of researchers from two countries, the researchers from one country are placed in a relationship where they are constructed as the experts of their particular country, and become representatives of it. This can perhaps lead to a situation where they are expected to have quite clear and distinct understandings of their own country and to contrast them with the equally clear and distinct understandings of the researchers from the other country. This could lead to sutures as misidentification. Furthermore, in such research, it can become an aim in itself to find differences, because differences are perhaps what seem particularly interesting-otherwise, what would be the point of the cross-cultural study? This relationship between researchers could then lead them to fervently try to map out how different the countries are, which could perhaps even lead to an exoticisation of native concepts. Instead, we could see each suture as temporary, always exposing the conditions of its production, suturing again, and recognising that we constantly fail in describing culture or cultural differences. 
--- Conclusions In this paper, we have advanced an understanding of how culture is, or could be, used within IIE, inspired by three different readings of the concept of suture. Based on the first understanding of suture-namely suture as cultural misidentification-we have tried to explain the allure of essentialism, and why it is such an easy recourse even though one might know that it is not an accurate understanding of culture. The second understanding, which makes us expose the conditions of production of cultural truth-claims, is also central when we discuss a non-essentialist view of culture. In other words, each suture with culture is always made from a certain, often implicit, gaze. Although the mentioned understandings of suture can to some extent diagnose part of the literature on how culture is used in everyday encounters and within research, we believe that the third understanding offers a more viable way forward. In the third understanding, culture can be seen as a series of sutures that are always incomplete, always formed by their conditions of production, and always invitations to further exploration. Such a perspective provides an alternative to the ethnocentrism and essentialism in IIE. The proposed idea of looking at culture as multiple sutures invites researchers on a quest to better understand the role of culture in IIE. It does not offer a quick fix-a suture-but rather indicates a way forward that is hopefully more productive, but also much more difficult and demanding to tread, for the reasons discussed in the paper. Incidentally, another contribution is that this paper has hopefully shown that through a broad reading across various fields we can gain knowledge in compartmentalised debates in contemporary academia. On a meta-level, this paper is thus also a plea to de-suture understandings of culture, or other concepts, from particular subfields. 
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Intercultural information ethics (IIE), a field which draws on the limits and richness of human morality and moral thinking in different societies, epochs and philosophic traditions as well as on their impact on today's social appropriation of information and communication technology, has been argued to lack an adequate theoretical understanding of culture. In this paper, we take a non-essentialist view of culture as a point of departure and discuss not what culture is, but what we (both in our everyday lives, and as researchers) do when we use the concept of culture. To do so, we look for inspiration in the concept of suture, a concept which means the thread which stitches, or the act of stitching, a wound, but has had a long and intricate journey within psychoanalysis and film studies concerning the issue of identification. Three understandings of the use of culture emerge: suture as cultural misidentification, the evil in the cultural suture, and multiple, repeated cultural sutures. We use these categories to diagnose the use of culture in IIE and beyond, and suggest that the use of culture as multiple, repeated sutures-in other words, a recognition that we constantly fail in describing culture or cultural differences, and that each suture is coloured by its conditions of production, and that we cannot but suture with culture anyway-might be a way forward for cross-cultural research.
Introduction The fact that urban environments are highly stressful has received a lot of attention in recent decades. Green environments, on the other hand, have been found to have highly restorative effects on humans' psychological and physiological indices [1][2][3][4][5][6][7]. Restoration may be defined as the 'renewal of physical and psychological adaptive resources depleted in ongoing efforts to meet the demands of everyday life' [8]. A complementary definition relates to the psychological recovery triggered by environmental factors [9]. Two theories complement each other in explaining restorative effects: stress recovery theory, which focuses on psychophysiological responses, and attention restoration theory, which focuses on the depletion of directed attention that leads to fatigue and cognitive failure [10,11]. Restorative environments physiologically improve the autonomic nervous system balance [12][13][14][15], emotionally reduce stress levels [16] and improve cognitive activity in daily life [17,18]. However, there is no complete isomorphism among the three aspects of physiological, emotional, and cognitive restoration, and the relationships among them remain ambiguous [19][20][21]. One of the more common indices for measuring environmental restoration is the Perceived Restoration Scale (PRS) [1,19,22,23]. These studies measured the differences in PRS values in urban environments [1,24], with the majority pointing to the high PRS scores associated with green environments compared to built-up environments. However, this assertion has not yet been fully substantiated, due to limited studies that are based on small samples and that employ different methodologies and indices [25]. At the same time, not all commonly used response indices are affected in the same way by visits to green environments [25]. Furthermore, new studies highlight the restorative effect of some built-up environments [1,[26][27][28][29][30], as well as small gardens in the city [31].
To allow for restoration in urban parks, a balance is required between enclosure that stimulates a sense of being in nature and openness that allows depth and height of sight, as well as porosity of boundaries that stimulates a sense of safety [31][32][33][34]. Several studies have found that a lack of balance between closure and openness reduces the level of restoration [35][36][37]. Only a few studies have looked at how restoration experiences differ across sociocultural groups [11]. Several studies have shown differences in ethnic groups' physiological responses to environmental nuisances based on lifestyle, exposure to discrimination and, possibly, genetic differences [38][39][40][41]. Furthermore, minority groups may have less access to greenery for cultural or discriminatory reasons [42]. Almost all of these studies looked at emotional and physiological differences between ethnic groups in coping with environmental nuisances. We found only one exception, a study by Saadi et al., which indicated that Arab and Jewish women cope differently with environmental nuisances, as reflected by their emotional, physiological, and cognitive responses [43]. These studies support the hypotheses that argue for conceivable differences among ethnic groups' experiences of restoration in green spaces. Since the PRS appears to be one of the more common indices employed to measure the effect of greenery on human stress and health, it is important to study to what extent the PRS is sensitive to ethnic differences. The fact that Arab and Jewish women's autonomic nervous systems respond differently to environmental challenges [43] begs the question as to whether the PRS is also affected by ethnicity [11,[38][39][40][41]43]. In this research, we ask the following questions: 1. To what extent do women experience deep levels of restoration, as measured by the PRS, in green environments compared to urban environments? 2.
Do Arab and Jewish women differ in their experience of restoration in urban environments and parks? 3. Are there differences between Arabs and Jews in the relative effect of major environmental and social mediators on the PRS? In the present research, we aimed to elucidate the restoration effect provided by different environments on young women of two ethnic groups, Arab and Jewish, and to assess the relative importance of greenery while accounting for different ambient nuisances. We further aimed at evaluating the sensitivity of the PRS in reflecting restorative effects in these two ethnic groups. The purpose of the study is to compare groups of young healthy women of Jewish and Arab origin who live in segregated, small, neighboring towns in Israel in terms of their PRS experiences in central city, residential, home and park environments. Since women and men differ in the way they respond to environmental challenges, with women being more sensitive to environmental challenges compared to men [44,45], we focused our study on healthy young women only. Therefore, the main contribution of the study is in highlighting the effects of ethnicity on the PRS in women of fertile age. --- Background Adherents of attention restoration theory argue that people are highly stimulated in urban environments, demanding high levels of directed attention. People must concentrate their efforts in order to manage a variety of complex sources of information, avoid risks, and then act effectively on the basis of such information. Long periods of concentration on a single task are likely to erode one's ability to direct attention effectively, resulting in attention fatigue. The signs of directed attention fatigue include difficulty in concentrating, increased irritability, and an increased rate of errors when performing tasks that require concentration [46].
The state of directed attention fatigue impairs one's ability to effectively manage surrounding resources and daily demands, which can have devastating consequences. Furthermore, fatigue makes people more susceptible to stress, even if the initiators of stress and directed attention fatigue are different. Consequently, urban dwellers tend to accumulate mental fatigue and exhaustion, which can be released in green environments [10,47,48]. Increasing physical activity, increasing social contacts, reducing stress and paying attention to personal restoration, and reducing exposure to urban environmental stressors such as noise, air pollution, and heat load are some of the possible pathways that link restoration to green environments [49]. The greater the amount of greenery and the closer it is to the built environment, in a way that does not endanger subjects, the greater the restorative effect of greenery [50,51]. However, even brief visits to small urban parks can help with physiological restoration [43,52]. Furthermore, some urban environments, such as monasteries, small gardens and green boulevards, and well-maintained city centers, may encourage some degree of restoration [10]. Personal characteristics may also influence people's coping styles with urban environmental nuisances and restoration in green environments. Korpela and Ylén demonstrated that people of various ages and health levels experience varying degrees of restoration in green environments [53]. Pasanen et al. demonstrated that people who perceive themselves as closer to nature experience higher levels of restoration when visiting green environments [54]. Von Lindern et al. demonstrated that forest professionals experience lower levels of restoration while visiting forests [55], and children who work on farms have been shown to experience lower levels of restoration in parks [56].
Differences in coping styles with urban and restorative environments among ethnic groups may be related to lifestyle characteristics, exposure to discrimination, and, possibly, physiological differences [57]. Diet, level of activity, and clothing are all examples of lifestyle. In terms of diet, some ethnic groups follow a healthy diet consisting of whole grain meal, fish, fruits, and vegetables, while others follow a less healthy diet consisting of red meat and white grain meal [39]. Diets may also differ in terms of sugar, salt, and fat content, which may affect the risk to health [58]. Different ethnic groups may engage in varying levels of activity, which affects their metabolic rate and, as a result, their health risks [59]. Clothing insulation may have an impact on humans' ability to deal with a thermal load [60,61]. The amount of thermal insulation a person wears has a significant impact on thermal comfort due to its effects on heat loss and, as a result, thermal balance. Layers of clothing prevent heat loss and can either help a person stay warm or cause overheating [59]. Dressing styles, on the other hand, are influenced by ethno-cultural traits that, in some cases, transcend the instinct of adjusting to the weather [61]. In terms of discrimination, several studies conducted in the United States suggest that ethnic discrimination may be associated with decreased ability to cope with stress, resulting in less favorable heart rate variability [62]. They also claim that being subjected to discrimination leads to a decrease in self-esteem and an increase in shyness, both of which increase stress and risk to health [62]. However, few experimental studies empirically support the association between ethnic discrimination and an increase in health risks [57,63,64]. Wagner et al. 
studied the effects of self-reported racism on heart rate variability and confirmed Hoggard's results that HF levels were significantly associated with exposure to ethnic discrimination, while LF levels were not. There is some empirical evidence for ethnic-related differences in physiological responses when coping with environmental nuisances. Wagner et al., on the other hand, concluded that genetic effects on coping with environmental nuisances require more evidence. Recently, Saadi et al. demonstrated some ethnic differences in the functioning of the autonomic nervous system in response to environmental challenges, presumably due to physiological differences between Arabs and Jews [42]. In addition to these differences among ethnic groups, deprived ethnic groups may have less access to green environments, either due to a lack of supply of green environments or to cultural restrictions on movement to green environments. According to Saadi et al., Arab women lack access to green environments due to a lack of nearby parks as well as cultural restrictions on women's freedom of movement [42]. Almost all of the preceding studies concentrated on the physiological and emotional effects of environments on ethnic groups' sense of wellness and health risk. However, almost no research has been conducted to investigate the ethnic-related differences in the effects of green environments on the cognitive aspects of restoration. Attention restoration theory [65] describes a potential mechanism for recovery within a restorative environment. Relaxing in an engaging environment is beneficial in allowing attention capacities to recover from directed attention fatigue. Restorative environments should have four characteristics in order to fulfill this restorative function: being away, fascination, coherence, and compatibility [66].
In terms of the first of these characteristics, staying in restorative environments provides people with a sense of being away from daily routines that require directed attention. Forests, mountains, seasides and meadows are all ideal places to get away, in addition to historical sites, museums, etc. that may stimulate the sense of escape. Concerning the second characteristic, fascination, the mechanism of fatigue recovery involves a different type of attention, one that does not necessitate any effort on the part of the person. This 'effortless' attention, also known as involuntary attention, is likely to resist directed attention fatigue and is central to a restorative experience. The natural environment is endowed with fascinating objects that encourage forms of exploration and keep the person perceiving them interested without requiring active effort on their part. The other two characteristics are coherence, which represents a rich and understandable environment that stands on its own, and compatibility, which represents a person's ability to immerse in their surroundings. They work together to create space for further exploration, assisting a person's actions to match what the environment requires and can provide. Even brief encounters of this kind appear to have a restorative effect. Nature forms that can provide restorative benefits to an individual do not have to be more than a few trees or some indication of vegetation. Outdoor benches in a shady location, for example, can encourage people to take a peaceful break. Outdoor sites with places to walk or opportunities to observe flora or wildlife may also present meaningful restorative benefits. The studies that tested a variety of urban and natural environments concluded that park restoration is proportional to the amount of greenery and green space, as well as to the degree of closure provided by built environments, without undermining the sense of security in green areas [50,51].
Aside from these criteria, studies show that historical sites, museums, and restaurants, among other places, can be experienced as restorative environments [10]. However, more research on the restorative effects of green environments on human cognitive functioning is still needed [50,51]. We hypothesize that: 1. Parks are much more restorative compared to urban environments, regardless of the level of activity in them; 2. Ethnic groups may differ in their level of restoration in different environments; and 3. Ethnic minorities may be more affected by social environmental factors and less by physical environmental factors as compared to participants of the ethnic majority. --- Materials and Methods --- Study Population and Location The study's population consisted of 72 young women; 48 were Arab women, who were the core population for the original study, and 24 were Jewish women, who also participated in the original study as a comparison group. They were mothers between the ages of 20 and 35 with similar BMI and middle-class affiliation. They were selected using a convenience sample from two small mono-ethnic towns in northern Israel. All were in good health, did not smoke, and did not take any medications or drugs on a regular basis. The study area comprised two small towns, each with a population of approximately 70,000 inhabitants-the Arab town of Nazareth and the Jewish town of Afula. The towns are about 12 km apart in Israel's northern peripheral region, in a Mediterranean climate. The towns are inhabited by mono-ethnic groups, which avoids confounding inter-ethnic stresses in the study area. Testing density, morphology, and greenery in the studied sites in both cities based on photos showed no significant changes in the main characteristics of the environment between 2016, the year of the original study, and 2021.
--- Instruments and Measurements --- Independent Variables Noise measurements were collected using a Quest Pro-DL dosimeter. They ranged from 40 to 110 dB, and their resolution was 0.1 dB. The data were capped at 110 dB in the few cases where noise levels exceeded 110 dB. During the 35-min visit to each environment, measurements of average noise were taken every minute. The device was worn by the researcher who followed the participants. The data were collected in a systematic manner, saved in a data logger, and transferred to a laptop. For each environment, mean measurements were computed. The heat index was calculated based on temperature, relative humidity, radiant temperature, and wind velocity and direction measurements.
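The capping-and-averaging step described above can be sketched in a few lines. This is an illustrative reconstruction only: the paper does not publish its processing code, and the function name and flat-list data layout are assumptions.

```python
# Sketch of the per-environment noise aggregation described in the text:
# minute-level dB readings are capped at 110 dB, then averaged per environment.
# The data layout (a plain list of minute-level readings) is assumed for illustration.
def mean_noise_db(readings_db, cap=110.0):
    """Cap each minute-level reading at `cap` dB and return the arithmetic mean."""
    if not readings_db:
        raise ValueError("no readings")
    capped = [min(r, cap) for r in readings_db]
    return sum(capped) / len(capped)
```

For a 35-min visit, this would be applied to the 35 minute-level readings recorded in that environment.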
The temperature and relative humidity were measured using Fourier Microlog and Kestrel 3000 devices. Radiant temperature measurements were obtained from the Israeli Meteorological Service and adjusted to account for the amount of shade in each environment. The researcher calculated the mean physiological equivalent temperature index [67] for each environment. Carbon monoxide was measured using a CO sensor attached to a Dräger Pac III, a portable device that detects changes in electrical potential during oxidation. The data were collected in a data logger, from which the mean results were calculated for each person in each environment. The accuracy of the devices was determined using the calibration methods provided by the manufacturers and through comparisons with results from the Ministry for the Protection of the Environment's permanent station [52]. --- Dependent Variable The Perceived Restoration Scale (PRS), as proposed by Hartig et al.
in its revised version, was assessed using a questionnaire containing 26 statements that measure the subject's experience of the environment's restorative qualities. The 26 statements were rated on a 7-point Likert scale of agreement ranging from 0 to 6. The psychometric properties of this scale have been previously reported [68]. The statements were grouped into four subscales: 1. being away, measured with five items; 2. fascination, measured with eight items; 3. coherence, measured with four items; and 4. compatibility, measured with the remaining items. A mean score was calculated for each subscale and for the total perceived restoration scale (PRS). The PRS and the social mediators' inventory were administered to the participants at each station; they filled in the questionnaires while sitting on a bench or on the grass during their stay at the station. --- Mediating Variables Environmental and sociodemographic indices were used as mediating variables. The environmental measurements were collected in situ, where the participant stayed during the tests. The sociodemographic characteristics were derived from a pre-experiment questionnaire [43]. --- Procedure The field experiment was divided into 12 sessions, with 6 participants each. Both the Arab and Jewish groups went to Nazareth, an Arab town, and Afula, a Jewish town. The women visited three ordinary outdoor environments in each town: the busy town center. The pictures were taken during quiet hours, but the tests were performed when the center was highly crowded. In Nazareth, transportation moves slowly in the rush hour and drivers intensively use their horns.
In Afula, the traffic was intensive but unobstructed; in both cities the buildings housed retail stores on the ground floor and residential uses on the upper floors; a quiet residential neighborhood, of 3-4 stories with about 10% greenery in Nazareth and of 6-8 stories with approximately 20% of the space covered by greenery in Afula-in both neighborhoods the streets were quiet, as the transportation was mainly local; and the towns' main parks, both similar in size and vegetation cover. Schnell and Saadi reported that the parks were structurally similar, with a similar balance of closure and openness. The dominant installations in the centers of the parks were playgrounds for young children, and in between, the grass and the trees were surrounded by pedestrian paths and benches. Prior to the outdoor procedure, the women were monitored in their homes with their children but not with their husbands. The town centers of Nazareth and Afula are both crowded, with Nazareth's center being noisier and less green. The residential streets chosen in both towns were quiet. The parks were roughly the same size and were covered in trees and other greenery that provided natural shade. To eliminate the effect of the order of sites in the field experiment, the order of visits was randomly assigned into 12 sessions of 6 women each. The sessions took place between January 2015 and February 2016. The sessions began at 11:00, allowing for a 1-h familiarization practice with the devices. The trials were held between 12:00 and 18:30 and consisted of visiting 4 intra-ethnic and 3 inter-ethnic sites, 35 min each.
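Returning to the PRS instrument described under "Dependent Variable", its scoring (per-subscale means over 0-6 Likert ratings, plus an overall mean) can be sketched as follows. The item-to-subscale mapping below is a made-up example for illustration, not the actual 26-item key of the scale.

```python
# Illustrative PRS scoring: items rated 0-6 are averaged per subscale and overall.
# The subscale membership passed in here is an assumed example, not the real item key.
def score_prs(responses, subscales):
    """responses: dict mapping item id -> rating (0..6).
    subscales: dict mapping subscale name -> list of item ids.
    Returns a dict of subscale means plus the overall 'total' mean."""
    scores = {name: sum(responses[i] for i in items) / len(items)
              for name, items in subscales.items()}
    scores["total"] = sum(responses.values()) / len(responses)
    return scores
```

With the real instrument, `subscales` would map "being away" to its five items, "fascination" to its eight, "coherence" to its four, and "compatibility" to the remainder.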
For each session, the same protocol was followed. The measurement devices were calibrated and tested for accuracy prior to each session, in accordance with the manufacturers' instructions. The researcher visited the participants at their homes the day before each session, walked them through the informed consent form, completed an initial socio-demographic questionnaire, explained the research protocol, and demonstrated how the devices worked. On the day of the session, around 11:00 a.m., the participants operated the measurement devices on their own and completed the questionnaire. The participants were transported to their first station by the researcher. The researcher moved between the six participants in the group to ensure that the devices were operational and properly placed, and that the questionnaires were completed in accordance with the instructions and protocol. The questionnaire-based tests were administered by the researcher first at each woman's home and then again at each of the 6 outdoor stations, while sitting 10-20 m apart in the shade. The memory tests were also individually administered during this interval. Environmental measurements were taken at each outdoor site using three devices worn by the researcher accompanying the women. Prior to each site visit, the women remained in an air-conditioned car for a 15-min 'washout' period. The researcher provided assistance as needed and documented any unusual events that could have an impact on the quality of the data collected. The Tel Aviv University Ethics Committee in Israel granted ethical approval for this study.
Before the experiment began, each participant provided informed consent by signing the appropriate forms at her home.
These forms provided a thorough explanation of the research's goals and objectives, as well as the experiment's procedure. The researcher explained all of the measured indices in the participant's native language. The questionnaires described below were filled out by participants in their native language. --- Statistical Analyses In total, 504 PRS measurements were taken. To evaluate the grouping level by women and the applicability of a multilevel model, the study evaluated the intraclass correlation coefficient (ICC). The ICC was greater than 0. Even though the magnitude of the ICC is modest, we determined that multilevel modeling was the most appropriate method to adopt. The variance component for the Level 2 intercept was significant. The analysis began by calculating the effects of the environment and affiliation on the PRS. Following that, a stratified model was used to focus on differences in modes of dealing with environmental challenges based on ethnic affiliation. We calculated ANOVAs relating the affiliation groups and environments, on the one hand, to the PRS on the other. When the mediation analysis assumptions are met, the mediation proportion presented is the proportion of the change in mean PRS attributable to elevated levels of potential environmental nuisances [69]. Mediation analysis was performed to calculate the percentage of the association between environments and PRS explained by each of the mediators. --- Results Before we present the results, we present the reliability of the partial and overall indices of the PRS. The reliability of the perceived restoration scale in our case study is presented in Table 1. Each of the subscales of the test is highly reliable, with Cronbach's alpha values between 0.90 and 0.93. This is compared to the internal reliability alpha of 0.79 that has been previously reported [68].
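The ICC check that motivated the multilevel model can be illustrated with a minimal one-way random-effects ICC(1) computation for balanced data. This is a sketch under the assumption of an equal number of repeated PRS measures per woman; it is not the authors' estimation code, which would have come from fitting the multilevel model itself.

```python
# ICC(1) from a one-way ANOVA decomposition, for balanced groups:
# each inner list holds the repeated PRS measures of one woman (the Level 2 unit).
def icc1(groups):
    """Return ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW) for balanced groups."""
    n = len(groups)          # number of women
    k = len(groups[0])       # repeated measures per woman (assumed equal)
    grand = sum(x for g in groups for x in g) / (n * k)
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 0 would indicate little clustering of measurements within women; a modest positive ICC, as reported above, justifies modeling the woman-level intercept explicitly.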
The PRS total and subscale means as related to ethnic and environmental factors are presented in Table 2 and Figure 6, and the ANOVA F values by environmental and ethnic factors are presented in Table 3. In line with the first hypothesis, highly significant differences among environments were recorded. Extreme differences were recorded, in particular, between parks and urban built environments. In the total sample, mean PRS values in built outdoor environments reached levels of approximately one or less, whereas in parks, levels of 5.6 were recorded. In contrast, the difference in PRS values between busy city centers and quiet residential areas was less than 1. Small, albeit significant, differences in the PRS results were found, especially in the coherence and compatibility subscales of park restorativeness. The PRS scale also distinguished between the experience at home and at the park, identifying the park as the most restorative environment for both Jews and Arabs. In respect to the second hypothesis, it appears that Jewish women enjoy slightly higher mean PRS scores, mainly at home and in residential environments, while Arab women feel a stronger level of restorativeness in central city environments. They also differ in PRS subscale scores: while Jewish women experience a stronger sense of being away and fascination, Arab women experience a stronger sense of coherence. Table 3 shows that the differences between Arabs' and Jews' sense of restorativeness are significant for the total mean PRS and the "being away" subscale only. However, the interaction between environment and ethnicity explained between 17 and 53 percent of the variability.
Post-hoc tests for the environmental factor show that all differences are significant at the 0.0001 level, except for intra- and inter-ethnic affiliation in residential environments. The differences between the environments were highly significant, owing primarily to the most pronounced difference between the mean total PRS in parks and in the rest of the environments. The comparison of the ethnic-related restorative experience in the alien environments between the two groups reveals significant differences. Both Jewish and Arab women experience about the same reduction in PRS score when reaching out to alien inter-ethnic environments. Among Arab women, the differences are much more pronounced. In testing the effect of the mediating factors on the distribution of PRS scores for all environments, in line with the third hypothesis, we applied multiple-regression analyses. The independent variables were physical aspects of the environment-exposure to thermal, noise and carbon monoxide loads-and social environmental variables: social discomfort, sociodemographic status, status in the family, freedom of movement to parks and access to gardens. Four variables had a significant effect on PRS, in the following order: social discomfort, noise, thermal load and, last entered, ethnic affiliation. Table 6 compares the effect of the mediating variables on Arab and Jewish women's PRS scores. Both regressions presented strong correlation coefficients between the mediators and PRS scores. Four variables affected PRS scores in both ethnic groups.
However, among Arabs, social discomfort and noise were the stronger mediators, while among Jews, thermal load was the main mediator. --- Discussion The sense of restoration as experienced in urban environments has been investigated previously [3-6,12,70,71]. Our study, addressing the intricate effects of ethnicity and environmental factors on perceived restoration, builds on previous research using PRS indices and adds to the discussion by focusing on ethnic differences in experiencing restoration in urban environments. The methodology employed in the present investigation ascertained the similarity of both residential and park environments in the intra- and inter-ethnic sites. Parks in Jewish and Arab towns are similar in size, greenery and shade, facilities, and maintenance levels-factors that may influence park restoration [50,53,72]. Therefore, the effect of social factors can be uniquely identified when analyzing ethnic differences in response to the studied environments. In accordance with our first hypothesis, we demonstrate that urban parks are highly effective environments for a sense of restoration, a conclusion broadly confirmed by several studies [73,74]. Furthermore, we argue that built environments may have perceived restorative effects on humans [75]. Our findings suggest that ordinary urban environments-residential areas and city centers-promote very low levels of restoration. This is consistent with previous literature [24,71]. It does not, however, rule out the possibility of specific urban built environments gaining restorative power. One study demonstrated that restoration in urban environments is directly associated with the quality of the architecture and design of the environment [75]. Furthermore, particular urban environments may exhibit high restorative effects: for example, panoramic, historical, and recreational places were found to stimulate a sense of restoration [76,77].
Similarly, museums and churches appear to contribute to a sense of restoration [65]. The main conclusion of this discussion is that the restorativeness of urban environments is determined by their function, aesthetic qualities, and amount of greenery. In line with our second hypothesis, we find significant, albeit minor, differences in Jewish and Arab women's sense of restoration in urban and green environments. Although it appears that, to some extent, Jewish women feel a stronger sense of restoration than Arab women, the interaction between ethnicity and restoration was most notable in the PRS subscales of fascination, compatibility, and coherence. This is not surprising, given that social factors, from which minorities tend to suffer more, appear to have a strong influence on human stress and health risks [78,79]. Our results suggest that, while visiting parks, Jewish women experience more fascination, whereas Arab women enjoy a greater sense of being away. It is conceivable that parks, for Arab women, are first and foremost an "escape" from the parental stress they experience at home [42]. We previously reported that, because they arrive at the park in groups, oftentimes with their children, they do not have time to be captivated by the park's natural scenery [80]. In terms of PRS, Arab and Jewish women report similar feelings when crossing ethnic boundaries. They do, however, differ in their perceptions across the various PRS subscales: Arab women lose restorativeness in all respects in Jewish parks, whereas Jewish women lose their sense of coherence and compatibility in Arab parks. Schnell and Saadi exposed the strong effect of visiting Jewish parks on Arab compatibility [80]. The findings of that report suggest that Arabs visit Jewish parks because there are few parks in Arab towns. They testify that they come in large groups and demonstrate their Arab identity by listening to loud Arab music and behaving extrovertly.
In doing so, they mark the park with their identity as Arabs. In turn, Jews are offended by such behaviors and develop negative attitudes towards the Arabs' presence in the parks. Negative reactions to minorities visiting majority parks have been documented in other cases [73,81-86]. In accordance with the third hypothesis, Arab and Jewish women also differ in their sensitivity to the mediating variables. In general, two physical factors (noise and thermal load) and two social factors (sense of social discomfort and ethnic affiliation) accounted for more than 70% of the variability in PRS. While Arab women were more sensitive to social discomfort and noise, Jewish women were more susceptible to thermal load. Arab women's sensitivity to noise is understandable, since Arab outdoor environments in Nazareth reached average values of 73 dB, and the average noise level at home and in the city center reached 90 dB. In comparison, the average noise level in Afula reached 63 dB, with average levels at home of 49 dB and documented levels of 71 dB in the city center. Against a standard threshold of 65 dB, Arab women are exposed to stressful noise in all the visited environments, including their homes, whereas Jewish women are mostly exposed to noise levels below the threshold. Arabs' sensitivity to social discomfort may reflect their position as a disadvantaged minority in Israel. A similar trend has been identified in the effects of social discomfort on HRV in studies performed in the US and Israel [38,63]. Although all PRS scores in built-up environments are low, they are even lower in Arab environments. These differences in PRS scores in built-up environments possibly reflect the chaotic structure and the lower maintenance of Arab towns in Israel compared to Jewish towns. One of the participants commented: "In neighboring Jewish towns, even the cemeteries look nice with gardens, unlike our chaotic towns".
However, a more detailed study focusing on the environmental factors that affect restoration is required in future work. --- Limitations The study has the following limitations. While gender and age may influence human responses to environments, our study focuses solely on young females. More research is needed to test a broader range of ages, as well as males. The study could also benefit from testing the restorative effect of various other urban built-up environments. Similarly, it is advisable to test the differential restorative effect of parks with different structures. --- Conclusions This study confirms that green environments are restorative, whereas ordinary urban environments, with the exception of a few unique built-up sites, are much less so. It appears that the restoration provided by greenery, as experienced by healthy women and reflected by a reliable questionnaire, transcends ethnicity. It is worth noting that the cognitive processes contributing to perceived restoration differ significantly when ethnic boundaries are crossed and different environmental factors are present. Arabs are more affected by factors such as noise and discrimination, whereas Jews are primarily affected by thermal load. The strong effect of noise on Arab women is explained by Arab environments being much noisier than Jewish environments. At the same time, Arab women, as a deprived minority, are more sensitive to crossing ethnic boundaries. Furthermore, it appears that the park's restorative effect is ethnic specific: the response of Arab women is attributed mainly to the experience of "being away" from the stresses at home, whereas among Jewish women the restorative response is attributed mainly to fascination with the park environment. --- Data Availability Statement: Data are available by direct request from the corresponding author. --- Author Contributions: D.S.
was involved in initiating the study, collected the data and operated the devices, led the field experiment, and designed the questions and the first version of the manuscript. She performed the analysis, analyzed the data, and was accountable for all aspects of the work. I.S. led the initiation of the study and the design of the research, revised the first draft of the manuscript, and revised the manuscript critically for important intellectual content. E.T. was involved in developing the main project, led the interpretation of the aspects concerning health implications, and revised the manuscript critically for important intellectual content. All authors participated sufficiently in the work, took public responsibility for appropriate portions of the content, and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors have read and agreed to the published version of the manuscript. Funding: No external funding. --- Institutional Review Board Statement: The study was approved by the Tel Aviv University ethics committee. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Over the last few decades, considerable attention has been paid to restorative environments that positively affect human psychological health. These studies show that restorative environments affect human beings emotionally, physiologically, and cognitively. Some studies focus on the cognitive effects of exposure to restorative environments. A widely used index that measures the cognitive response is the Perceived Restorativeness Scale (PRS). Most studies employing the PRS have examined differences in human cognitive response between types of urban environments, mainly urban versus green ones. We use Hartig's questionnaire to expose differences between types of urban environments and between ethnic groups. Variances between Arab and Jewish women were calculated in four environments: home, park, residential, and central city environments. The effects of intervening variables, namely exposure to thermal, noise, and CO loads and social discomfort, were tested. We find that, unlike typical urban built-up environments, green areas are highly restorative. Furthermore, differences in the restorativeness of different urban environments are small though significant; these differences depend on their function, aesthetic qualities, and amount of greenery. Ethno-national differences appear to affect the experience of restoration. While both ethnic groups experienced a strong sense of restoration in parks, Jewish women enjoyed slightly higher levels of restoration, mainly at home and in residential environments, compared to Arab women, who experienced a higher sense of restorativeness in central city environments. Jewish women experienced a higher sense of being away and fascination. Of the intervening variables, social discomfort explained 68 percent of the experience of restoration, noise explained 49 percent, thermal load explained 43 percent, and ethnicity 14 percent of the variance in PRS.
Introduction Inclusion in general educational frameworks, including in general physical education settings, has been implemented globally in Sweden [1], South Korea [2], Greece [3], Japan [4], and Saudi Arabia [5]. Inclusion is the educational philosophy of teaching students with disabilities together with their peers without disabilities in the same classes and meeting the necessary and essential requirements for all students to be successful [6]. Because of this global movement to promote inclusion, the number of students with disabilities attending inclusive physical education classes has been increasing year on year [2]. Indeed, UNESCO stated that students with disabilities must be allowed to join inclusive, safe, and adapted physical education classes [7]. Although such educational policies have been implemented widely in general schools, several factors play a critical role in the success of such initiatives [8,9]. Positive attitudes among stakeholders, such as students without disabilities, are an essential element in achieving successful inclusion in physical education [10-12]; negative attitudes remain the main obstacle to facilitating successful inclusion [13]. To explain, negative attitudes towards inclusion tend to cause students without disabilities to avoid interaction with peers with disabilities and to disrupt such interactions [8]. Attitudes are defined as the emotional tendency of a person towards others, places, tasks, and objects [14]. Attitudes can be understood from a social learning perspective as the result of the interactions among individual-, environmental-, task-, and activity-related factors that together determine a person's behavioural motivations, which then affect their outward behaviours [15]. The attitudes of students without disabilities towards students with disabilities are viewed as crucial in influencing the extent to which inclusion is achievable in a particular educational context [16].
Students without disabilities have a crucial role to play in facilitating the acceptance, dignity, and respect of their peers with disabilities; having positive attitudes towards peers with disabilities appears to bolster students without disabilities' intentions to seek to interact through play with their peers with disabilities [17]. Nonetheless, in non-Western societies, especially in Arabic-speaking countries, limited attention has been paid to the roles of other significant stakeholders in terms of their attitudes toward inclusion and their attitudes towards peers with disabilities [18,19]. Understanding the attitudes of students without disabilities toward the inclusion of students with disabilities will help to address the obstacles and difficulties related to achieving successful inclusive practices [20]. Therefore, investigating the attitudes of students without disabilities toward their peers with disabilities is crucial to support successful and effective inclusion practices [21,22]. Many published studies on inclusion in physical education settings, e.g., [18,19,23,24], showed that inclusive education provides a range of benefits for students with and without disabilities. These include opportunities to learn and practice social skills, understand the importance of practicing physical exercise to promote physical fitness and motor skills, develop relationships with one another no matter the level of ability, and behave appropriately to one another [24-27]. Specifically, with respect to students with disabilities, Kodish et al. [28] found that students with disabilities did not influence the time their peers without disabilities spent on physical activity when both were educated in the same physical education classes.
Additionally, as a result of such inclusion, students without disabilities learned to interact with and approach students with different abilities [29] and to understand students' strengths and weaknesses [30], became more aware of the needs of students with disabilities [27], and learned more about their peers with disabilities [19]. Thus, to ensure that students with disabilities successfully attain meaningful experiences in their physical education classes, it is essential to secure the contribution of their peers without disabilities to facilitate a positive and productive context for learning [31]. There is a recently developed body of research examining students without disabilities' attitudes towards the inclusion of peers with disabilities and the factors associated with such attitudes [9,32]. For example, a study found that students who had contact with a peer with a disability or had a family member with a disability had more positive attitudes towards students with disabilities compared to students who were simply attending an inclusive class and had neither direct contact with peers with disabilities nor a family member with a disability [33,34]. However, the opposite was found in studies where students were attending general schools: participants without disabilities reported more positive attitudes toward their peers with disabilities than those attending inclusive schools [11,35]. This indicates that placing students with and without disabilities in the same classroom does not necessarily guarantee more positive attitudes toward peers with disabilities. Macmillan et al. [36] found in a review of the literature that out of 35 studies analysed, 22 found positive attitudes following contact with people with disabilities. The contact-attitude relationship was negative in only two studies. Moreover, Armstrong et al.
[37] showed that the quality of contact outweighs its frequency, with positive attitudes being more strongly related to quality contact with peers with disabilities than to frequent contact [38]. Gonçalves and Lemos [33] examined the attitudes of Portuguese students without disabilities toward students with disabilities. Their findings showed that female students had more positive attitudes than males toward their peers with disabilities. Moreover, a systematic review by Hutzler [39] reported that gender was the most significant predictor of attitudes toward inclusion, with females expressing more positive attitudes toward their peers with disabilities than males. However, an impact of gender was not reported by Arampatzi et al. [35]. In Spain, Rojo-Ramos et al. [40] reported significant differences between the two research cohorts: females and students from rural schools expressed more positive attitudes toward their peers with disabilities in physical education classes. Additionally, because the type of school may play a role in students' attitudes towards their peers with disabilities, it is important to examine the influence of this variable. Alnahdi's [20] study illustrated that older Saudi students without disabilities expressed more positive attitudes towards their peers with disabilities than younger students; having a relative with disabilities did not affect such attitudes. Another study, however, reported that the age of students had only a small effect on the attitudes of students without disabilities toward the inclusion of students with disabilities [41]. Additionally, the grade of students without disabilities was not significantly associated with their attitudes toward the inclusion of students with disabilities [42].
Even though there has been increasing global attention paid towards improving educational practices to benefit students with disabilities, few studies have investigated the attitudes of non-Western students without disabilities towards their counterparts with disabilities in physical education contexts, as well as the factors that underpin these attitudes, leading to the following questions: Do Saudi students without disabilities generally have positive attitudes toward their counterparts with disabilities? What exogenous factors predict the attitudes of primary school students without disabilities toward their peers with disabilities in physical education? This study aimed to investigate the general attitudes of Saudi students without disabilities towards their peers with disabilities and examine the linkages between the eight target research variables and students' attitudes toward their peers with disabilities in the physical education framework. Such baseline data are crucial to enable Saudi Arabian educational stakeholders to develop and modify inclusive education interventions and policies to achieve full inclusion for students with disabilities in physical education settings. Finally, it is hoped that the findings will provide novel empirical data to facilitate inclusive educational practices at the local level. --- Materials and Methods --- Procedure After obtaining ethical approval from the Research Ethics Committee at King Faisal University, the participants were invited to join the study. The informed consent forms were signed by parents or legal guardians after reading the information sheet that described the aims of the study; instructions for completing the questionnaire were also provided. The questionnaire was created using Google Forms so that the participants could complete it online by clicking a URL link. The participants were informed of their right to withdraw from the study at any time.
All data were recorded anonymously. --- Instruments 2.3.1. Demographic Form Data on the following eight factors were collected from participants: gender, age, type of school, school location, whether they had a family member with a disability, whether they had a friend with a disability, whether they had a classmate with a disability, and whether they had played with a person with a disability. --- Students' Attitudes Scale To examine the students without disabilities' attitudes toward their peers with disabilities in physical education classes, the Scale of Attitudes toward Students with Disabilities in Physical Education-Primary Education was used [43]. The scale consists of four items: "I prefer not to interact with people with disabilities"; "I would avoid doing classwork with a person with a disability"; "I would prevent a person with a disability from joining my team"; "I would not propose a person with a disability as captain of my team". A five-point Likert scale scoring system was used for all four items; it ranged from 1 to 5, and the sum of the four items was taken as the SASDPE-PE score, which reflects the participants' overall attitudes toward their peers with disabilities. Because the four items were negatively worded, a higher score indicates more positive attitudes by students without disabilities toward their peers with disabilities. As the SASDPE-PE scale is only available in Spanish [43], and the participants in the present study only speak Arabic, it was necessary to translate it into Arabic. The translation process was performed using the bilingual method [44] by three bilingual physical education professors. The Spanish items were translated based on the meaning of each item rather than verbatim. The accuracy of the Spanish and Arabic versions was then compared, and the necessary changes were made. No item was removed during the translation process.
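The scoring rule described above (sum of four 1-5 responses, higher total = more positive attitude) can be sketched as a small validation-and-sum function. The Likert anchors are an assumption here: the text does not state which end of the 1-5 range corresponds to agreement, so the sketch assumes 1 = strongly agree with the negative statement and 5 = strongly disagree, which is consistent with a higher sum indicating a more positive attitude:

```python
def sasdpe_pe_score(responses):
    """Sum the four SASDPE-PE item responses (each on a 1-5 Likert scale).

    All four items are negatively worded, so under the assumed anchoring
    (1 = strongly agree ... 5 = strongly disagree) a higher total indicates
    a more positive attitude toward peers with disabilities. Range: 4-20.
    """
    if len(responses) != 4:
        raise ValueError("SASDPE-PE has exactly four items")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("each response must be on the 1-5 Likert scale")
    return sum(responses)

# A respondent who strongly disagrees with every negative statement
print(sasdpe_pe_score([5, 5, 5, 5]))  # → 20, the most positive possible score
```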
--- Reliability and Validity The construct validity of the SASDPE-PE scale in Arabic was examined via principal component analysis and exploratory factor analysis; reliability was examined using Cronbach's alpha. The SASDPE-PE scale was validated for students aged 9-12 from the original Spanish version using a sample of 87 students who had the same characteristics as the current participants. The results indicated that the Kaiser-Meyer-Olkin sample adequacy measure was 0.782, the Bartlett test of sphericity was statistically significant, and all communalities were >0.30. The item scores were then subjected to exploratory factor analysis, which identified a single component with 72.088% of the total variance explained, confirming that the four items represented a one-dimensional construct. The factor loading analysis indicated that all items in the domain made a significant contribution. The Cronbach's alpha results indicated good reliability [45]; the correlational analysis found moderate and significant correlations for all items with no overlap [46]. Thus, the Arabic version of the SASDPE-PE scale was valid and reliable for assessing attitudes of Saudi students without disabilities toward students with disabilities in physical education settings. --- Data Analysis Indices of normality were investigated for the continuous variables included in the study. Variance inflation factors were used to test for multicollinearity; correlation analysis among all variables considered in the study was carried out, and dummy variables were created when necessary. To deal with the unequal sample sizes, the item score and overall attitudes score data were also tested for equality of variance using Levene's test.
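The Cronbach's alpha check described above compares the sum of the individual item variances with the variance of the total score. A minimal sketch with hypothetical responses to the four items (illustrative numbers, not the validation sample's data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of item-score columns, where
    item_scores[i][j] is respondent j's score on item i.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(item_scores)                                 # number of items
    item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]   # per-respondent sums
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses to the four SASDPE-PE items (five respondents each)
items = [
    [5, 4, 2, 5, 3],
    [5, 4, 1, 4, 3],
    [4, 5, 2, 5, 2],
    [5, 3, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))
```

When items rise and fall together across respondents, as in this toy sample, the total-score variance dwarfs the sum of item variances and alpha approaches 1; values around 0.7 or above are conventionally read as acceptable reliability.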
A one-way MANOVA was performed for subsequent comparisons between participants stratified by gender, age, type and location of school, having a family member with a disability, having a friend with a disability, having a classmate with a disability, and having played with a person with a disability; Bonferroni's post hoc test was used to perform multiple comparisons [47]. A stepwise linear regression model was applied to analyse the association of the SASDPE-PE score with the significant independent variables from the first research question. Statistical analysis was performed using SPSS V.26, and the significance threshold was set at p < 0.05. --- Results --- Attitudes toward Students with Disabilities in Physical Education A total of 972 students completed all phases of the study, including 488 boys (M = 10.6 years) and 484 girls (M = 10.6 years). Most participants were studying at public schools and at urban schools, and 220, 223, 282, and 247 of them were 9, 10, 11, and 12 years old, respectively. One-way MANOVAs were conducted to explore the group differences on the four items from the SASDPE-PE and the overall score. Results showed significant differences by type of school and for those who had a family member with a disability or a friend with a disability. However, more positive overall attitudes toward peers with disabilities were only noted among participants attending public schools compared to participants attending private schools, and among participants who had friends with disabilities compared to those without. No significant difference was noted in the overall attitudes toward peers with disabilities among participants who had a family member with a disability compared to those who did not. There were also no significant differences in any of the four SASDPE-PE items or in the overall score between participants stratified by gender, school location, having a classmate with a disability, or having played with someone with a disability.
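The Bonferroni post hoc procedure mentioned above guards against inflated Type I error across many pairwise comparisons by multiplying each raw p-value by the number of comparisons (equivalently, dividing the alpha level by that number). A minimal sketch:

```python
def bonferroni_adjust(p_values):
    """Bonferroni-adjust raw p-values from multiple comparisons:
    multiply each by the number of comparisons, capping at 1.0."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]

# Hypothetical raw p-values from three pairwise group comparisons
raw = [0.004, 0.020, 0.300]
adjusted = bonferroni_adjust(raw)
# Only comparisons whose adjusted p stays below 0.05 remain significant
significant = [p < 0.05 for p in adjusted]
print(significant)
```

With three comparisons, a raw p of 0.020 is no longer significant after adjustment (0.060 > 0.05), which is exactly the conservatism the procedure trades for its error control.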
The MANOVA analysis also revealed significant age group differences in the overall attitudes of participants towards their counterparts with disabilities on several items. Specifically, compared to the other three age groups, 11-year-old participants reported less positive attitudes toward peers with disabilities in physical education classes on items 1, 3, and 4 and on their overall scores; for item 2, 10-year-old participants reported more positive attitudes towards peers with disabilities compared to the other three age groups. Regarding the items on having a family member with a disability, having a friend with a disability, having a classmate with a disability, and having played with a person with a disability, most of the participants responded "no". Participants from public schools and those who had a friend with a disability were more likely to interact and do class work with a peer with a disability. Participants who had a friend with a disability also preferred to do class work with peers with disabilities. Finally, students without disabilities in public schools were more likely to appoint someone with a disability to lead their team than those in private schools. Note: Results are presented as mean; NS = not significant; * p < 0.05; ** p < 0.01; *** p < 0.001 significant differences between the groups; !! p < 0.001 differs from the 9-year-old group; ¶¶ p < 0.001 differs from the 10-year-old group; ♣♣ p < 0.001 differs from the 11-year-old group; ♠ p < 0.01 differs from the 12-year-old group. --- The Association of the SASDPE-PE Score with Independent Variables Stepwise linear regression was conducted with the only significant independent variables from the first research question, namely age, type of school, and having a family member or a friend with a disability. The analysis revealed that, on average, 11-year-old students scored 1.261 points lower on the SASDPE-PE scale than their 12-year-old peers.
However, compared to being a private school student, being a public school student was associated with attitudes toward peers with disabilities that were 1.478 points more positive. Finally, having a friend with a disability was also associated with attitudes towards peers with disabilities that were 0.670 points more positive, with an overall R-squared of 5.9%. --- Discussion This study aimed to investigate the attitudes of Saudi students without disabilities towards their peers with disabilities and the association between selected student-related variables and those attitudes. In general, the findings indicated that Saudi students without disabilities expressed positive attitudes towards their peers with disabilities. Significant differences were evident between the participants for some variables. Stepwise linear regression also revealed that age, school type, and having a friend with a disability were significantly associated with attitudes toward peers with disabilities, with an overall R-squared of 5.9%. This low percentage is expected given the situation of people with disabilities in Saudi Arabia. Indeed, although the state guarantees the rights of citizens and their families in times of emergency, illness, disability, and old age, there has been a growing tendency to view disability through the medical model rather than the social model [5]. People with disabilities are still stigmatized by their family members, who associate disability with a kind of powerlessness and lifelong dependence, so the person with a disability is isolated at home, excluded from social gatherings, and sometimes forbidden from family visits [48,49]. Although significant differences between the participants in the current study were not evident for some of the selected student-related variables, others are of significant interest to researchers.
For example, despite no significant differences existing between male and female students, boys reported more positive attitudes toward students with disabilities than girls. These interesting findings are inconsistent with most previous studies, e.g., [19,33,39]. In particular, Gonçalves and Lemos [33] examined Portuguese students without disabilities' attitudes toward peers with disabilities and reported that female students expressed more positive attitudes than males. Nonetheless, the current study's finding was unsurprising, because Saudi girls have limited experience of physical education classes: only in 2018 did Saudi Arabia start to provide physical education classes for girls in public schools. Therefore, this variable must be taken into account when examining female students' attitudes towards peers with disabilities in physical education contexts in Saudi Arabia. In terms of the variable age, although its effect is not consistent across the four items, the data showed that 9- and 10-year-old students express relatively more positive attitudes toward their peers with disabilities than 11- and 12-year-old students. Although there is no agreement in the literature in this regard [8], the current study's findings were consistent with the findings of Blackman's [41] and, more recently, Di Maggio et al.'s [50] studies. The latter reported that younger Barbadian and Italian students reported more positive attitudes toward peers with disabilities than older students. However, the current study's findings were inconsistent with those of Alnahdi [20] and, more recently, Li et al. [51], which reported that older students expressed more positive perspectives and attitudes toward the inclusion of their peers with disabilities than younger students. Nonetheless, several studies reported non-significant differences in students without disabilities' attitudes toward students with disabilities based on participant age [21].
As there is no consistency on the impact of age on students without disabilities' attitudes toward peers with disabilities, further investigation of this topic is required. In terms of the variable type of school, the current study's findings indicated that participants attending public schools expressed more positive attitudes toward peers with disabilities than those attending private schools. This may be accounted for by the different educational policies and curricula of public and private schools, which could play a critical role in students' attitudes toward disability. In other words, public schools in Saudi Arabia tend to implement the policies of local education authorities and follow standard curricula set out by local government; private schools, however, are more likely to implement international educational policies and curricula, which may focus on specific subjects such as the sciences. It could also be a result of the interactions between students with and without disabilities in public schools compared to private schools, because the former include more students with disabilities than the latter. Nonetheless, in line with the current study's findings, a Saudi study by Al-Salim [52] indicated that public school students without disabilities reported positive attitudes toward their peers with disabilities in physical education settings. The findings on the variable school location highlighted that participants attending schools in rural areas demonstrated similar attitudes toward peers with disabilities as participants attending schools in urban areas. Although there is limited research on the differences between urban and rural students' attitudes towards peers with disabilities, a recent study by Rojo-Ramos et al. [40] found that Spanish students without disabilities attending rural schools reported more positive attitudes towards peers with disabilities in physical education settings than those attending urban schools.
Moreover, in contrast to the current study's findings, Rojo-Ramos et al. [40] reported a significant difference between the attitudes of students without disabilities in rural and urban areas toward peers with disabilities. Although Rojo-Ramos et al. [40] did not provide an explanation for this finding, in the Saudi case, one potential reason that participants in rural areas reported attitudes toward those with disabilities similar to those of participants from urban areas is that the former, despite having fewer opportunities to meet or interact with many students with disabilities in their physical education classes or schools, maintain high levels of contact with their peers with disabilities [38]. Therefore, the current study's findings were inconsistent with previous research reporting that interaction with peers with disabilities is a significant predictor of positive attitudes among students without disabilities [12]. In fact, contrary to the findings of Majoko [53], a large number of students with disabilities attending physical education classes has been identified as a significant obstacle to successful inclusion. Alnahdi et al. [54] reported that the quality, rather than the duration, of exposure was the most important factor affecting attitudes toward people with disabilities; meaningful, close, and intimate engagement with people with disabilities is essential. These authors also suggest that early intervention is necessary to improve access not only quantitatively but also qualitatively. Villages are agglomerations of human settlements, often located in rural areas, with populations of between 6000 and 15,000 inhabitants [55]. It therefore appears logical that class sizes in rural schools tend to be very small. Finally, the current study's findings illustrate that participants who had a friend with a disability expressed more positive attitudes toward students with disabilities than those who did not.
Friendship between students without disabilities and those with disabilities has been identified as a significant predictor of the former's attitudes toward the latter. In line with the current study's findings, a study by Blackman [41] reported that having a friend with a disability positively impacted students without disabilities' attitudes toward their peers with disabilities. Campos et al. [56] indicated that students who reported having a close friend with a disability expressed their acceptance of having a classmate with a disability in their physical education classes. In support of this, a positive friendship between students with disabilities and peers without disabilities is likely to benefit both groups in physical education settings [19]. Although such friendship has a positive effect on students' attitudes toward disability, several other elements may also play a role in such attitudes. For instance, Olaleye et al. [57] indicated that the friendship factor did not affect boys' attitudes toward disability, but girls who had a friend with a disability reported more positive attitudes. Thus, these results suggest that there is a complex and multifaceted relationship between friendship and attitudes toward disability [21]. Despite its merits, this study has two limitations that should be highlighted. First, the sample was of specific ages and from a single area of Saudi Arabia; therefore, the findings may not generalize to students of different ages and from different areas of the country. More research is needed that collects larger samples of different ages from different areas of the country. Second, the current study employed a quantitative research approach via an online questionnaire; therefore, the participants did not have the opportunity to express their in-depth feelings and attitudes toward their peers with disabilities.
Further research using qualitative research methods or a mixed-methods design would help to gather more comprehensive data about students without disabilities' attitudes toward disability. --- Conclusions The positive attitude of students without disabilities toward their peers with disabilities is one of the most significant factors in successfully supporting the implementation of inclusive physical education. In the current study, the Saudi students without disabilities expressed generally positive attitudes toward their peers with disabilities in physical education settings. These findings make us optimistic that the inclusion process in Saudi Arabia is moving in the right direction. In other words, the positive attitudes of students without disabilities toward the inclusion of students with disabilities provide a positive indicator of successful inclusion. However, these findings should be interpreted with caution because several other factors may also critically affect such attitudes (e.g., gender, age, type of school, school location, and having a friend with a disability). Nonetheless, the findings contribute to the literature by providing baseline data on students without disabilities' attitudes toward their peers with disabilities, which may also help educational policy-makers in Saudi Arabia and Arab countries more broadly to enact successful educational policies and legislation that keep pace with the rapid development of inclusion in physical education globally. --- Implications The findings of this study give rise to three main implications. First, the data show that the school is effective at increasing the likelihood that students with disabilities will be accepted by their peers without disabilities; therefore, it is crucial to expand the opportunities for such students to interact with their peers without disabilities.
Those with disabilities should also be portrayed as integral members of society who simply perform certain activities differently and should therefore be included as much as possible in curricular and extracurricular activities; this will promote better awareness and understanding of people with disabilities. Second, Saudi officials should consider expanding the inclusion of students with disabilities alongside their peers in public and private schools. All school leaders across the country are encouraged to promote ongoing inclusion-led school activities to provide all students with opportunities to meet and interact with their peers with disabilities, furthering the inclusion and acceptance of the latter in social activities. Third, conducting further studies that focus on different independent variables will help to better understand the relationships between individuals with and without disabilities across different age groups and promote positive attitudes toward those with disabilities from infancy through to old age. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. --- Data Availability Statement: The data that support the findings of this study are available from the author upon reasonable request. ---
The attitudes of students without disabilities toward their peers with disabilities are considered an important determinant of successful inclusion in physical education settings. Nonetheless, there is limited research on this topic in non-Western societies, especially in Arab contexts. Thus, to address this paucity in the literature, this study aimed to assess the general attitudes of Saudi students without disabilities toward their peers with disabilities and examine the associations between selected student-related variables (e.g., gender, age, type of school, school location, having a family member or a friend or a classmate with a disability, and having experience of playing with a person with a disability) and attitudes of students without disabilities. A total of 972 students aged 9-12 years old (M age = 10.6; SD = 1.1; girls = 49.7%) completed the Arabic version of the Scale of Attitudes toward Students with Disabilities in Physical Education-Primary Education (SASDPE-PE). Data analysis indicated that, in general, participants reported positive attitudes toward their peers with disabilities in physical education classes. Although boys were more likely to hold positive attitudes than girls, no significant difference existed between them. The results showed that 10-year-old participants reported significantly more positive attitudes than those in the other age groups. Participants attending public schools reported significantly more positive attitudes toward their peers with disabilities in physical education classes compared to those attending private schools. Having a friend with a disability was linked to students without disabilities having positive attitudes toward their peers with disabilities. In contrast, having a family member or a classmate with disabilities and having played with a person with a disability were not related to such positive attitudes. The current study's findings have significant implications for inclusive educational practices.
Introduction Social connectedness is beneficial for older adults' sense of well-being and promotes successful aging by providing opportunities for social engagement and social support in times of need. Frequent contact and high emotional closeness with network connections facilitate support exchanges and promote the relationships' continuity. Furthermore, older people who are satisfied with the support and resources available through their network report low levels of loneliness and high overall quality of life. When explaining variation in the function and quality of networks, many gerontologists focus on the geographic proximity between older adults and their network members, a factor that sets the opportunity structure for their social interactions and support exchanges. Existing research has focused on older adults' local embeddedness and the availability of local connections in their networks, especially among family members such as children. Besides being available for companionship, connections located nearby can also react more quickly to the rising needs of older adults, especially when it comes to favors and assistance requiring physical co-presence. Nevertheless, many older adults also maintain connections over longer distances. Such diversity is part of the trend of networked individualism, where people's relationships shift from local, densely connected communities toward multiple far-reaching, loosely knit, and personalized networks. Adults in more recent cohorts have fewer children than their older peers, and these adult children are also more likely to live farther away from their parents to pursue career opportunities and build families. Likewise, other extended family members can also remain part of older adults' core networks, even when they are not living nearby.
Non-kin connections could be more vulnerable to distance than family ties, as they rely more on continuous reciprocity and mutual satisfaction than on familial normative expectations and obligations. Nevertheless, some non-kin relationships are more resilient to longer ranges than others, especially when their sentimental value and proven reliability outweigh the reduced reciprocity over longer distances. Thanks to technological developments, older adults can better sustain many connections over longer distances and enjoy more interactions, emotional support, and shared moments with them, which promotes a sense of engagement and belonging. Still, it is not clear how older individuals typically mix family and non-kin connections located at various distances in their core networks, how these different geographic layouts are associated with individual characteristics, and, most importantly, whether and how diverse geographic layouts link to variation in the function and quality of older adults' core discussion networks. These questions have gained significance in light of the higher prevalence and acceptance of long-range connections, increased options to keep in touch, diverse cultural traditions across European regions, and varied individual characteristics. An up-to-date answer to these questions should contribute a unique perspective on which older adults are at risk of isolation and inform subsequent efforts to understand the implications of proximate vs. far-flung connectivity for well-being in old age. The present research develops a typology dedicated to the geographic layout of social networks among older adults in the European context, highlighting both family and non-kin connections beyond arm's reach. We further interpret the identified geographic structures in the context of older adults' sociodemographic characteristics.
In addition, to reveal whether some geographic layouts function differently from others, we examine how they are associated with one's network contact, closeness with network connections, and network satisfaction. We use data from the sixth wave of the Survey of Health, Ageing, and Retirement in Europe (SHARE), which provides detailed information on the core networks of the aging population. --- Background --- Network typologies Building network typologies is a well-established approach to summarizing complex social network contexts and experiences in the aging population. Instead of focusing on a single dimension of networks measured by a single variable, a typology approach considers multiple attributes of networks, such as one's network size, locality of connections, diversity of ties, frequency of contact with members, and one's participation in group activities. Existing typologies have consistently identified several types of networks, namely diverse, friend-focused, family-focused, and restricted networks. Older adults embedded in diverse networks with a mixture of family and friend connections are likely to enjoy higher levels of well-being and physical, mental, and cognitive health. In contrast, individuals in restricted networks tend to report the worst outcomes. Diverse and friend-focused networks appear to provide benefits because they incorporate members on a voluntary basis and provide opportunities for engagement in social activities and integration into the broader society. Diverse networks are also more likely to provide a broader range of support than homogeneous family networks can offer, covering diverse needs for instrumental, emotional, and advisory support. --- From network proximity to geographic layouts A more thorough consideration of geographic distance can further enrich these significant contributions of network typology research.
It has become more common for older adults to sustain family connections over longer distances, as a result of higher life-course residential mobility in both older and younger generations, family complexities, and preferences for intimacy at a distance. Although long-range connections with friends and non-kin can discontinue when reciprocity is limited, some special relationships persist over time regardless of the distance barrier. In the face of such complexities, "diverse" networks are likely to vary not only in relationship composition but also in proximity, as individuals adjust their social convoy to their changing needs and life circumstances. For example, individuals can opt to form or rekindle local non-kin connections in place of, or to supplement, non-proximate family connections for general support. Meanwhile, following socioemotional selectivity theory, older individuals could opt to focus on relationships that provide the most emotional satisfaction and support, particularly family members. This may still apply when the family connections are located at longer distances. Unfortunately, however, existing network typologies fall short of identifying the rising diversity of the geographic layouts of older adults' core networks. This is partly because they often opt to employ a simple aggregate measure of the overall nearness of one's connections, possibly as a trade-off to incorporate a broad array of network attributes. Examples of such measures include the older adult's distance to the nearest family member, such as a child, relative, or sibling; the number of connections or adult children residing in the vicinity; and the proportion of network members living nearby, such as in the same city, within an hour's drive, or within a 5 km radius. Other typology studies have excluded the proximity factor altogether.
Simply counting connections located nearby discards information about more distant connections, whereas the proportion in proximity is heavily influenced by one's partnership status, especially when network sizes are small. Further, the meaning of the proportion of connections in proximity is potentially misleading, especially when interpreted as a standalone measure without specifying network size. For example, someone living together with a partner as the only member in the core network would demonstrate 100% network proximity. Meanwhile, their counterpart with an additional child or friend located outside the 5 km radius would show 50% network proximity but have arguably richer social connectedness and access to support. Consequently, existing typologies have little to say about networks with increasingly diverse geographic layouts, which could nevertheless be active and supportive. The present study offers a more comprehensive description of geographic layouts by simultaneously considering a range of proximity options and the composition of network members. --- Factors shaping network types and geographic layouts Prior research has revealed that network type is a function of demographic traits and contextual variables. Individuals at older ages, with low education and income, who are not married, and who suffer physical limitations tend to occupy restricted or family-focused networks. These characteristics often also correlate with higher network proximity, considering that disadvantaged individuals in higher need of support and care often have limited residential mobility and prefer to live close to their families. In contrast, individuals with diverse network members may also sustain such configurations across diverse geographic layouts. Research has identified that older adults in diverse network types tend to be relatively young, educated and high-earning, engaged in community activities, and in good functional health.
These individuals often have more local non-kin connections from joint activities and may discuss important matters with them due to their domain knowledge and easy accessibility. On the other hand, individuals with the most resources often have had the highest lifetime residential mobility. They are also the most likely to have the resources and willingness to sustain their connections with family and friends beyond proximity or to revive them when needed. In summary, our study will incorporate predictors from earlier research to describe how our geographically focused typology converges with or departs from conventional typologies. In the spirit of exploratory analysis, we incorporate several factors not yet widely present in network typology research that home in on the geographic dimension of networks. First, urbanization has raised concerns about weaker social bonds with community and family members, especially in areas clustered with low-income older adults and among individuals in residential instability. Nevertheless, urban areas also offer higher accessibility to social institutions and more opportunities for informal socializing. In contrast, people in rural areas are more likely to be embedded in local family-focused networks. Meanwhile, younger adults often move away from rural areas after graduating from universities and thereby become long-range connections in their parents' core networks. Second, although communication technologies have alleviated geographic constraints on one's access to support from core confidants, older adults are less likely than the young to be proficient or active users. Limited digital skills can make it harder to sustain core connections over longer distances, though many older adults actively develop these skills when such a need arises. Third, older adults often direct their attention and support to their adult children when becoming grandparents.
They often strengthen their family connections, especially those in proximity, which may take time and energy away from non-kin connections. Fourth, activity participation contributes to social engagement and developing friendships, especially in one's locality. The distribution of different geographic layouts is also likely to vary across European countries and regions. Older adults in the more familistic Southern and Eastern European countries tend to have higher reliance on and expectations of family members for support. We expect they are most likely to have networks primarily consisting of family members living nearby. Meanwhile, Northern and Western European countries have higher levels of individualism and more generous welfare support. Older adults in these countries tend to have higher engagement with friends in their core networks and are more likely to live far from their children. More robust welfare support tends to promote independent living and alleviate reliance on proximate family support. In this way, we would expect older adults in Northern and Western Europe to have more non-relatives in their core networks and display more dispersed geographic layouts. --- Geographic layouts and network function and qualities A clear picture of network geographic layouts can further help to reveal how such diverse structural settings affect the core networks' function and quality, namely frequency of contact, emotional closeness, and people's overall satisfaction with their network. As a primary indicator of tie strength, frequency of contact is considered a conduit for multiple forms of support. Frequent interactions allow people to communicate their needs and facilitate the exchange of resources. Higher contact with network members also contributes to higher network stability. At the same time, contact frequency is an aspect of networks that may be especially sensitive to distance.
Although technological developments have offered convenient and low-cost options to maintain contact with ties over substantial distances and enable companionship, emotional support, and access to information, people still appear to talk most often to those in their local community. Phone conversations, for instance, are concentrated at close range and are relatively rare beyond close driving distance. Even between parents and their children, a distance-increasing move leads to less conversation, while moving closer by has the opposite effect. Therefore, it seems plausible that geographically proximate networks, on average, enable higher contact volume. Emotional closeness to network members and overall satisfaction with one's network may be similarly, or perhaps more, responsive to distance. However, it remains unclear to what extent core networks stretching across various distances differ in these dimensions of network quality. Though technology likely offers some help in sustaining emotional closeness and network satisfaction, virtual interactions appear less effective than in-person connection for providing on-the-spot support, especially minor assistance in daily life. Further, the lack of cues in online communication can also limit exchanges of complex ideas and deep feelings. These drawbacks may have important implications for the quality of distant network layouts. In sum, as long as virtual and face-to-face interactions remain distinct, we expect some of the more geographically expanded layouts to be associated with lower overall contact frequency and emotional closeness with network members, as well as lower overall network satisfaction. --- Summary of present study To summarize, the goal of the present research is twofold. First, we aim to develop a typology of older Europeans' core social networks focusing on their geographic layouts, examining their prevalence across European regions and their correlation with individual characteristics.
Second, we consider how geographic layouts, as structural features of personal networks, are associated with the function and quality of older adults' networks, including contact and emotional closeness with one's connections and overall satisfaction with the network. --- Methods --- Data and sample The current study uses the sixth wave of SHARE, collected in 2015 and covering 18 countries: Austria, Germany, Sweden, Spain, Italy, France, Denmark, Greece, Switzerland, Belgium, Israel, the Czech Republic, Poland, Luxembourg, Portugal, Slovenia, Estonia, and Croatia. The data were collected through computer-assisted personal interviews in the respondent's home in the local language. SHARE used a country-specific probability sampling approach to maximize population coverage and offered calibrated weights to compensate for potential selection effects from non-response and panel attrition. The sixth wave of SHARE contained a social network module featuring detailed information on one's ego-centric network, collected through a name generator approach, which serves as the foundation of the present analysis. We focus on community-dwelling respondents aged 50 and above living in Europe who could answer the questionnaire themselves. To be included in the sample, respondents had to have participated in the Social Network Module and identified at least one network connection. We exclude respondents with missing data through listwise deletion, which is common in research using SHARE. The final analytical sample consists of 35,003 older adults. We also verified our analysis using multiple imputation with chained equations; results were robust to either missing-data strategy. --- Indicators for geographic layouts in core discussion networks We use the name generator provided in the social network module to construct respondents' core networks and identify their layouts.
The generator asked: "Most people discuss with others the good or bad things that happen to them, problems they are having, or important concerns they may have. Looking back over the last 12 months, who are the people with whom you most often discussed important things?" After collecting the names, the name generator further identified one's relationship with and distance from each network member. We regroup the relationships into relatives and non-relatives. Also, to avoid restrictively small categories, we recode respondents' distance from each network member from eight into four groups: in the same household, within 5 km, between 5 and 25 km, and more than 25 km. We choose these cut points because a 5 km radius best represents a local community that allows frequent unplanned face-to-face interaction, whereas 25 km around one's residence represents an area covered by public transit or casual visiting with some planning. From this information, we construct seven dichotomous variables as indicators for LCA. Each indicator shows whether one has a family or a non-kin connection at the following distances: in the household, within 5 km, between 5 and 25 km, and more than 25 km away. Since non-kin connections are rarely in the same household, we reclassify the most proximal category of non-kin ties as within 5 km. In this way, we have a total of 7 indicators (1 family-in-household indicator + 2 relationship types × 3 distance bands). We use these indicators in the LCA to identify the typical geographic layouts in older Europeans' core discussion networks.
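As a sketch of how such indicators might be derived, the construction described above can be illustrated in a few lines of pandas. The long-format table and its column names below are hypothetical, not the actual SHARE variable names:

```python
import pandas as pd

# Hypothetical long-format data: one row per named network member.
# Column names are illustrative, not the actual SHARE variable names.
ties = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 3],
    "is_kin":        [True, False, True, True, False],
    "distance":      ["household", "lt5km", "5to25km", "gt25km", "gt25km"],
})

# Non-kin household ties are reclassified as within 5 km, as described above.
ties.loc[(~ties["is_kin"]) & (ties["distance"] == "household"), "distance"] = "lt5km"

def has_tie(df, kin, dist):
    """True for each respondent who named at least one tie of this kind."""
    mask = (df["is_kin"] == kin) & (df["distance"] == dist)
    return df[mask].groupby("respondent_id").size().gt(0)

indicators = pd.DataFrame(index=sorted(ties["respondent_id"].unique()))
indicators["fam_household"] = has_tie(ties, True, "household")
for dist in ["lt5km", "5to25km", "gt25km"]:
    indicators[f"fam_{dist}"] = has_tie(ties, True, dist)
    indicators[f"nonkin_{dist}"] = has_tie(ties, False, dist)
indicators = indicators.fillna(False).astype(int)
# 7 columns: 1 family-in-household + 2 relationship types x 3 distance bands
```

The resulting 0/1 matrix, one row per respondent and one column per indicator, is the kind of input a latent class model would consume.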
--- Factors associated with network layouts We have also considered a range of individual characteristics that are associated with one's network layout, including older adults' education, household financial standing, self-reported computer skills, network size, participation in organized social activities, age, gender, marital status, childlessness, self-rated health, mobility limitations, grandparent status, and urbanization in the area of residence. Detailed information on the operationalization and measurement of these variables is available in the appendix. --- Network function and quality We consider three dimensions of the function and quality of older adults' core discussion networks: contact with network members, average emotional closeness to network ties, and overall network satisfaction. Respondents reported the contact frequency with each network member, ranging from 1 = monthly or less to 6 = daily contact. We sum these responses to create a network contact scale. Respondents also reported their perceived closeness with each network member, ranging from 0 = not very close to 4 = extremely close. We take the average to summarize one's emotional closeness with network members. Respondents' overall network satisfaction was measured by the question "How satisfied are you with the relationships you have with all the people we have just talked about?", which takes an integer value ranging from 0 = completely dissatisfied to 10 = completely satisfied. --- Analytic strategy Our first step is to identify and interpret the geographic layouts of older adults' social networks. We conduct latent class analyses (LCA) using Mplus 8.6, with the presence of family and non-kin connections at various distances in the network as indicators. As a data reduction technique, LCA can identify unobserved groups in the sample based on a set of categorical indicators; the method also estimates each participant's group membership based on the highest estimated probability among the groups.
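The network contact and emotional closeness measures described above can be sketched as simple per-respondent aggregations (overall satisfaction is a single respondent-level item and needs no aggregation). The per-tie table and its column names are hypothetical, not SHARE's:

```python
import pandas as pd

# Hypothetical per-tie ratings; column names are illustrative, not SHARE's.
ties = pd.DataFrame({
    "respondent_id": [1, 1, 1, 2, 2],
    "contact":       [6, 4, 2, 5, 5],  # 1 = monthly or less ... 6 = daily
    "closeness":     [4, 3, 2, 4, 4],  # 0 = not very close ... 4 = extremely close
})

# Network contact scale: sum of contact frequencies over all named members.
network_contact = ties.groupby("respondent_id")["contact"].sum()

# Emotional closeness: mean closeness rating across named members.
avg_closeness = ties.groupby("respondent_id")["closeness"].mean()
```

Summing contact (rather than averaging it) means the scale reflects both how often and how many members a respondent talks to, which matches its use as an overall contact-volume measure.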
We identify the optimal number of classes based on model fit indicators such as Akaike's information criterion (AIC), the Bayesian information criterion (BIC), entropy, and the results of model improvement tests. We then interpret each identified socio-geographic layout based on the conditional probabilities of the indicators. Second, we use multinomial logistic regression to reveal how individual characteristics are associated with the geographic layouts and examine their prevalence across European regions. In the third and final step, we use ordinary least squares (OLS) regression to examine how geographic layouts, as structural factors, correlate with network function and quality, namely older adults' contact and emotional closeness with core network members as well as their overall satisfaction with the network. --- Results --- Geographic layouts We fit models for 1-8 latent classes and confirm that the seven-class solution is optimal. It has the lowest BIC and the highest entropy among the solutions while offering reasonable group sizes. Although the eight-class solution's AIC is slightly lower, its entropy is lower than that of the seven-class solution, with a higher SSC-BIC. The fit statistics for the models with 1-8 latent classes are available in Appendix Table 1. We assign each respondent to the class whose probability is highest according to the LCA. The classification probabilities for the most likely latent class membership range from 0.71 to 1, showing a reasonable level of certainty in assigning individuals to a specific latent class. We present the seven latent classes in Table 1, followed by the multinomial logistic regression of geographic layouts on individual characteristics in Table 2. We present the results as average adjusted predictions (AAPs) and average marginal effects (AMEs) to show who is in which geographic layout.
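Mplus reports these fit statistics directly; purely as an illustration of what is being compared, the standard formulas behind AIC, BIC, and relative entropy can be written out as below. The posterior-probability matrices are hypothetical examples, not estimates from the study:

```python
import numpy as np

def aic(loglik, n_params):
    # AIC = -2 ln L + 2k; lower is better.
    return -2 * loglik + 2 * n_params

def bic(loglik, n_params, n_obs):
    # BIC = -2 ln L + k ln n; penalizes parameters more heavily than AIC.
    return -2 * loglik + n_params * np.log(n_obs)

def relative_entropy(posteriors):
    """Relative entropy in [0, 1]; values near 1 mean clear class separation.
    `posteriors` is an (n_obs, n_classes) matrix of membership probabilities."""
    n, k = posteriors.shape
    p = np.clip(posteriors, 1e-12, 1.0)
    uncertainty = -np.sum(p * np.log(p))  # total classification uncertainty
    return 1 - uncertainty / (n * np.log(k))

# Crisp posteriors give entropy near 1; uniform posteriors give 0.
crisp = np.array([[0.99, 0.01], [0.02, 0.98], [0.97, 0.03]])
fuzzy = np.full((3, 2), 0.5)
```

Because the BIC penalty grows with sample size while the AIC penalty does not, an extra class can lower the AIC yet raise the BIC, which is the pattern the text describes for the eight-class solution.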
The AAPs show the predicted probability of the dependent variable for individuals in a particular group, setting everyone to a specific category of the independent variable while keeping the other variables at their actual values for each respondent. The AMEs then show the differences in the AAPs between groups, or the effect of a unit change in each independent variable. The first four network layouts are predominantly family-oriented. The first layout we identify is household-based networks, in which older adults have at least one family discussant in the same household. In contrast, external connections with family and non-kin are both limited. As shown by the AMEs in Table 2, older adults who are younger, male, married, in good health, and without young grandchildren are especially likely to have a household-based network, as are those with limited education who do not participate in organized activities. We name the second group proximate-family networks, as it comprises people who mentioned proximate family connections. The third layout is mid-range family networks, in which one or more family connections live at mid-range. AME estimates suggest that highly educated individuals and those without children are relatively unlikely to fall into this group. Meanwhile, people currently working and having young grandchildren are somewhat likely to have such networks. We name the fourth layout distant-family networks, which features family connections at longer ranges. People in this group tend to have mid or high education, dwell in rural areas, and not be widowed. A similarity between the third and fourth classes is limited external connections within closer proximity, whether family or non-kin. The remaining three geographic layouts are diverse ones featuring non-kin connections located at various distances. We name the fifth group proximate-diverse networks, highlighting the presence of at least one non-kin confidant within the 5 km radius.
Only 32.3% of individuals in such a network mentioned having a discussant in the same household, the lowest among the identified layouts. Individuals also tend to name other family discussants at varying distances. Table 2 shows that women and those who are unmarried, childless, or active participants in organized activities are especially likely to possess proximate-diverse networks. Meanwhile, rural residents and those with good computer skills, no financial difficulties, or young grandchildren are relatively unlikely to have such networks. We call the sixth group expanded-diverse networks, in which older adults mentioned one or more non-kin connections between 5 and 25 km away. Older adults in expanded-diverse networks are also likely to list family discussants at varying distances, as well as a non-kin connection in proximity. Those occupying this group are disproportionately highly educated, active in organized activities, unmarried, and childless. Meanwhile, individuals living in a rural area or having a young grandchild are less likely to be in this group. Finally, we identify the seventh group as far-reaching-diverse networks. Compared to groups 5 and 6, respondents in such networks have non-kin connections living at longer distances with relatively limited family connectivity nearby. Still, they often have a family network member in their household and non-kin connections in proximity. It is the smallest group of all geographic layouts, and people in this group tend to be highly educated but disproportionately childless or divorced. We present the composition of each geographic layout in Fig. 1, which shows the average number of family and non-kin connections at each distance in each latent class. --- Distribution across European countries We compare the typical network compositions across European countries, sorting them in Fig. 2 from the highest to the lowest total proportion of family-oriented network layouts.
Overall, family-oriented network layouts are more prevalent in Southern and Eastern European countries, consistent with the north-south division identified in the existing literature. More specifically, older adults are especially likely to have household-based networks or proximate-family networks in Eastern and Southern European countries, the only exception being Estonia. Regional differences in the more proximate family network layouts are more pronounced than in the mid-range and distant-family networks. When it comes to diverse network layouts featuring non-kin members, findings reveal that expanded- and far-reaching-diverse networks drive regional differences more so than proximate-diverse networks. For instance, proximate-diverse networks are relatively prevalent in several Southern and Eastern countries, exceeding Denmark and Sweden in the north and Central countries such as Germany and Switzerland. In contrast, the division between Northern/Central and Eastern/Southern countries is evident in expanded- and far-reaching-diverse networks, both of which are more prevalent in the former set of regions than in the latter. (Fig. 2: core network geographic layouts in each country; countries sorted by the total proportion of family-oriented network layouts.) --- Network layouts, function, and quality Figure 3a-d shows the results from the OLS regressions of network function and quality on socio-geographic layouts, adjusting for all the covariates used in the present research. To avoid arbitrarily selecting one network layout as the reference group and the basis of comparison, we compare the predicted value of each group with the dependent variable's grand mean to depict how variation in network function/quality is a function of geographic layout. Not surprisingly, as shown in Fig. 3a, household-based networks and proximate-family networks offer the highest level of network contact, followed by proximate-diverse networks. While individuals' contact in mid-range and distant-family networks is close to the overall average, their counterparts in expanded- and far-reaching-diverse networks have the lowest estimated amount of network contact. Frequent contact with network connections, however, does not necessarily translate into high levels of emotional closeness. As Fig. 3b shows, respondents express higher emotional closeness with family-oriented networks than with diverse networks featuring non-kin discussants, regardless of proximity. In addition, emotional closeness with network members is not always lower in more geographically expanded network layouts. Indeed, older adults' emotional closeness with members of distant-family networks is comparable to that of their counterparts with proximate-family networks. A similar pattern applies to diverse networks, where far-reaching-diverse networks feature higher emotional closeness than the proximate- and expanded-diverse networks. Additional analysis further confirms that these patterns hold even after controlling for different levels of contact with network members. Lastly, we turn to an overall evaluation of network satisfaction. Figure 3c shows that the family-centered layouts are not statistically distinguishable from one another on this variable, as evidenced by the overlapping confidence intervals. All four family-oriented network layouts feature network satisfaction above the overall average. In contrast, satisfaction with diverse networks is significantly lower than the average in all three geographic layouts. Additional analysis shows that the pattern persists even when confining the sample to Northern/Central European countries or their Southern/Eastern counterparts. One concern with the network satisfaction estimates is that differences in contact frequency depending on geographic layout may suppress the patterns depicted in Fig. 3c.
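The grand-mean comparison strategy used for Fig. 3 can be illustrated in a stripped-down form (hypothetical data, no covariate adjustment): with one dummy per layout and no intercept, the OLS coefficients are the group-specific predicted values, which are then contrasted with the dependent variable's grand mean.

```python
import numpy as np

# Hypothetical contact scores for respondents in three network layouts.
layout = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
contact = np.array([5.0, 6.0, 7.0, 3.0, 4.0, 2.0, 2.0, 3.0, 1.0])

# Dummy-code the layouts with no intercept, so each OLS coefficient
# is that group's predicted (mean) value of the outcome.
D = np.eye(3)[layout]
coef, *_ = np.linalg.lstsq(D, contact, rcond=None)

grand_mean = contact.mean()
deviation = coef - grand_mean   # how each layout sits relative to the average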
That is, frequent contact with network members boosts network satisfaction, so the convenience of seeing nearby network members could mask some meaningful variation in satisfaction across geographic layouts. A suppression pattern does not seem to apply to family-oriented networks; differences across geographic distance remain virtually unchanged in the model that adjusts for contact frequency (Fig. 3d) relative to the model that does not. (Fig. 3: results from the regressions of contact with network members, average emotional closeness with network members, and overall network satisfaction. Results for each network layout are presented as average adjusted predictions; the red lines represent the overall average of the corresponding dependent variable. Control variables include age, gender, education, employment, financial status, marital status, functional limitations, self-rated health, rural vs. urban context, network size, computer skills, having no children, organized activity participation, and grandparent status. The model presented in Fig. 3d also controlled for average contact.) Results suggest that people tend to be relatively satisfied with networks composed mainly of family members, regardless of whether those members are immediately accessible. For the compositionally diverse networks, however, there does seem to be a suppression pattern. Once network contact is adjusted for, the gaps between the proximate-diverse and expanded/far-reaching-diverse groups widen, with the former becoming significantly different from the latter two. Far-reaching-diverse networks then align with the overall average of network satisfaction, becoming statistically indistinguishable from the family networks. --- Discussion Intimate and supportive network connections are critical contributors to the well-being of older people.
As older adults' networks have become increasingly diverse, there are rising concerns about the absence of proximal connections and the consequences for older adults' social life and well-being. The present research addresses these concerns by establishing a new geographic-focused typology of older adults' core discussion networks. The new approach identifies typical geographic layouts of older Europeans' core discussion networks, revealing forms of diversity unacknowledged in existing network typologies. It also provides a new perspective on how different network structures link to function and quality. The present geographic-focused typology reveals spatial diversity in family-focused networks, one predominant network form identified in previous research. Our typology distinguishes household-based, proximate-, mid-range, and distant-family networks. Existing typologies characterize those in restricted and family-centered networks as being at advanced ages, unmarried, and having lower incomes and more physical limitations. However, to our surprise, discussion networks limited to the household are not necessarily a sign of frailty or disadvantage, as adults occupying them tend to be younger, married, in good health, and to have one or more children. In our typology, disadvantage and marginalization are more apparent among older adults in what we term proximate-family networks. Adults with these network forms tend to be at advanced ages, have lower education, and be widowed. These observations correspond to the depiction of older adults in high need of ready support, many of whom have relocated for reasons of family accessibility. Moreover, individuals in proximate-family networks also often mention family discussants at intermediate or even longer distances. Future research needs to pay more attention to the roles of these non-proximate family discussants, especially how they coordinate with the more proximate ones.
Further attesting to the diversity in family-focused networks, our geographic-focused typology also identifies mid-range and distant-family networks. Analogous to the empty-nest metaphor, these layouts present an "empty neighborhood" scenario in which people have few family or non-kin discussants readily accessible yet sustain family discussants at a distance. Results show that a sizable group of older adults hold on to a spatially scattered assemblage of family discussants rather than enlisting non-kin discussants as alternatives. Such distant-family networks are most prevalent among older adults with high educational backgrounds and those living in rural areas. Individuals with higher education may have been more mobile over their life courses, and the younger generations may live away from their rural hometowns. This observation adds nuance to previous findings that more educated individuals possess more diverse networks. It turns out that many still prefer to keep their discussant network within the family and are instead accommodating of an expanded geographic layout. Emphasizing the proximity perspective in family-oriented network types also reveals nuance in regional patterns. As anticipated, the four family network types combined are more prevalent in the Southern and Eastern European countries, likely attributable to their familistic culture and limited welfare provision. The present research also shows that older adults who occupy family-oriented networks in the more individualistic northern and western countries disproportionately embrace the "empty neighborhood" scenario and maintain family discussants at longer distances. This finding suggests that the "intimacy at a distance" between family members can stretch over relatively long distances, especially in the more individualistic countries.
Our geographic-focused typology also expands the notion of "diverse networks", which in existing research emphasizes compositional, but not spatial, forms of diversity. We identify proximate-, expanded-, and far-reaching-diverse networks. Our findings depart from studies that bind diverse networks to advantaged statuses, such as younger age, higher educational background, higher income, engagement in community activities, and good functional capabilities. Our findings suggest that compositionally diverse networks are more likely a sign of advantage only when complemented by flexible spatial arrangements. Indeed, high levels of education are associated with occupying expanded- and far-reaching-diverse networks, not proximate-diverse networks. Likewise, being free of financial hardship predicts a lower likelihood of being found in a proximate-diverse network form. This observation echoes recent findings that local non-kin ties could be more prevalent in high-poverty neighborhoods. Listing only non-kin discussants in proximity could be attributed to their mere accessibility for general support rather than their knowledge, skills, trustworthiness, or intimacy. Examining compositionally diverse networks through the lens of geographic layout also extends our understanding of their regional patterns. Our findings indicate that proximate-diverse networks vary little across Europe and are prevalent in all regions. Individuals may list accessible non-kin discussants to fulfill imminent needs for practical support or to compensate for insufficient family support. Indeed, it seems that the overrepresentation of diverse networks in Northern and Central European countries owes primarily to the higher prevalence of expanded- and far-reaching-diverse networks.
It is possible that in the more individualistic and high-trust northern and central European countries, individuals are particularly open to non-intimate yet knowledgeable discussants who fit their needs, sustaining many of these ties beyond proximity. The present research also evaluates the implications of the varying geographic layouts of older adults' core discussion networks. One of the key takeaways is that the structural feature of distance has little association with several important aspects of overall network function and quality, particularly emotional closeness and satisfaction. Unsurprisingly, people talk more to their network when it is nearby. Family-centered networks have the most contact, yet a diverse network close by features more frequent contact than one filled with kin members far away. There is also an expected gap between kin-based and diverse networks regarding emotional closeness. Still, there is no evidence that emotional closeness trails off when network members are farther away within each of these relational categories. As for the nonlinear association of distance and emotional closeness suggested within the family and diverse network groupings, the pattern may reflect selectivity. That is, the reason farther-off network members are in the network in the first place, despite fewer chances for interaction, is that these people are emotionally significant. Further, once contact frequency factors into the analysis, we cannot statistically distinguish the network satisfaction of far-reaching-diverse networks from that of the family-centered forms. For network satisfaction, the arrangement that stands most clearly apart from all others is the proximate-diverse network, which includes close-by non-kin members, frequently alongside family members at varying distances. The high average levels of interaction shared with such networks may obscure that the encounters are often more a function of convenience than of choice.
Network satisfaction skews toward high contentment, so this should not be interpreted to mean that people are dissatisfied with networks that cluster nearby. Nevertheless, the results suggest that if it were not for the high accessibility of proximate network members, there would be a relatively wide gap in network satisfaction between the 20% of older adults in proximate-diverse networks and those with other networks. Close-by non-kin can meet many social support and companionship needs, but proximity does not by itself confer higher satisfaction. The present study has some limitations. Because we used latent class analysis to identify typical network compositions among older Europeans at one point in time, the present study is cross-sectional and descriptive. It is a starting point for tracing how network members' proximity may change along the aging process and how it might matter for outcomes such as social support provision and well-being. Consistent with the cross-sectional design, we are not making causal arguments about how individual characteristics produce a given network type. Instead, we see our efforts as describing which older adults fall into which type of network and providing a preliminary answer to how each network type links to a set of fundamental aspects of network function and quality. We also acknowledge that we cannot specify the type of support one receives from each connection, especially when connections are located at varying distances. Proximate connections are often better positioned to provide support that requires physical co-presence, such as personal care, transportation, household help, assistance with paperwork, and other instrumental help. Meanwhile, connections at longer distances have become more capable of providing companionship, emotional support, advice, and service arrangements, thanks to new communication technologies.
Future research with more comprehensive network data should further identify the patterns in who provides what support at varying distances. Finally, our choice of thresholds for grouping a connection's proximity is somewhat arbitrary. Part of this results from the design of the SHARE data, which measures confidants' proximity in categories without fine-grained cut points between 25 and 100 km. In addition, the perception of proximity could vary across individuals, urban/rural regions, and countries across Europe. That said, 25 km often represents the distance traversable within an hour's drive, a metropolitan area covered by public transit, or a visit that requires little planning. In addition, parents who do not have a child within 25 km are more likely to incorporate non-family support. Future research on network typologies would benefit from more detailed proximity measures. --- Conclusion In sum, the present research addressed a gap in the literature by revealing how older adults combine family members and non-relatives at different proximities in their networks, a phenomenon not yet fully recognized by existing typologies. It highlighted the importance of looking beyond the mere presence of proximate connections in older adults' close social networks. In the emerging context of networked individualism, where there are more options than ever for keeping in touch, physical proximity still matters in core discussion networks. Though proximity fosters contact, more scattered network layouts and longer distances are not insurmountable obstacles to network satisfaction, especially with sufficient contact among members, a condition becoming ever more realizable through advances in communication technology. We also found that proximate-diverse networks are common among older Europeans, even in countries considered to be the most family-oriented.
Such a layout provides interactions comparable in frequency to networks primarily consisting of nearby family members, even if somewhat lower in overall network satisfaction. Especially in light of the many disruptions caused by the COVID-19 pandemic and the possibility of future stay-at-home orders, researchers must keep an eye on the changing spatial configurations of older adults' core networks and the implications of these developments for their social connectedness and well-being. --- Appendix 1 This section describes the operationalization and measurement of covariates included in the multinomial regression predicting geographic layout classes and in the regression analyses of network contact, emotional closeness, and satisfaction. Starting with SES-related factors, we regroup older adults' educational background from the 7-level ISCED-97 criteria into three categories: 1 = low education; 2 = medium education; 3 = high education. We measure employment status as a dichotomous variable, coding currently employed or self-employed as 1 and 0 otherwise. A self-evaluation measured an individual's household financial standing: "thinking of your household's total monthly income, would you say that your household is able to make ends meet?" The answer ranges from 1 to 4, where 1 = with great difficulty; 2 = with some difficulty; 3 = fairly easily; 4 = easily. This financial standing measure captures perceived financial strain regardless of diverse income sources and welfare availability. We also consider other individual resources that facilitate interaction and general aspects of social connectedness. We code respondents' self-reported computer skills as 1 for an excellent, very good, or good rating, and as 0 for fair or poor ratings or never having used a computer. One's network size is a count measure of up to seven network members.
We code any participation in organized social activities as 1 and 0 otherwise; these activities include "doing voluntary or charity work," "attending an educational or training course," "going to a sport, social or other kinds of clubs," and "taking part in a political or community-related organization." We also consider several sociodemographic and health characteristics that may affect one's network layout. Respondents' age in 2015 is a continuous variable. We code gender as 0 = male and 1 = female. Marital statuses include married, never married, divorced, and widowed. We code childless individuals as 1 and 0 otherwise. Regarding self-rated health, we group excellent, very good, and good health as 0, which serves as the reference group, and fair or poor health as 1. We code mobility limitation as 1 for older adults reporting three or more instances of mobility, arm function, and fine motor limitations, and as 0 otherwise. We code grandparents who have a young grandchild under five years old as 1 and 0 otherwise. Regarding the level of urbanization of a respondent's area of residence, we code a big city, the suburbs of a big city, or a large town as more urbanized (0), and a small town, a rural area, or a village as less urbanized (1). See Tables 3, 4, 5, and 6. --- Funding No funding was received to assist with the preparation of this manuscript. The authors have no relevant financial or non-financial interests to disclose. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
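The recoding scheme described in this appendix might be sketched as follows. The column names, value labels, and ISCED cut points below are illustrative assumptions, not the SHARE variable names:

```python
import pandas as pd

# Hypothetical raw responses (names and values are illustrative).
df = pd.DataFrame({
    "isced": [1, 3, 5, 6, 2],   # ISCED-97 level
    "job_status": ["employed", "retired", "self-employed",
                   "homemaker", "retired"],
    "computer": ["good", "poor", "excellent", "never used", "fair"],
})

# Education: collapse the 7-level ISCED-97 scale into low / medium / high
# (these particular cut points are an assumption for illustration).
df["education"] = pd.cut(df["isced"], bins=[-1, 2, 4, 6],
                         labels=["low", "medium", "high"])

# Employment: 1 if currently employed or self-employed, 0 otherwise.
df["employed"] = df["job_status"].isin(["employed", "self-employed"]).astype(int)

# Computer skills: 1 for good-or-better self-ratings, 0 otherwise
# (fair, poor, or never having used a computer).
df["computer_skills"] = df["computer"].isin(
    ["excellent", "very good", "good"]).astype(int)
```

The remaining dichotomous covariates (childlessness, young grandchild, urbanization, and so on) follow the same pattern of mapping raw categories onto 0/1 indicators.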
A growing number of older adults maintain connections in their core discussion networks at varying distances, raising concerns about the lack of discussants in proximity and the consequences for their social life. This study examines the typical geographic layouts of aging Europeans' core discussion networks and their implications for network function and quality. Using a sample of community-dwelling respondents aged 50 and above from the Survey of Health, Aging, and Retirement in Europe (SHARE), the present research identifies seven geographic layouts that extend previously identified family and diverse network types by adding spatial nuance. Individuals in mid-range and distant-family networks typically lack a discussant nearby but sustain high emotional closeness with family discussants at a distance and express high overall satisfaction with their network. Proximate-diverse networks, with a strong representation of non-kin members nearby, turn out to be less advantageous than prior research might suggest, providing relatively frequent contact but the lowest level of network satisfaction. Results also identify how individual characteristics link to the geographic layouts and describe their prevalence across European regions. Overall, relatively dispersed layouts are common among older adults across multiple countries and do not necessarily indicate lower emotional closeness or network satisfaction. The present study highlights the importance of looking beyond the mere presence of proximate connections in older adults' core networks.
Introduction Despite the hopes expressed at the 2003 and 2005 World Summits on the Information Society that digital inclusion policies would broaden information access in remote regions of the globe, help strengthen democracy, and foster human development among the poor, a digital divide persists amid the rapid diffusion of information and communication technologies (ICTs) in the developing world. Starting in the mid-1990s, community telecenters and public access points proliferated throughout Latin America to provide Internet connectivity for marginalized populations. Over the past decade, many of these initiatives have collapsed under the burden of structural, political, economic, social, or other obstacles. Furthermore, and as predicted by Norris, digital inclusion policies at times aggravated existing patterns of social stratification and failed to generate equal opportunities for development. As more and more public access initiatives are shuttered or abandoned, and growing numbers of users connect to the Internet from home or via mobile technologies, there is growing evidence that the model of public access telecenters as nodes for digital inclusion may be in crisis. In Brazil, as elsewhere in Latin America, ICT appropriation and Internet access have risen apace since the adoption of e-government initiatives and universal access policies in the mid-to-late 1990s, yet substantial challenges to digital inclusion persist due to overarching patterns of social, economic, and structural inequality. Political interests, societal structures, and corporate profiteering have conspired to delay the rollout of universal access initiatives and exacerbate the digital divide.
Still, there is evidence that digital inclusion initiatives can impact remote and isolated communities by creating opportunities for entertainment, civic participation, and professional capacity-building that may foster human development, build social capital, and connect communities to global society. In this context, and informed by Wilson's strategic re-structuring model, the present study seeks to further our understanding of how individual perceptions impact ICT adoption in remote rural mountain regions, where digital exclusion remains most pronounced. The issue is significant because as researchers examine how marginalized populations perceive the practice of digital literacy, they can better understand the factors that impact the sustainability of initiatives promoting universal access and digital inclusion. --- Literature review Scholars in the fields of community informatics and ICT for development who seek to theorize ICT adoption increasingly examine not only the attitudes and behaviors of ICT users, but also those of the communities and networks in which they reside. There is growing consensus that studies that expand on diffusion of innovations and development communication models with theories of social capital provide a more comprehensive framework for understanding community informatics, as proposed by Simpson. In a similar vein, Gurstein called for new theoretical and conceptual models in community informatics that incorporate the community-level processes familiar to development communication frameworks in order to better examine all the factors that lead to successful ICT adoption. In addition, his call for a Freirean approach to community informatics invites us to consider, in this study, how recent rapid ICT adoption coexists with longstanding patterns of inequality. --- Internet Access in Brazil Almost one-half of the 277 million Internet users in South America are in Brazil, where in mid-2016 67.5% of the population was online.
The nation's adoption of universal access provisions into law in 2011, which codified information access as a civil right, encouraged investment in the sector. As telecommunications companies sought to expand the consumer market for home computing and mobile devices among the working poor and lower-income populations, Internet connectivity rolled out to remote and marginalized communities in response to pent-up demand in settings where fixed telephone lines were few, unaffordable, or unavailable. Yet, while metrics for time spent online, social media subscriptions, and the use of communication and search functions confirm the rapid rise in Internet adoption in Brazil, Internet access remains strongly correlated with educational attainment and income. This trend gives cause for concern, all the more so in light of research showing that a growing emphasis worldwide on ICT functionality and services that privilege consumerism further accentuates digital exclusion among low-income populations. --- Perceptions on the use of the Internet and Data Literacy Scholars have identified that persistent challenges in the quest for sustainability of community informatics initiatives in the developing world can be borne of structural limitations, financial encumbrances, socio-cultural constraints, and more. Whereas structural and economic concerns pose steep barriers to ICT adoption in many rural communities, socio-cultural dynamics have also been shown to hinder the sustainable diffusion of ICTs. As early as 1999, Van Dijk categorized four major challenges to ICT use, and included among them individual criteria such as unfamiliarity with the logic of ICT interfaces, the lack of relevant and engaging content that discourages new learners, and the lack of opportunity to engage with ICTs.
Ample evidence has since surfaced to indicate that low levels of standard literacy and educational attainment, language barriers, and expectations of conformity regarding traditional age and gender roles can deter individuals from the practice of digital literacy. It has been established that this skills gap is particularly wide among populations hindered by structural inequalities, and that it intensifies the persistence of a digital divide among the poor. Furthermore, as Tygel and Kirsch remind us, data are the product of a social construction that requires the audience to be able to read critically and contextually. In settings where social position predetermines one's participation in economic networks, access to knowledge is inevitably stratified. Given limited access to ICTs, and absent the social and cultural capital that support ICT use, marginalized populations may lack the skills or motivation to engage in data literacy. Equally relevant to the topic is the work of Bhargava et al., who highlight the importance of data literacy as a tool to promote empowerment and government accountability, as well as a way to fight social exclusion, among other ends. Among the challenges of promoting data literacy via public policy, the authors noted that the concept can be misconstrued or difficult to properly assess. Their definition of data literacy as a willingness and ability to use data for civic engagement implies that, beyond the technical skills needed to access data, individuals must also know what to do with data in order to bring positive changes to society. As digital technologies become prevalent, they also increasingly play gatekeeper to economic opportunities and social inclusion. Absent data literacy and its ensuing affordances, inequality endures among disadvantaged groups.
Thus, the proposition that data literacy serves as a catalyst for social change requires further analysis, particularly in those marginalized communities where limited access to ICTs has contributed to an increasing digital divide. Wherever Internet access is available in remote and impoverished areas, the extent to which an individual will use data for the benefit of her community is mediated by her perception of Internet use and by the social dynamics of the community where she lives. In order to further understand the socio-cultural dynamics that impact ICT adoption among the rural poor, this study set out to examine how individuals in remote rural communities perceive Internet use, and to analyze the factors that impact those perceptions. --- Methods The following is a description of the methods used in the analysis of the factors that motivate individuals to appropriate ICTs and of the behaviors and attitudes that advance both data literacy and digital literacy in remote rural communities. Between May 17 and 22, 2012, trained facilitators collected data from residents of the mostly agricultural communities of Tombadouro and São Gonçalo do Rio das Pedras, in Minas Gerais, Brazil, using an interviewer-assisted survey. Each of these villages has an estimated population of about 1,000 residents, the majority of whom subsist on agricultural, extractivist, or tourism-related economic activities. Respondents were required to provide written consent, and parental consent was obtained for those under the age of 18. The survey was administered with a systematic point sampling technique and provides a statistically valid representation of the sampled population.
Since one of the primary interests of this research was to categorize individual perception of Internet use across a few general categories, factor analysis was applied to survey data that measured perception of the Internet on a Likert scale for each of 10 items, among them "It helped solve problems in the community" and "It caused quarrels in the community". Principal components factoring was applied to extract the initial factors. As observed in Tables 1 and 1bis, the first four components have eigenvalues greater than 1 and together explain about 70% of the combined variance of all 10 variables. The decision on the number of factors to be extracted is depicted graphically in Figure 1. The next step was the rotation of the factor structure using the varimax method. As shown in Figure 1, variables q11a, q11d, q11i and q11j load highest on factor 1. All of these variables refer to benefits for the community. As for factor 2, variables q11c, q11f, and q11g load most heavily on it; it is interesting to note that all of these variables are related to benefits for the youth. In the case of factor 3, all associated variables, q11e and q11h, refer to economic benefits. Finally, the last factor consists essentially of variable q11b, which can be associated with a negative perception of Internet use. --- Results Tables 2, 2bis and 2plus show the results for the extraction of the four factors and the variables associated with them. The imagery of this procedure is provided in Figure 2. Based on the results from factor analysis, we observe that individual perception of Internet use in these communities can be classified into four broad categories: social benefits for the community, opportunities for the youth, economic benefits for the community, and negative impact for the community. 
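The component-retention step described above (keep components with eigenvalues greater than 1, then check the combined variance they explain) can be sketched as follows. The eigenvalues here are hypothetical stand-ins, not the study's actual values; they merely illustrate the Kaiser criterion with four retained components explaining 70% of the variance.

```python
def kaiser_select(eigenvalues):
    """Keep components whose eigenvalue exceeds 1 (Kaiser criterion)
    and report the share of total variance they explain."""
    kept = [ev for ev in eigenvalues if ev > 1.0]
    explained = sum(kept) / sum(eigenvalues)
    return kept, explained

# Hypothetical eigenvalues for 10 standardized Likert items
# (total variance equals the number of items, i.e., 10).
eigenvalues = [2.9, 1.8, 1.2, 1.1, 0.8, 0.6, 0.5, 0.4, 0.4, 0.3]
kept, explained = kaiser_select(eigenvalues)
print(len(kept), explained)  # 4 components, 70% of variance
```

In practice the retained factor structure would then be rotated (e.g., varimax) before interpreting the loadings, as the study does.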
The questions that emerge from these results are: which of these factors did respondents consider most relevant for the community, and do people in general share the same perception, or does perception of Internet use vary within the community according to social markers such as gender? The answers to these questions could shed light not only on how people in marginalized communities think about the use of the Internet, but also on the likelihood and patterns of Internet appropriation, and the potential for data literacy. These are questions this study explored through the use of regression models. Previous studies have shown how the use of mobile telephony and the Internet separately may increase the productivity of microbusinesses in ways that bring positive economic effects to small communities. This, in turn, can contribute to delivering better sources of information and communication to the population . However, the underlying assumption in each case is that people with access to ICTs will automatically experience positive social change, without regard to ex-ante perceptions that might condition ICT adoption. One of the interests of this study is to further explore how people's perceptions impact the use of the Internet via mobile telephony. A separate regression model was used to estimate this, where the dependent variable is a latent variable that refers to individual use of the Internet on a mobile phone. This was operationalized through a proxy variable in question 3, which refers to the use of the Internet via mobile phone as one of the reasons for not visiting the telecenter. Future studies may consider a more adequate proxy. As for the explanatory variables, besides people's perception of the use of the Internet, demographic variables were also included . Since the dependent variable is related to the reasons for not going to the telecenter, it is important to control for other factors, such as having a computer at home. 
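A binary logit of the kind used for the mobile-Internet model estimates coefficients by maximizing the log-likelihood. The sketch below does this with plain gradient ascent on synthetic data; the variable names and data are illustrative inventions, not the study's variables or estimates.

```python
import math

def fit_logit(xs, ys, lr=0.1, iters=2000):
    """Fit P(y=1|x) = 1/(1+exp(-(a + b*x))) by gradient ascent
    on the log-likelihood of a single-covariate binary logit."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p          # d logL / d a
            gb += (y - p) * x    # d logL / d b
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b

# Illustrative data: y = 1 means the respondent checks the Internet
# on a mobile phone; higher education scores co-occur with y = 1.
education = [0, 0, 1, 1, 2, 2, 3, 3]
mobile    = [0, 0, 0, 1, 0, 1, 1, 1]
a, b = fit_logit(education, mobile)
print(b > 0)  # positive association, mirroring the study's education finding
```

A production analysis would of course use a statistical package with standard errors and multiple covariates; the point here is only the shape of the model.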
--- Discussion The results of the multinomial logit are shown in Table 3. The last four columns refer to each of the broad categories related to individual perception of Internet use. For instance, the second column, the base outcome, refers to individuals who perceive that use of the Internet brings social benefits for the community. The regression in the third column refers to individuals who think that use of the Internet brings opportunities for the youth. Column four shows a similar regression for those who perceive that use of the Internet can bring economic benefits for the community, and column five shows those who have a negative perception of Internet use. --- Base category for education: Unfinished lower school The estimates indicate that women were more inclined than men to perceive that Internet use provides social benefits for the community over and above all else. In the category related to education, results show that, in general, individuals with higher levels of education are more likely to perceive that use of the Internet provides more opportunities for the youth or more economic benefits for the community. The regression model of Internet use with mobile phones is also a probabilistic model. Given the characteristics of the dependent variable, we estimated a simple logit model . Not surprisingly, there was a lower probability that individuals with a negative perception of the Internet would check the Internet on their mobile phones. Results also show that women were less likely than men to surf the Internet on their mobiles. As for the level of education, people with a college degree were more likely to check the Internet on their mobile phones than those who did not finish lower school. Other factors were not found to be statistically significant at the 0.05 level. A likelihood-ratio chi-squared statistic compares the intercept-only model against the full model, with degrees of freedom equal to the number of covariates in the full model . The p-value for this chi-squared is 0.029. 
Based on this, it is possible to reject the null hypothesis that the intercept-only model predicts better than the full model. In other words, the full model is a better fit. It is worth noting that the estimated variance for our latent dependent variable is 4.488; this is the proportion of change due to the variation of the covariates. The existence of a gender divide in ICT use has been widely documented; over the past decade, researchers have pinpointed the various challenges to adoption and identified contradictory factors that contribute to the persistence of this gap . The present study corroborates that evidence, having found that gender and education impact one's perceptions of ICT use. The results show that women are likely to perceive the benefits of ICT differently from men. The findings also indicate that women are less likely to rely on mobile connectivity to access the Internet. Overall, the findings lend support to the view that ICT adoption is not gender-neutral and that successful digital inclusion requires minimal educational standards be met . For women to participate effectively in an increasingly digital society, structural access to ICTs is a first step, but instruction in data literacy must follow. The effect of education is of particular interest since around 67 percent of the respondents showed low levels of educational attainment. This confirms previous findings by Olinto and Fragoso and introduces two possible outcomes: Internet use and data literacy can either drive positive social change or contribute to a dystopia. The first outcome is expected when individuals are engaged with their communities and become actors of social change through data use. The second outcome is likely to happen when data literacy exacerbates existing structural problems like inequality and social exclusion . 
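The model-fit comparison reported above is a likelihood-ratio test: twice the difference in log-likelihoods between the full and intercept-only models is referred to a chi-squared distribution whose degrees of freedom equal the number of covariates. The study does not state its degrees of freedom, so the sketch below uses generic numbers; it implements the chi-squared survival function, which has a closed form for even degrees of freedom.

```python
import math

def chi2_sf(x, df):
    """Chi-squared survival function P(X > x), closed form for even df:
    sf(x; 2m) = exp(-x/2) * sum_{i=0}^{m-1} (x/2)^i / i!"""
    assert df % 2 == 0, "closed form shown only for even df"
    m = df // 2
    term, total = 1.0, 0.0
    for i in range(m):
        if i > 0:
            term *= (x / 2) / i  # builds (x/2)^i / i! incrementally
        total += term
    return math.exp(-x / 2) * total

# Any LR statistic whose p-value falls below 0.05 rejects the
# intercept-only model, as the study's p = 0.029 does.
print(chi2_sf(5.991, 2))  # ≈ 0.05, the classic 2-df critical point
```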
Whereas previous research by Gandy examines the latter scenario, Prado, Camara, and Figueiredo point to the former: if education contributes to a more positive perception and increased Internet use, then greater investments in digital inclusion might reduce inequality. This study found that women were more likely than men to perceive Internet use as a source of information sharing and collaboration that was beneficial to the social and economic life of their community. On the other hand, women were less likely to perceive the Internet as a valuable source of opportunity for the young. These results hint at a limited grasp of the potential of the Internet. They may reflect the consequences borne of lower rates of ICT adoption and data literacy among women. Absent the knowledge, exposure, and skill sets, women would be less likely to value content they do not thoroughly understand or know. Whereas women are able to see the Internet as a new educational and informational tool, their limited data literacy undermines their grasp of its capabilities as a tool for economic or professional development. This would seem to be the case, given that better-educated individuals were found to perceive the economic benefits of the Internet for the community and for its youth. Those same individuals were, however, less likely to see Internet use as otherwise beneficial to the social life of the community. The above findings are of concern because social capital has been shown to play an important role in facilitating ICT adoption by rural women in the developing world . Furthermore, individual motivation and self-interest have been found to be key markers among community leaders who champion ICT use (Wilson). Absent these qualities, ICT adoption is less likely to take hold. The gender gap in perceptions of ICT use suggests an existing disparity in access to skills and knowledge among women and men. 
It also identifies how a gap in educational attainment manifests in a poor understanding of the capabilities and potential that can result from digital inclusion and data literacy. --- Conclusions In 2015, when more than three billion people in the world could connect to the Internet, mobile-cellular subscriptions worldwide reached more than seven billion . As mobile connectivity becomes increasingly available in rural areas and the next generation of Internet-enabled devices looms, more attention must be paid to the gender gap in mobile ICT adoption. Insofar as the persistence of a gender divide in rural communities impacts digital inclusion and data literacy, it presents an obstacle to the promotion of human development in these areas of the developing world. The findings in this study expand our understanding of the nature and scope of ICT adoption among the rural poor. They are nonetheless limited by the small sample sizes and limited geographic scope of this study. More research is needed to understand the extent to which the gender gap impacts ICT adoption in remote rural communities.
Results confirm the presence of a gender divide in ICT adoption. Women were more likely to perceive that ICT use brings social benefits to the community, and considered that ICTs provide better opportunities for the young.
I. INTRODUCTION Users register on social networks to keep in touch with friends, as well as to meet new people. Research has shown that a large majority of the people we meet online and add as friends are not random social network users; these people are introduced into our social graph by friends [2]. Although friends can enrich the social graph of users, they can also be a source of privacy risk, because a new relationship always implies the release of some personal information to the new friend as well as to the friends of the new friend, who are strangers to the user. This problem is aggravated by the fact that users can reference resources of other users in their social graph, making it very difficult to control the resources published by a user. This uncontrolled information flow highlights the fact that creating a new relationship might expose users to privacy risks. We cannot assume that friends will make the right choices about friendships, because friends may have a different view on the people they want to be friends with. Considering this, the privacy of a social network user should be protected by building a model that observes the friendship choices of friends and assigns a risk label to friends accordingly. Such a model requires knowing a user's perception of the risks of friends of friends. We made a first effort in this direction in [3] by proposing a risk model to learn risk labels of strangers by considering several dimensions. To validate the model, we developed a browser extension showing for each stranger his/her profile features, his/her privacy settings, and mutual friends. Based on this information, the user is asked to give a risk label l ∈ {1, 2, 3} to the stranger. These risk labels correspond to a not risky, risky, and very risky classification of a stranger. Through the extension, 47 users have labeled 4013 strangers. However, we did not consider the risk of friends. 
This new work starts by considering two factors in assigned risk labels. First, strangers can be risky only because of their profile features. Second, a friend himself can increase or decrease the risk of a stranger. Increases and decreases will be termed negative and positive friend impacts, respectively. In any case, if a risky stranger is introduced into the user's social graph, it is because of his/her friendship with a friend. However, determining the friend impacts can help us determine which privacy actions should be taken to avoid data disclosure. We aim at learning how risk labels are assigned to strangers depending only on their profile features, and how much a friend can impact these labels. If strangers are risky just because of their profile features, privacy settings can be restricted to avoid only these strangers. On the other hand, if a friend increases the risk labels of strangers, all of his/her strangers should be avoided. We begin our discussion by reviewing the related work in Section II. In Section III we explain the building blocks of our model, and Section IV shows how we use our dataset efficiently. In Section V we discuss the role of profile features in risk labels, and in Section VI we show how impacts of friends are modeled. Section VII explains finding risk labels of friends from friend impacts, and in Section VIII we give the experimental results. --- II. RELATED WORK Friends' role in user interactions has been studied in sociology [19], but observing it on a wide scale was not possible until online social networks attracted millions of users and provided researchers with social network data. For online social networks, Ellison et al. [7] defined friends as social capital in terms of an individual's ability to stay connected with members of a previously inhabited community. Differing from this work, we study how friends can help users interact with new people on social networks. 
Although these interactions can increase users' contributions to the network [21] and help the social network evolve through the creation of new friendships [23], they can also impact the privacy of users by disclosing profile data. Squicciarini et al. [20] have addressed concerns of data disclosure by defining access rules that are tailored by 1) the users' privacy preferences, 2) the sensitivity of profile data and 3) the objective risk of disclosing this data to other users. Similarly, Terzi et al. [14] have considered the sensitivity of data to compute a privacy score for users. Although these works regulate profile data disclosure during user interactions, they do not study the role of friends who connect users on the social network graph and facilitate interactions. Indeed, research works have been limited to finding the best privacy settings by observing the interaction intensity of user-friend pairs [4] or by asking the user to choose privacy settings [8]. Without explicit user involvement, Leskovec et al. [12] have shown that the attitude of a user toward another can be estimated from evidence provided by their relationships with other members of the social network. Similar works try to find friendship levels of two social network users . Although these works can explain relations between social network users, they cannot show how the existence of mutual friends can change these relations. Privacy risks associated with friends' actions in information disclosure have been studied in [22], but the authors work with direct actions of friends, rather than their friendship patterns. Recent privacy research has focused on creating global models of risk or privacy rather than finding the best privacy settings, so that ideal privacy settings can be mined automatically and presented to the user more easily. In [3], Akcora et al. prepared a risk model for social network users in order to regulate personal data disclosure. Similarly, Terzi et al. 
[14] have modeled privacy by considering how sensitive personal data is disclosed in interactions. Although users assign global privacy or risk scores to other social network users, friend roles in information disclosure are ignored in these works. An advantage of global models is that once they are learned, privacy settings can be transferred and applied to other users. In such a shared privacy work, Bonneau et al. [6] use suites of privacy settings which are specified by friends or trusted experts. However, the authors do not use a global risk/privacy model, and users should know which suites to use without knowing the risk of the social network users surrounding them. --- III. OVERALL APPROACH We will start this section by explaining the terminology that will be used in the paper. In what follows, on a social graph G_u, nodes at 1-hop distance from u are called friends of u, and nodes at 2-hop distance are called strangers of u, i.e., strangers of user u are friends of friends of u. We will denote all strangers of user u by S_u, and the risk label of each stranger s ∈ S_u that was labeled by u will be denoted by l_us ∈ {1, 2, 3}. A social network G = (N, E) is a collection of a set N of nodes and a set E ⊆ N × N of undirected edges. Profiles is a set of profiles, one for each node n ∈ {1, ..., |N|}. A social graph G_u = (V, R) is constructed from the social network G for each user u ∈ N, such that the node set V = {n ∈ N | distance(u, n) ≤ 2}. Nodes in G_u consist of friends and strangers of u. Similarly, the edge set R consists of all edges in G among nodes in V. Each node v ∈ V in a social graph will be associated with a feature vector f_v ∈ F. Cells of f_v correspond to profile feature values from the associated user profile in Profiles. The goal of our model is to assign risk labels to friends according to the risk labels of their friends . As we stated before, risk labels of strangers depend on stranger features as well as mutual friends [3]. 
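Under these definitions, friends are the 1-hop neighbors of u and strangers the 2-hop neighbors (friends of friends, excluding u and u's own friends). A minimal sketch of extracting F_u and S_u from an adjacency-set representation; the toy graph and function name are ours, purely for illustration:

```python
def friends_and_strangers(adj, u):
    """Return (friends, strangers) of u: friends are 1-hop neighbors,
    strangers are 2-hop neighbors, i.e., friends of friends that are
    neither u nor already friends of u."""
    friends = set(adj[u])
    strangers = set()
    for f in friends:
        strangers.update(adj[f])
    strangers -= friends
    strangers.discard(u)
    return friends, strangers

# Toy undirected graph: u knows a and b; c is a friend of a only,
# so c is a stranger of u.
adj = {
    "u": {"a", "b"},
    "a": {"u", "b", "c"},
    "b": {"u", "a"},
    "c": {"a"},
}
friends, strangers = friends_and_strangers(adj, "u")
print(sorted(friends), sorted(strangers))  # ['a', 'b'] ['c']
```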
We do not assume that all friends change users' risk perception in the same way. Some friends can make strangers look less risky and facilitate interactions with them . On the other hand, some friends can make strangers look more risky . For example, if users do not want to interact with some friends, they might avoid friends of these friends as well. We will use positive and negative impacts to refer to decreases and increases in stranger risk labels, respectively. To understand whether friends have negative or positive impacts, our model must be able to know what risk label the stranger would receive from the user if there were no mutual friends. This corresponds to the case where the user-given label depends only on stranger features. We will term this projected label the baseline label, and denote it by b_us. For instance, assume that if there are no mutual friends, a user u considers all male users as very risky, and avoids interacting with them. In this case, the baseline label for a male stranger s is very risky, i.e., b_us = very risky. However, if the same male stranger s has a mutual friend with user u, we assume that the user-given label l_us might not be equal to the baseline label b_us, because the mutual friend might increase or decrease the risk perception of the user. This difference between the baseline and user-given labels will be used to find out friend impacts. Finding baseline labels and friend impacts requires different approaches. In baseline estimates, we use logistic regression on stranger features, and for the friend impacts we use multiple linear regression [17]. Both of these regression techniques require many user-given labels to compute baseline labels and friend impacts with high confidence. However, users are reluctant to label many strangers, therefore we have to exploit the few labels we have to achieve better results. To this end, we transform our risk dataset, and use the resulting dataset in regression analyses. 
In the next sections, these transformation and regression steps will be described in detail. Overall, we divide our work into four phases as follows: 1) Transformation: Exploit the risk label dataset in such a way that regression analyses for baseline labels and friend impacts can find results with high confidence. With this step, we increase the number of labels that can be used to estimate baseline labels and friend impacts. 2) Baseline Estimation: Find baseline labels of strangers by logistic regression analysis of their features. 3) Learning Friend Impacts: Create a multiple linear regression model to find friends that can change users' opinion about strangers and result in a different stranger label than the one found by baseline estimation. 4) Assigning Risk Labels to Friends: Analyze the sign of friend impacts, and assign higher risk labels to friends who have negative impacts. --- IV. TRANSFORMING DATA By transforming the data, we aim at using the available data efficiently to find friend impacts with higher confidence. To this end, we first transform the profile features of friends and strangers in order to use the k-means and hierarchical clustering algorithms [9] on the resulting profile data. This section will discuss the transformation, and briefly explain the clustering algorithms. Our model has to work with few stranger labels, because users are reluctant to label many strangers. This limitation is also shared in Recommender Systems [16], where the goal is to predict ratings for items with a minimum number of past ratings. In neighborhood-based RS [11], ratings of other similar users are exploited to predict ratings for a specific user. Traditionally, the definition of similarity depends on the characteristics of the data, and it has to be chosen carefully. We use profile data of friends and strangers in defining similar friends and strangers, respectively. Friend impacts of a user u are learned from impacts of similar friends of all other users. 
To this end, we transform profile data of friends and strangers in such a way that friends and strangers of different users are clustered into global friend and stranger clusters. The next sections will describe the aims and methods of friend and stranger clustering in detail. --- A. Clustering Friends Clustering friends aims at learning friend impacts for a cluster of friends. This is because we might not have enough stranger labels to learn impacts of individual friends with high confidence. To overcome this data disadvantage, the impact of a friend f can be used to find the impacts of other friends who belong to the same cluster. For example, a user from Milano can have a friend from Milano, whereas a user from Berlin can have a friend from Berlin. Although these two friends have different hometown values, we can assume that both friends can be clustered together because their hometown feature values are similar to the user values. This hometown example demonstrates a clustering based on a single friend profile feature, and it results in only two clusters: friends who are from Milano/Berlin and friends who are from somewhere else. However, in real-life social networks, friends have many values for a feature, some of which can be more similar to the user's value than others. For example, Italian friends of a user from Milano can be from Italian cities other than Milano, and these friends should not be considered as dissimilar as friends from Berlin. Considering this, we transform categorical friend values to numerical values in such a way that similarities between friend and user values become more accurate. Our transformation uses the homophily [15] assumption, which states that people create friendships with other people who are similar to them along profile features such as gender, education etc. In other words, we assume that all friends of a user u can be used to judge the similarity of a social network user to u. 
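The frequency-based transformation just described maps each categorical value on a friend's profile to its relative frequency among all of the user's friends. A minimal sketch, with toy profiles and a function name of our own choosing:

```python
def frequency_transform(friend_profiles):
    """Map each categorical feature value of each friend to its
    frequency among all of the user's friends (|Sup| / |F_u|)."""
    n = len(friend_profiles)
    rows = []
    for f in friend_profiles:
        row = {}
        for feat, val in f.items():
            # Count friends sharing this value for this feature.
            sup = sum(1 for g in friend_profiles if g.get(feat) == val)
            row[feat] = sup / n
        rows.append(row)
    return rows

# Toy profiles: 2 of 4 friends are from Milano, so those friends'
# hometown cells become 0.5; the Berlin and Rome friends get 0.25.
friends = [
    {"hometown": "Milano"},
    {"hometown": "Milano"},
    {"hometown": "Berlin"},
    {"hometown": "Rome"},
]
rows = frequency_transform(friends)
print(rows[0]["hometown"], rows[2]["hometown"])  # 0.5 0.25
```

Stacking such rows across all users yields the Social Frequency Matrix the paper defines next.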
For example, considering the case where the user u is from Milano, a social network user from Rome is similar to the user if the user has many friends from Rome. Moreover, we assume that different users will have similar clusters of friends, e.g., friends from the user's hometown, alma mater etc., and friend impact values will be correlated with their corresponding clusters, e.g., friends from hometowns will have similar impact values. More precisely, the transformation of friends' data maps a categorical feature value of a friend, such as hometown:Milano, to a numerical value which is equal to the frequency of the feature value among profiles of all friends of a user. For example, if a friend f has the profile feature value hometown:Milano, and there are 15 out of 100 friends with similar hometown:Milano values, the hometown feature of f will be represented with 15/100 = 0.15. After applying this numerical transformation to all friends of all users, we compute a Social Frequency Matrix for Friends where each row represents the numerical transformation of the feature vector of a user's friend. Definition 1: The Social Frequency Matrix associated with a social network G is defined as a |N| × |F| × n matrix, where N is the set of users in G, F ⊂ N is the set of users in G that are friends of at least one user u ∈ N, and n is the number of features of user profiles. Each element value of the matrix is given by: SFMF[u, f, v] = |Sup| / |F_u| where F_u ⊂ F is the set of friends of u, Sup = {g ∈ F_u | g_v = f_v} and f ∈ F_u, whereas g_v and f_v show the value of profile feature v for users g and f, respectively. Having transformed friend data into numerical form, we can now use a clustering algorithm to create clusters of friends. After applying a clustering algorithm to the Social Frequency Matrix for friends, output friend clusters will be denoted by FC. --- B. 
Clustering Strangers By clustering friends, we can learn impacts of friends from different clusters, but this raises another question: do friends have an impact on all strangers of users? Our assumption is that correlation between stranger and friend profile features can reduce or increase friend impact. For example, if a student user u labels friends of a classmate friend f, we might expect friends of f who are professors to have higher risk labels than student friends of f, because u might not want his/her professors to see his/her activities and photos. Here the work feature of strangers changes the friend impact of f by increasing the risk label of professor friends of f. To see how friend and stranger features change friend impacts, we transform strangers' profile data to numerical data and cluster the resulting matrix just like we clustered friends. This clustered stranger representation helps us detect clusters of strangers for whom certain clusters of friends can change the risk perception of users the most. Formally, we prepare a social frequency matrix as follows: Definition 2: The Social Frequency Matrix for Strangers associated with a social network G is defined as a |N| × |S| × n matrix, where N is the set of users in G, S ⊂ N is the set of users in G that are strangers of at least one user u ∈ N, and n is the number of features of user profiles. Each element value of the matrix is given by: SFMS[u, s, v] = |Sup| / |F_u| where Sup = {g ∈ F_u | g_v = s_v} and s ∈ S, whereas s_v shows the value of feature v for stranger s. Note that we still use friend profiles in the denominator to transform stranger data. This is because we cannot see all strangers of a friend due to API limitations of popular social networks. To overcome this problem, we use friend profiles because we expect them to be similar to the profiles of their own friends . We again use the Social Frequency Matrix for strangers to create clusters of strangers. 
We will denote these stranger clusters by SC. --- C. Clustering Algorithms In our experiments, we used the k-means and hierarchical algorithms [9] to produce clusters of friends and strangers. This section will briefly explain these algorithms. In what follows, we will use data points and strangers/friends interchangeably to mean elements in a cluster. The k-means clustering algorithm takes the number of final clusters as input and clusters the data by successively choosing cluster seeds and refining the assignments to reduce within-cluster distances. The required number of final clusters is usually unknown beforehand, and this makes k-means unfeasible in some scenarios. However, in our model it gives us the flexibility to experiment with different sizes of clusters. k-means is also a fast clustering algorithm, which suits our model for the cases where all friends of all users can number a few thousand. In our experiments, we used different k values to find optimal performance. In hierarchical clustering, a tree structure is formed by joining clusters, and the tree is cut horizontally at some level to produce a number of clusters. In friend and stranger clustering, choosing the number of final clusters or the horizontal level requires some trade-offs. The advantage of using many clusters is that data points in each cluster are more similar to each other . On the other hand, too many clusters decrease the average number of data points in a cluster, and our model may not be trained on these clusters with high confidence, i.e., there may not be enough data points in a cluster to prove anything. Using too few clusters also has a disadvantage: final clusters may contain too many data points that are not very similar to each other. This decreases the quality of inferences, because what we infer from some data points might not be valid for others in the same cluster. 
Despite this, if data points are naturally homogeneous, the similarity among data points in a big cluster can be high. As a result, a big cluster may offer more data to support our inferences with more confidence. --- V. BASELINE ESTIMATION After transforming our data and creating friend and stranger clusters, we will now explain baseline label estimation for stranger clusters. Baseline estimation analyzes how feature values on stranger profiles lead users to assign specific risk labels to strangers. The baseline estimation process results in baseline labels for each stranger s ∈ S. These labels are found by using statistical regression methods on already given user labels and stranger profile features. In this section we will discuss this process. Baseline estimation corresponds to the case where a user would assign a risk label to a stranger without knowing which of his/her friends are also friends with the stranger. Figure 1 shows an example of baseline estimation. In the figure, each stranger s ∈ S_u is a node surrounded by a ring representing his/her feature vector f_s. Each cell in the feature vector corresponds to a feature value of the stranger . Different colors for the same cell position represent different values for the same feature on different stranger profiles. In the example shown in Figure 1, strangers S_2, S_4 and S_5 are labeled with 2 . These three strangers share the same feature vector, as shown with the same colored cells. Based on these observations, if any stranger has the same feature vector as S_2, S_4 and S_5, the stranger will be given label 2. The evidence to support this statement comes from the three strangers, and the number of such strangers determines the confidence of the system in assigning baseline labels. 
Although in Figure 1 stranger features are shown to be the only parameter in defining stranger labels, in our dataset labels of strangers have been collected from users by explicitly showing at least one mutual friend in addition to the stranger feature values. Because of this, stranger labels that are learned from users can be different from baseline labels; they can be higher or lower depending on the friend impact. Considering this, in baseline estimation we use the labels of strangers who have the least number of mutual friends with users. These are the subset of labels which were given to strangers who have only one friend in common with users, i.e., for user u and stranger s, |F_u ∩ F_s| = 1. In what follows, we will use the term first group dataset to refer to these strangers. In our approach, we use logistic regression to learn the baseline labels from the available data. This allows us to work with categorical response variables . Stranger features are used as explanatory variables and risk labels as the response variable, which is determined by the values of the explanatory variables . Although the response variable has categorical values, it can be considered ordinal because risk labels can be ordered as not risky, risky and very risky. Ordinary logistic regression is used to model cases with binary response values, such as 1 or 0, whereas multinomial logistic regression is used when there are more than two response values. As multinomial logistic regression is a variant of basic logistic regression, we will first start with the definition of logistic regression. For this purpose, assume that our three risk labels are reduced to two. Suppose that π represents the probability of a particular outcome, such as a stranger being labeled risky, given his/her profile features as a set of explanatory variables x_1, ..., x_n: P = π = e^(α + Σ_k β_k X_k) / (1 + e^(α + Σ_k β_k X_k)) where 0 ≤ π ≤ 1, X_k is a feature value, α is an intercept and the βs are feature coefficients, i.e., weights for feature values. 
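Given fitted values for α and the β coefficients, π follows directly from this formula, and a real-valued baseline label can be obtained by weighting each risk label by its predicted probability. The coefficient and probability values in the sketch below are hypothetical, not estimates from the paper's data:

```python
import math

def logistic_prob(alpha, betas, xs):
    """pi = exp(a + sum(b_k * x_k)) / (1 + exp(a + sum(b_k * x_k)))."""
    z = alpha + sum(b * x for b, x in zip(betas, xs))
    return math.exp(z) / (1.0 + math.exp(z))

def baseline_label(label_probs):
    """Weighted average of the labels 1 (not risky), 2 (risky),
    3 (very risky) by their predicted probabilities."""
    return sum(label * p for label, p in label_probs.items())

# Hypothetical fitted logit: intercept 0.7, coefficients 1.2 and 0.3.
pi = logistic_prob(0.7, [1.2, 0.3], [1, 0])
print(round(pi, 2))  # 0.87

# Hypothetical three-label distribution collapsed to a real value.
print(baseline_label({3: 0.90, 2: 0.09, 1: 0.01}))  # 2.89
```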
The logit transformation log[π/(1-π)] is used to linearize the regression model:

log[π/(1-π)] = α + Σ_k β_k X_k

By transforming the probability of the response variable to a log odds-ratio, we can now use a linear model. Given the already known stranger features and labels, we use Maximum Likelihood Estimation [18] to learn the intercept value and the coefficients of all features. Although standard binary logistic regression and multinomial logistic regression use the same definition, they differ in one aspect: multinomial regression chooses a reference category and works with not one but N-1 log odds, where N is the number of response categories. In our model, N = 3, because the response has three labels. In both binary and multinomial logistic regression, intercept and coefficient values are found by using numerical methods to solve the linearized equation. With the found values, we can write the odds ratio as an equation. For example, in the equation log[π/(1-π)] = 0.7 + 1.2 × X_1 + 0.3 × X_2, the intercept value is 0.7 and the feature coefficients are 1.2 and 0.3. We can then plug in a new set of feature values and get the probabilities of the response value being each of the three labels. For example, for a specific stranger, the model can tell us that the risk label probabilities of the stranger are distributed as 0.9 very risky, 0.09 risky and 0.01 not risky. As we can compute baseline labels in real values, a stranger s ∈ S is assigned a baseline label by a weighted average of the probabilities of the risk labels. --- VI. FRIEND IMPACT So far, we have discussed clustering and baseline label estimation. In this section we will first discuss how these two aspects of our model are combined to compute friend impacts. After finding friend impacts, we will discuss how risk labels can be assigned to friends by considering the sign of the impact values. In computing friend impacts, we use multiple linear regression [17], which learns friend impacts by comparing baseline and user given labels of strangers.
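This baseline estimation can be sketched in a few lines of scikit-learn. The feature vectors and labels below are invented for illustration (they are not the paper's dataset); the point is the mechanics: fit a multinomial logistic regression on stranger features, then compute a real-valued baseline label as the probability-weighted average of the three risk labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy first-group dataset: each row is a stranger's feature vector,
# each label is the user-given risk label (1=not risky, 2=risky, 3=very risky).
X = np.array([[1, 0, 3], [1, 1, 2], [0, 0, 5], [0, 1, 4], [1, 0, 3], [0, 0, 5]])
y = np.array([1, 1, 3, 2, 1, 3])

# Multinomial logistic regression learns one log-odds equation per
# non-reference label (N-1 log odds for N=3 labels).
model = LogisticRegression().fit(X, y)

# Baseline label for a new stranger: probability-weighted average of labels.
probs = model.predict_proba([[1, 0, 3]])[0]        # P(label=1), P(label=2), P(label=3)
baseline = float(np.dot(model.classes_, probs))    # real-valued baseline label
print(round(baseline, 2))
```

The weighted average is what lets the model output a continuous baseline label (e.g. 1.4) rather than forcing a choice among the three discrete labels.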
To this end, we define an estimated label parameter to use in linear regression as follows: Definition 3: For a stranger s and a user u, an estimated label is defined as:

l̂_us = b_us + Σ_{FC_i ∈ FC} FI_{FC_i,SC_j} × Past_us

where l̂_us and b_us are the estimated and baseline labels for a stranger s, and s belongs to the stranger cluster SC_j ∈ SC. Friend clusters FC are found by applying a clustering algorithm to the mutual friends of user u and stranger s. Past_us denotes an intermediary value based on stranger labels given by user u, whereas FI_{FC_i,SC_j} represents the impact of a friend f from a friend cluster FC_i on the label of stranger s from a stranger cluster SC_j. In the rest of this section, we will define the Past and FI parameters, and explain how they are used to compute friend impacts. --- A. The Past Labeling Parameter We start by discussing the past parameter Past, which returns a value from past labelings of strangers by user u. The past parameter is traditionally used in recommender systems to adjust the baseline estimate [16]. The need for this parameter arises from the fact that the baseline estimation is computed from the labels of all strangers who have only one mutual friend with user u, and it tends to be a rough average. To overcome this, a subset of strangers, who are very similar to s and who have been labeled in the past by u, are observed, and the baseline label is increased or decreased to make it more similar to the user given labels of these strangers. In defining the past parameter, we consider two factors: how many similar strangers should be considered in this adjustment, and what is an accurate metric for finding the similarity of two strangers? For the first question, we use the computed stranger clusters. For a stranger s, similar strangers from the first group dataset are those that are labeled by the same user u and that belong to the same stranger cluster as s.
Although we use stranger clusters to choose similar strangers, the similarity of strangers in a cluster can be low or high depending on the clustering process: with too few clusters the similarity of strangers in a cluster can be low, while with many clusters it can be high. We adjust the baseline labels by considering labels given to the most similar strangers. To this end, we use the profile similarity measure by Akcora et al. [2]. This measure assigns a similarity value of 1 to strangers with identical profiles, and for non-identical profiles the similarity value is higher for strangers whose profile feature values are more common among the profile features of u's friends. Formally, we define the past labeling as follows: Definition 4: For a given user u and stranger s, the past labeling parameter is defined as:

Past_us = (1/|SC_i|) Σ_{x ∈ SC_i} PS(s, x) × (l_ux - b_ux)

where PS(s, x) denotes the profile similarity between the two strangers, l_ux is the user given label of stranger x, and b_ux is the baseline label of x. Strangers s and x belong to the same stranger cluster SC_i.

[Fig. 2. Friend impact definitions by considering the number of friends from the same cluster: (a) multiple impacts for a friend cluster; (b) single impact for a friend cluster. In the single impact definition, two friends do not increase the friend impact.]

--- B. The Friend Impact Parameter The second parameter from Definition 3, FI, is used to show the impacts of mutual friends on the risk label given to s by u. In modeling friend impacts, we wanted to see how friends from different clusters changed the baseline label. By using this approach, we explain the impacts of friend clusters in terms of the friend features that shape friend clusters. If there is at least one mutual friend from a friend cluster FC_i, we say that friend cluster FC_i may have impacted the label given to the stranger s.
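Definition 4 can be sketched directly. The similarity function and labels below are illustrative stand-ins (the paper uses the Akcora et al. profile similarity measure); the sketch shows only the averaging of similarity-weighted deviations of user labels from baselines.

```python
# Sketch of the Past adjustment (Definition 4), with a hypothetical
# profile-similarity function; names and numbers are illustrative.
def past_parameter(s, cluster, user_labels, baseline_labels, similarity):
    """Average similarity-weighted deviation of user labels from baseline
    labels over strangers x in the same cluster as s."""
    total = 0.0
    for x in cluster:
        total += similarity(s, x) * (user_labels[x] - baseline_labels[x])
    return total / len(cluster)

# Toy example: two previously labeled strangers in s's cluster.
cluster = ["x1", "x2"]
user_labels = {"x1": 2.0, "x2": 3.0}
baseline_labels = {"x1": 2.5, "x2": 2.5}
sim = lambda s, x: 1.0 if x == "x1" else 0.5   # hypothetical similarities
print(past_parameter("s", cluster, user_labels, baseline_labels, sim))
```

A negative result pulls the estimated label below the baseline, a positive result pushes it above, exactly the adjustment the parameter is meant to provide.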
For the cases where a stranger s has two or more mutual friends from a friend cluster FC_i, we experimented with both options for FI. Next, we will explain these options. 1) Multiple Impact for the Friend Cluster: In our first approach, we assume that a bigger number of mutual friends from friend cluster FC_i ∈ FC will impact user labeling. Assume that from a friend cluster FC_i ∈ FC, we are given a set of mutual friends MF_i = {∀f | f ∈ FC_i, f ∈ {F_u ∩ F_s}} of user u and stranger s. We define the impact of friend cluster FC_i on the label of stranger s ∈ SC_j as follows:

FI_2 = |MF_i| × I_{FC_i,SC_j}

where I_{FC_i,SC_j} is the impact of friend cluster FC_i on the label of stranger s ∈ SC_j. Note that this impact is the unknown value that our system will learn. 2) Single Impact for the Friend Cluster: In the second approach, we assume that a bigger number of friends from the same cluster does not make a difference in user labeling; at least one friend from the cluster is required, but more friends do not bring additional impact. This approach is shown in Figure 2, where friends are shown with their cluster ids, and two friends from friend cluster FC_2 bring a single impact. Assume that from a friend cluster FC_i ∈ FC, we are given a set of mutual friends of user u and stranger s. We give the impact of friend cluster FC_i on the label of stranger s as follows:

FI_1 = I_{FC_i,SC_j}

where I_{FC_i,SC_j} is the impact of a friend cluster FC_i on the label of stranger s ∈ SC_j. These different friend impact approaches change the model by including different numbers of friend impacts. The unknown impact variable I_{FC_*,SC_*} is learned by the least squares method [10]. The least squares method provides an approximate solution when there are more equations than unknown variables. In our model, each stranger's label provides an equation to compute the impacts of k_1 friend clusters on k_2 stranger clusters.
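The least-squares step can be sketched with NumPy. The design matrix rows and label deviations below are invented for illustration; each row encodes one labeled stranger under the single-impact definition (an entry is that stranger's Past value if at least one mutual friend comes from the corresponding friend cluster, else 0), and the right-hand side is the deviation of the user label from the baseline.

```python
import numpy as np

# Each row corresponds to one labeled stranger; each column to one unknown
# impact I_{FC_i,SC_j} for a fixed stranger cluster SC_j.
A = np.array([
    [-0.2, -0.2],   # stranger 1: mutual friends from FC1 and FC2, Past = -0.2
    [ 0.1,  0.0],   # stranger 2: mutual friend from FC1 only, Past = 0.1
    [ 0.0, -0.3],   # stranger 3: mutual friend from FC2 only, Past = -0.3
])
b = np.array([2.3 - 2.7, 2.6 - 2.5, 2.0 - 2.4])  # user label minus baseline

# Overdetermined system (3 equations, 2 unknowns) solved by least squares.
impacts, *_ = np.linalg.lstsq(A, b, rcond=None)
print(impacts)  # learned impacts of FC1 and FC2 on this stranger cluster
```

With more labeled strangers than unknown cluster-pair impacts, the system is overdetermined and least squares returns the best-fitting impact values.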
In Example 6.1, we will explain these points and give the equations of one stranger for the single and multiple impact definitions. Example 6.1: Given a stranger s_1 ∈ SC_1 who is labeled by u, assume that the user given label is l_us1 = 2.3, while the baseline label is b_us1 = 2.7. Again assume that Past = -0.2. Using the mutual friends of Figure 2 (one friend from FC_1 and two friends from FC_2), the equations for the stranger with the single and multiple friend impact definitions are respectively given as follows:

2.3 = 2.7 + (I_{FC_1,SC_1} + I_{FC_2,SC_1}) × (-0.2)
2.3 = 2.7 + (I_{FC_1,SC_1} + 2 × I_{FC_2,SC_1}) × (-0.2)

After choosing one of these definitions of friend impact, we input one equation for each stranger s to the least squares method to compute the impact values of friend clusters on stranger clusters. In the experimental results, we will discuss the definition that yielded the best results. --- VII. FRIEND RISK LABELS Learning impact values allows us to see the percentage of positive and negative impacts for each friend cluster. A negative impact value for a friend cluster shows that the friend cluster increases the risk label of strangers. Depending on a user's choice, friend clusters which have negative impacts less than x% of the time can be considered not risky. Similarly, a threshold y% can be chosen to determine very risky friend clusters. In our experiments, we heuristically chose x = 20 and y = 50. With these threshold values for risk labels, we formally define the risk label of a friend f as follows: Definition 5: Assume that the percentages of positive and negative impact values for a cluster FC_i ∈ FC are denoted with Im+_i and Im-_i respectively, where Im+_i + Im-_i = 1. We assign a risk label to a friend f who is a member of the friend cluster FC_i according to the negative impact percentage of the friend cluster FC_i as follows:

l(f) = not risky if Im-_i < 0.2; risky if 0.2 ≤ Im-_i < 0.5; very risky if Im-_i ≥ 0.5

Next we will give the experimental results of our model performance. --- VIII.
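Definition 5 is a simple threshold rule; a minimal sketch with the paper's heuristic thresholds (x = 0.2, y = 0.5):

```python
# Map a friend cluster's share of negative impact values to a risk label,
# using the heuristic thresholds from Definition 5.
def friend_risk_label(neg_impact_share):
    if neg_impact_share < 0.2:
        return "not risky"
    elif neg_impact_share < 0.5:
        return "risky"
    return "very risky"

print(friend_risk_label(0.1))   # not risky
print(friend_risk_label(0.35))  # risky
print(friend_risk_label(0.6))   # very risky
```

As the paper notes, these two cut-offs could equally be exposed to users as personalization knobs rather than fixed constants.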
EXPERIMENTAL RESULTS In this section we will validate our model assumptions, and then continue to give a detailed analysis of performance under different parameter/setting scenarios. --- A. Validating Model Assumptions Before finding friend impacts, we validated our model assumption by using logistic regression on the whole dataset. For this, we included the number of mutual friends as a parameter, and computed the significance of the model parameters. In the overall regression, the photo visibility, wall visibility, education and work parameters were excluded from the model because they were found to be non-significant. For significant parameters, Pr values are shown in Table I. In the regression, there are two friend related parameters: the number of mutual friends and the friendlist visibility. Differing from the number of mutual friends, friendlist visibility is a categorical variable which takes 0 when the stranger hides his/her friendlist from the user and 1 otherwise. From Table I, we see that seeing a stranger's friendlist increases the probability of the stranger getting label 1, whereas it is not an important parameter for label 3. Our main focus in the regression analysis was to verify that the number of mutual friends parameter is significant. We found that an increasing number of mutual friends indeed helps a stranger get label 1, and decreases the probability of getting label 3. This result tells us that friends have an impact on user decisions and our assumption about the existence of friend impacts holds true. After validating our model assumption, we continue to the baseline label estimations. --- B. Training for Baseline The baseline calculation predicts labels for strangers without friend impacts. For this purpose we take the strangers who have one mutual friend with users into a new dataset (the first group dataset), and train a logistic regression model. Logistic regression on the first group dataset finds how stranger features bring users to label strangers.
Table II shows the model parameters and their corresponding p-values. In Table II, we see that when users label the first group strangers, photo and wall visibility are significant parameters. If these items are visible on stranger profiles, the probability of strangers getting label 1 increases. In the whole dataset, these two parameters were found to be insignificant. Another interesting result is that locale is significant for label 3, whereas it is non-significant for label 1. A high locale value means that the stranger is similar to existing friends of users, but this high similarity is shown to increase the probability of strangers being labeled as very risky, i.e., receiving label 3. After computing a baseline label for all strangers, we use the difference between user given and baseline labels to model the friend impact. These differences are shown in Figure 3. In the figure, we see that user given labels are lower than the computed baseline labels, which shows that overall friends have positive impacts. We also found that there is not a linear relation between the number of mutual friends and the deviation values. This non-linearity changes how we define the impacts of friend clusters. In Section VI we gave two definitions for friend impacts to account for deviations from the baseline label. In the multiple friend impact definition we assumed that more mutual friends from a friend cluster bring additional impacts. On the other hand, in the single friend impact definition one friend was enough to have the impact of a friend cluster. The observed non-linearity implies that more friends from the same cluster do not provide any benefits to strangers on Facebook, and that mutual friends from different clusters are more suitable to change the user's risk perception about a stranger. We believe that this can be generalized to other undirected social networks. In the rest of the experiments, we will give the results computed by using the single friend impact definition.
We will now explain the model performance under different clustering settings. --- C. Clustering For clustering 12659 friends and 4013 strangers, we experimented with the k-means and hierarchical clustering algorithms. In our experiments with different numbers of final clusters, the k-means algorithm yielded the best results for friend clustering, whereas hierarchical clustering was better for stranger clustering. Due to space limitations, we omit the hierarchical clustering results for friends and the k-means results for strangers. Friend Clustering: In Figures 4 and 5, we show the adjusted coefficient of determination (adjusted R^2) of our multiple regression model with different k values for friend clustering. The x-axis gives the number of stranger clusters for which at least one friend cluster has an impact. In Figure 4 we see the performance for the maximum and minimum numbers of friend clusters. For k = 2, friends are very roughly clustered, and each cluster is not homogeneous enough to mine friend impacts. As a result, we can observe friend impacts on very few clusters. For k = 9, friend clusters are more homogeneous, but in this case our multiple regression model does not have many data points to learn the impacts of friend clusters. Figure 5 shows the results for k = 5, 6, 7. For two k values, 5 and 6, we have the best results. Our model hence suggests that friends of social network users can be put into 5 or 6 clusters when considering how much they can affect user decisions on stranger labeling. Stranger Clustering: In Figure 6 we show how the R^2 values change for the biggest and smallest numbers of stranger clusters. With 8 stranger clusters, our model can detect friend cluster impacts on only 5 out of 8 stranger clusters, whereas for 158 clusters the number is 15 out of 158. For 158 stranger clusters, R^2 values are generally low because strangers are distributed into too many clusters, and each stranger cluster does not have many data points to learn from.
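For reference, the adjusted coefficient of determination penalizes the ordinary R^2 by the number of predictors, which matters here because more clusters mean more unknown impact variables relative to the data. A small sketch (the numbers are illustrative only, not the paper's results):

```python
# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1),
# where n is the number of observations (labeled strangers) and
# p the number of predictors (unknown friend-cluster impacts).
def adjusted_r2(r2, n, p):
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(round(adjusted_r2(0.8, 50, 6), 3))
```

This is why over-fragmenting the clusters hurts: with small n per cluster and growing p, the adjustment can pull the score far below the raw R^2.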
Although finding impacts on 5 out of 8 stranger clusters seems like a good performance, the low R^2 values show that the model can explain less than 50% of the variation in the data. In Figure 7 we see that more stranger clusters can improve the model performance, and this leads to R^2 values close to 1. For 26 stranger clusters, R^2 values are better, and we can find friend impacts in 16 out of 26 stranger clusters. Cross Validation: A major point in statistical modeling is the response to out of sample validation; a statistical model can be over-fitted to the training data, and it can perform poorly when applied to new testing data. After clustering and prior to learning friend cluster impacts, we prepare a test set for validating our model. We remove 10% of strangers from the stranger clusters and set those aside as the test strangers T. Once friend impacts are found for stranger clusters, we plug in the set of test strangers, and calculate the root mean square error of their labels. RMSE is defined by using the predicted label L̂_us and the user given label L_us as

RMSE = sqrt( Σ_{s ∈ T} (L̂_us - L_us)^2 / |T| ).

Cross validation results for different numbers of stranger clusters are detailed in Table III, using 6 friend clusters. The first row of the table shows the number of stranger clusters, whereas the second row shows the average R^2 values in these clusters. In the third row, we show the median size of the stranger clusters; with increasing numbers of clusters, the number of strangers in each cluster decreases. In the case of 158 clusters, the average number of strangers in a cluster is reduced to 7, and this results in poor performance because the model does not have enough data to learn friend impacts on stranger clusters. The average numbers of validation points are shown in the fourth row. An increasing number of stranger clusters results in fewer validation points because some clusters will have fewer than 10 strangers themselves.
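The RMSE computation itself is straightforward; a sketch with illustrative predicted and user-given labels:

```python
import math

# RMSE between predicted and user-given labels on a held-out test set.
def rmse(predicted, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

predicted = [2.1, 2.9, 1.2, 3.0]   # model's predicted labels (illustrative)
actual    = [2.0, 3.0, 1.0, 2.0]   # user-given labels
print(round(rmse(predicted, actual), 3))
```

Since the labels range over [1, 3], even a single label-sized miss (the last pair above) dominates the error, which is why RMSE is a sensible companion metric to R^2 here.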
In the fifth row, the root mean square error values are shown for these validation points. With 26 stranger clusters, our model yields the best R^2 and RMSE pair. These experimental results suggest that the optimal number of stranger clusters is bigger than the optimal number of friend clusters. We explain this by the fact that although users can choose friends of specific characteristics, they cannot do so with strangers. As a result, strangers are more diverse than friends, and they need to be clustered differently from friends. --- D. Friend Impacts and Risk Labels In this section we will give the computed friend cluster impacts, and show how friends are assigned risk labels. The rationale behind clustering was to observe different friend cluster impacts on different stranger clusters. As shown in Figure 8, when we increase the number of friend clusters from k = 5 to k = 6, the positive and negative impact frequencies change for each cluster, because either the friend clusters became more homogeneous or some clusters no longer had enough data points to learn from. Figure 8 shows two friend clusters with overall negative impacts. Figure 8 also shows the positive and negative impact frequencies for k = 7, where the negative and positive impact frequencies of each cluster are more emphasized. Note that the number of overall negative clusters is reduced from 2 to 1 here. Similar to the transition from 5 to 6 clusters, friends of two negative clusters might have been put into the same cluster, or there were no longer enough strangers for some friend clusters to learn a negative impact. The existence of both positive and negative impact values for each friend cluster confirms our intuition that the impacts of friend clusters vary depending on the stranger cluster. A friend is assigned a higher risk label when his/her friend cluster has a big percentage of negative impact values. In Section VII, we gave the definitions of friend risk labels according to two threshold values of negative impact percentages.
Using k = 6 friend clusters, from Figure 8 we see that friends from friend clusters 1 and 6 are labeled as very risky, because the negative impact percentages of these clusters are > 0.6. In the figure, we also see that none of the clusters have < 0.2 negative impacts; hence no friend cluster is said to be not risky. We tested the accuracy of our risk definition for friends by observing 261 deleted friendships of users. As a performance measure, we assumed that the deleted friends should come from friends who are labeled as very risky, i.e., friends who belong to the 1st and 6th clusters. We found that 117 of the 261 deleted friends belonged to the 1st and 6th friend clusters. Although we chose specific values for the very risky and not risky label thresholds in assigning risk labels, our model can ask social network users to define these threshold values on their own. With this approach, our risk model for friends can be personalized by users and applied to privacy settings on social networks. --- IX. CONCLUSION AND FUTURE WORK In this work, we looked into the risks of friendships and analyzed how the risk labels of friends of friends can be used to compute the risk labels of friends. We found that the number of mutual friends is not very important in changing the risk perception of a user towards a friend of a friend. On the other hand, having different types of mutual friends with a friend of a friend plays a bigger role in users' risk perception. Our results showed that in terms of risk, friends can be grouped into 6-7 clusters, whereas the number of groups for strangers can reach 26 or more. These results show that even though user numbers reach millions, the friends of each user have similar roles. We have validated the risk labels of friends on deleted Facebook friendships, and showed that the risks of friendships can indeed be learned by considering users' risk perception towards friends of friends.
In the future, we want to create sets of global privacy settings by using our risk model, so that privacy settings can be automatically applied to different social network users.
In this paper, we explore the risks of friends in social networks caused by their friendship patterns, using real-life social network data and starting from a previously defined risk model. In particular, we observe that the risks of friendships can be mined by analyzing users' attitudes towards friends of friends. This allows us to give new insights into friendship and risk dynamics on social networks.
Background Appropriate feeding of 6-23-month-old children enhances their chance of survival and promotes optimal growth and cognitive development. It is recommended that infants should be breastfed within 1 hour of birth, breastfed exclusively for the first 6 months of life, and continue to be breastfed up to 2 years of age and beyond [1]. Starting at 6 months, breastfeeding should be combined with safe, age-appropriate feeding of solid, semi-solid and soft foods [1]. Although adopting optimal feeding practices is fundamental to a child's survival, growth, and development, few children benefit from these practices, as caregivers often lack practical support, the time to take care of children, one-to-one counselling, and access to the correct information. Malnutrition is the result of a complex interplay between household factors like poverty, maternal-health literacy, diarrhoea, cooking fuel, home environment, dietary practices, and hygiene. The prevalence of these influences is a result of multiple socio-ecological factors including resources, awareness, and cultural and behavioural practices [2,3]. UNICEF acknowledges the role of different determinants of maternal and child nutrition and their interconnectedness for improved outcomes [4]. Infant and Young Child Feeding (IYCF) practices are a set of recommended caregiver practices to ensure that infants and young children receive the nutrition and care they need for optimal child survival, growth and development. Addressing IYCF practices is vital to attaining Sustainable Development Goals 2.1 and 2.2, which target ending all forms of malnutrition by 2030 [5]. The World Health Organization has proposed eight core indicators for assessing IYCF practices in population-based surveys, out of which three are key complementary-feeding indicators. To reach Minimum Dietary Diversity (MDD), children aged 6-23 months must have consumed something from 4 or more of the standard 7 food groups on the previous day.
To achieve Minimum Meal Frequency (MMF), breastfed and non-breastfed children 6-23 months of age must have received solid, semi-solid, or soft foods the minimum number of times or more in a day. Without adequate dietary diversity and meal frequency, infants and young children are vulnerable to malnutrition, especially stunting, micronutrient deficiencies, and increased morbidity and mortality. The Minimum Acceptable Diet (MAD) is a composite indicator which combines MDD and MMF and assesses the adequacy of a child's diet based on its micronutrient adequacy and meal frequency during the previous 24 h. Any child whose diet meets both the MDD and the MMF criteria is considered to have a MAD [6]. Poor nutrition during the first 2 years of life has long-term consequences. During this period, the incidence of stunting is highest because children have a high demand for nutrients. In addition to disease prevention strategies, complementary-feeding interventions are most effective in reducing malnutrition and promoting the growth and development of children [8]. Achieving universal coverage of optimal breastfeeding practices could prevent 13% of deaths in children less than 5 years of age, whilst appropriate complementary-feeding practices could result in an additional 6% reduction in under-five mortality [9]. In developing countries, breastfed children are at least six times more likely to survive in the early months than children who are not breastfed. Inadequate growth in children in low-income countries is generally the consequence of infectious diseases and low nutrient intake, particularly inadequate energy and protein intake, relative to nutritional requirements [10]. In India, there are various programmes and initiatives aimed at improving child nutrition with a particular focus on the 'first 1000 days' approach, such as the Integrated Child Development Services, National Health Mission, Mid-Day Meal Scheme, Targeted Public Distribution System, and National Food Security Mission.
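The logic of the three indicators can be sketched as below. This is a simplified illustration: the real WHO definitions set the minimum meal count by age and breastfeeding status, which the sketch takes as a given input rather than encoding.

```python
# Simplified sketch of the MDD / MMF / MAD indicators from 24-hour recall.
def meets_mdd(groups_consumed):
    """MDD: food from 4 or more of the standard 7 groups on the previous day."""
    return groups_consumed >= 4

def meets_mmf(meals, minimum_meals):
    """MMF: at least the age-appropriate minimum number of solid,
    semi-solid, or soft feeds (minimum_meals supplied by the caller)."""
    return meals >= minimum_meals

def meets_mad(groups_consumed, meals, minimum_meals):
    """MAD: composite indicator, requires both MDD and MMF."""
    return meets_mdd(groups_consumed) and meets_mmf(meals, minimum_meals)

print(meets_mad(groups_consumed=5, meals=3, minimum_meals=3))  # True
print(meets_mad(groups_consumed=3, meals=4, minimum_meals=3))  # False (fails MDD)
```

Because MAD is the conjunction of the other two, its prevalence in a survey can never exceed the lower of the MDD and MMF rates, which is consistent with the Indian survey figures quoted below (MAD 6% versus MDD 21% and MMF 42%).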
Recently, the Government of India also launched the National Nutrition Strategy and the National Nutrition Mission. The National Nutrition Mission provides an updated strategic framework for action to improve nutritional outcomes by holistically addressing the multiple determinants of malnutrition, through cross-sectoral convergence and contextualized planning at each level of the implementation process [11]. Despite these initiatives, India's performance in complementary-feeding practices is sub-optimal. This has been highlighted by the recent Comprehensive National Nutrition Survey for 6-23-month-old children, which demonstrated that only 6% of children receive a MAD, 9% receive iron-rich foods, 21% meet MDD and 42% meet MMF [12]. The National Family Health Survey-4 [13] findings of 2015-16 also reported that less than one in ten children aged 6-23 months received an adequate diet. This figure includes 14.3% of non-breastfeeding children and 8.7% of breastfeeding children, with urban performance slightly better than rural, although access to animal milk in the house did not translate to an increase in consumption of milk/milk products by a child. Comparison between NFHS-3 [14] and NFHS-4 also indicates that complementary-feeding rates have declined in India from 52.6% to 42.7%. Disaggregated data show that there was intra-country variation in the feeding rate, with the highest decline observed in some of the southern states that have comparatively better-performing health systems [15]. There exists ample evidence of the influence of socioeconomic inequalities, particularly the economic status of the household, on childhood malnutrition [16], but the evidence is limited on the impact of socio-economic variations on IYCF practices, particularly that of gender and parental migration.
Within a household, food security and access to nutrition are closely related to decisions regarding responsibility for food production; earning cash; purchasing and preparation of food; and finally, access to food in terms of consumption [17]. In this regard, gender inequality plays an important role in skewing the distribution of food against the female child. Also, females are vulnerable to severe forms of malnutrition across all ages due to socio-cultural factors. Undernourished girls grow up to become undernourished women who give birth to a new generation of undernourished children [3]. In households living in poverty, women and girls are particularly disadvantaged in their access to household resources, including food and nutrition [18]. Gaps in water and sanitation services often mean that women spend 1-2 h every day collecting water and/or walking to open fields, further reducing the time available for food preparation. Poor Water, Sanitation and Hygiene (WASH) services also increase pathways for disease transmission, some of which can reduce the ability of children to absorb nutrients. There is also growing evidence from India of discrimination against females in IYCF practices, including both breastfeeding and complementary feeding. A secondary analysis of the national data shows that girls are breastfed for shorter periods than boys and consume less milk, with the discrimination seen particularly in second-born girls [19]. The association between parental migration for work and child nutrition is mixed. On the one hand, decreased parental time within a household has a negative effect on child nutrition, since parental time is important for food preparation or making sure that the child eats food that is nutritious. On the other hand, the money earned by migrated parents can positively affect child nutrition, as any added income is likely to improve a child's nutrition by relieving income constraints [20].
Cameron and Lim found that having a parent out of the home for work has a negative effect on short-term child nutrition, while they also found that providing families with remittances of over $200 can help to lessen the negative effect on child nutrition [21]. In households with multiple migration cycles, left-behind children are gaining recognition as a new and unique vulnerable group facing the consequences of early-initiated, inadequate, and low-quality complementary feeding [22]. The growing number of male migrants and episodes of migration have led to the feminization of agriculture and waged labour, with consequent challenges for childcare and feeding [15,23]. Thus, there seems to be no conclusive empirical evidence on the direct association between parental migration and child nutrition, and the literature on the association between parental migration and IYCF practices is quite limited. Over the last decade, several studies have highlighted the need for interdisciplinary research to examine the association between undernutrition in children under five and different factors operating at household, community, government or health-system levels [15,24,25]. We conducted one such study, the Participatory Approach for Nutrition in Children: Strengthening Health, Education, Engineering and Environment Linkages (PANChSHEEEL) study, funded by the Medical Research Council, UK. The study was designed to explore the health, education, engineering and environmental (HEEE) factors that influence IYCF practices and to develop a socio-culturally appropriate, tailored, innovative, and integrated cross-sector HEEE package to support optimal IYCF practices through a community-led participatory approach [26]. Disaggregated data from the NFHS-4 point to the state of Rajasthan as having some of the lowest IYCF indicators in the country. Within the state, Banswara district is known to be a predominantly tribal area and has IYCF indicators worse than the state average and than those of many of the other 32 districts.
It was for this reason that Banswara was chosen as the study site. While the poor socio-economic status of women is associated with poor child nutritional outcomes, the high outmigration of fathers leads to the feminisation of agriculture and hence less time being available for mothers to attend to feeding their children, thereby compromising their children's growth and nutrition. With this in mind, we conducted a study to assess the variations in Diet Diversity (DD) and IYCF practices according to the gender of the child and parental migration, and to test the association between the HEEE characteristics of the household and the three key IYCF measures of MMF, MDD and MAD. The study, conducted as part of the PANChSHEEEL project, involved a cross-sectional survey of 325 households with a child or children aged 6-23 months, across nine villages in the two program blocks in Banswara district, Rajasthan, India. The specific study objectives were: 1. To understand the HEEE characteristics of the households, including gender; parental migration; access to local resources; sanitation; energy; health; and educational practices. 2. To measure DD in terms of food and food groups consumed by children aged 6-23 months in the previous 24 h and to record their IYCF practices. 3. To estimate the influence of two effect-modifiers (gender of the child and parental migration) on the different outcome variables of DD and IYCF practices. 4. To measure the strength of association between the three key complementary-feeding measures and the effect-modifiers, with and without adjustment for other HEEE and background characteristics. --- Methods --- Study area The household survey was conducted in two community-development blocks situated in Banswara district of Rajasthan state in India. The two blocks of Ghatol and Kushalgarh were purposively selected since they represented two divergent agro-economic dimensions within one district.
Out of 637 villages in these two blocks, five villages from the canal area of the Mahi river in Ghatol block and four villages from the non-canal area of Kushalgarh block were selected for the survey based on a set of inclusion and exclusion criteria [26]. --- Study design The PANChSHEEEL study used a mixed-method study design consisting of both qualitative and quantitative data-collection techniques. The study was driven by the 'socio-ecological model', which entails a multisectoral approach for operationalizing IYCF practices at different levels [27]. The study also engaged a multi-disciplinary approach for obtaining data on IYCF knowledge and practices and for establishing interlinkages between different HEEE factors and IYCF practices. The quantitative survey was conducted from January to March 2017, covering households with a child aged 0-23 months on the day of the survey. Details of the qualitative data-collection methods used by this study are published elsewhere [28]. --- Sample size, participants and tools The sample size of households for the PANChSHEEEL study was designed to provide project-level estimates of HEEE and IYCF indicators. Assuming a true value of an indicator of 50%, with a 5% margin of error at a 95% confidence level and a non-response rate of 15%, the minimum sample required for the study was estimated as 445 households with a child aged 0-23 months. All the households in the nine program villages where mothers had a child aged 0-23 months were included in the survey provided the child was 0-23 months old on the day of the survey, both the child and the child's mother were physically present at home, their details were part of the household roster developed by frontline health workers, and informed verbal consent was provided by both the mother and the head of the household. Twins and households with more than one child under 2 years were treated as separate subjects.
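The sample-size calculation above follows the standard single-proportion formula n = z²·p(1−p)/e², inflated for non-response. The sketch below is illustrative only, not code from the study; the exact rounding and inflation convention the authors used is an assumption, so the result lands near, but not exactly at, the 445 households they report.

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96, non_response=0.15):
    """Minimum sample for estimating a proportion p with the given
    margin of error at 95% confidence, inflated for non-response."""
    n = (z ** 2) * p * (1 - p) / margin ** 2  # ~384.16 for p=0.5, e=0.05
    return math.ceil(n / (1 - non_response))  # inflate for non-response

# ~452 with these inputs; the study reports 445, so its rounding
# convention evidently differs slightly from this sketch.
print(sample_size())
```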
Children who were permanent residents of these nine villages but were not available in the village on the day of data collection were excluded. Since the key outcome indicators of the present manuscript are the three complementary-feeding practices of MDD, MMF and MAD, the sample was limited to 325 children aged 6-23 months. The estimated power of the 6-23-month-old sample is 0.98, with p0 = 0.5 and p1 = 0.6, at a 95% confidence level. The list of lactating mothers maintained by the frontline health workers in the village formed the basis for identifying households where there was a mother of a 0-23-month-old child. The list taken from the frontline health workers was further validated during interviews with mothers using a 'snow-balling' technique to prepare the final household list. The primary respondent for obtaining all child-related information was the mother. Other information on socio-demographic characteristics and details on HEEE domains of the house was obtained from the head of the household, identified by the investigator while collecting demographic profiles of the household members. The survey tool was developed by drawing on existing questions that had been tried and tested in other cross-sectional surveys in India [13,14], but also contained study-specific questions developed and finalized by subject experts from the PANChSHEEEL team. This multi-disciplinary team comprised academics who were subject experts, staff from Save the Children with expertise in field implementation, and study investigators from the community who facilitated the participation of the local community throughout the study process. For data on socio-economic status, a modified Kuppuswami scale was used [29], while questions on food and IYCF were taken from the WHO-IYCF tool [6]. The finalized tool was translated into the local regional languages of the area and pre-tested by the research team in one non-program village in each of the two blocks.
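The three key WHO-IYCF outcomes referenced above (MDD, MMF, MAD) reduce to simple threshold rules. A minimal sketch follows, assuming the 2008 WHO guidance (seven food groups, MDD at four or more of seven), which matches the seven groups analysed in this study; the function and group names are my own, not from the study tool.

```python
# Seven food groups used in the 2008 WHO dietary-diversity guidance.
FOOD_GROUPS = [
    "grains_roots_tubers", "legumes_nuts", "dairy",
    "flesh_foods", "eggs", "vitamin_a_fruit_veg", "other_fruit_veg",
]

def mdd(consumed):
    """Minimum Dietary Diversity: >= 4 of the 7 food groups in 24 h."""
    return sum(1 for g in FOOD_GROUPS if g in consumed) >= 4

def mmf(meals, age_months, breastfed):
    """Minimum Meal Frequency: 2 meals (breastfed, 6-8 months),
    3 meals (breastfed, 9-23 months), 4 feeds (non-breastfed)."""
    if breastfed:
        return meals >= (2 if 6 <= age_months <= 8 else 3)
    return meals >= 4

def mad(consumed, meals, age_months, breastfed, milk_feeds=0):
    """Minimum Acceptable Diet: MDD and MMF (non-breastfed children
    additionally need >= 2 milk feeds)."""
    ok = mdd(consumed) and mmf(meals, age_months, breastfed)
    if not breastfed:
        ok = ok and milk_feeds >= 2
    return ok

# A breastfed 10-month-old given grains, dairy and legumes 3 times a day
# meets MMF but falls short of MDD (and therefore MAD):
child = {"grains_roots_tubers", "dairy", "legumes_nuts"}
print(mdd(child), mmf(3, 10, True), mad(child, 3, 10, True))
```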
Based on the feedback and learnings from the pre-testing, the questionnaire was revised with the addition/deletion of some questions and refinement of the language by replacing technical words with specific terms which were used more commonly in the local language. Four members of the research team and one supervisor were trained for 2 days, followed by 1 day of field practice and de-briefing. The HEEE-related sections of the tool were first administered by interviewing the head of the household by the male investigator, while the IYCF-related sections of the tool were administered later by the female investigator. All the investigators were conversant in both the local languages and had previous experience of data collection. Interviews were done in the local languages and investigators were sensitive to cultural gender differences. As the investigators belonged to the local community, they had a good understanding of the local customs, practices, and perspectives which helped them to engage and communicate effectively with the respondents. The PANChSHEEEL team comprised the main study team and the survey team, both teams working in close partnership to learn from each other, thereby strengthening the quality of data. The survey was carried out during the lean agricultural season to ensure the availability of the respondents at home. The field data-collection process was supervised by one full-time supervisor, and fortnightly meetings of all the investigators and the supervisor were held to review the coverage and quality of the data collected in the previous weeks by comparing sample completed questionnaires. After verifying the list of lactating mothers with children aged 0-23 months provided by ASHA and AWW, the data-collection team successfully identified and interviewed 445 households with mothers with a child aged 0-23 months on the day of data collection.
However, for the current manuscript we have confined our analysis to 325 children, as the focus of the manuscript is to understand diet and IYCF practices among 6-23-month-old children. The primary outcome for the program was the set of IYCF indicators; however, for this article we have taken the three complementary-feeding practices of MDD, MMF and MAD as the key outcome indicators. A number of independent variables were first included in a bivariate analysis to test their association with the three IYCF practices. Out of the 27 independent variables included in the initial analysis, gender and parental migration were identified as effect-modifiers and their association was then tested with the three IYCF practices. To independently test the association of the two effect modifiers, stepwise logistic regression analysis was conducted using 14 other covariates. This analysis excluded the covariates that were highly skewed towards a single response and ensured that each of the HEEE domains was represented by at least one covariate. All IYCF indicators were estimated using the standard definitions [6]. --- Statistical analysis All analyses were done using the SPSS statistical software package Version 22 and Stata Version 14. Standard descriptive statistics are presented as percentages and means with standard deviations, while types of food consumed and IYCF factors are presented as percentages with 95% CIs. The strength of association between the explanatory variables and the outcomes is presented as unadjusted odds ratios (ORs) and tested using a Chi-square test, with a p-value of < 0.05 for significance testing. The two effect modifiers were included in the regression model, while backward stepwise regression was used to exclude other factors which showed weak association. Adjusted OR results are only presented for the covariates that were part of the final regression model.
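The stepwise logistic regression described above yields adjusted odds ratios as exp(β) for each retained covariate. As a rough, self-contained illustration (not the study's SPSS/Stata analysis), the sketch below simulates a binary MDD-like outcome driven by two binary covariates (male, non-migrant household) and recovers their odds ratios by full-batch gradient ascent; all data and effect sizes here are synthetic.

```python
import math
import random

random.seed(1)

# Synthetic children: intercept plus two binary covariates, with
# hypothetical true adjusted ORs of 4.1 (male) and 2.0 (non-migrant).
TRUE_B = [-2.5, math.log(4.1), math.log(2.0)]
data = []
for _ in range(2000):
    x = [1.0, float(random.random() < 0.5), float(random.random() < 0.5)]
    p = 1 / (1 + math.exp(-sum(b * xi for b, xi in zip(TRUE_B, x))))
    data.append((x, 1 if random.random() < p else 0))

# Fit by gradient ascent on the log-likelihood (no stepwise selection).
beta = [0.0, 0.0, 0.0]
for _ in range(1000):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        p = 1 / (1 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
        for j in range(3):
            grad[j] += (y - p) * x[j]
    beta = [b + g / len(data) for b, g in zip(beta, grad)]

or_male, or_nonmig = math.exp(beta[1]), math.exp(beta[2])
print(round(or_male, 1), round(or_nonmig, 1))  # roughly 4.1 and 2.0
```

The adjusted OR reads as the multiplicative change in the odds of the outcome for a one-unit change in the covariate, holding the others fixed, which is how figures such as "4.1 times more likely" in the results should be interpreted.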
Standard ethical processes were followed, and formal ethical clearance was obtained from the Ethics Committee of University College London, UK, and from the Sigma-IRB in India. No one apart from the principal investigators/Co-PIs was given access to the anonymised raw data, from which personal identifiers had been removed. --- Results --- Socio-environmental profile Out of a total of 445 households with a child aged 0-23 months covered by the survey, 325 households had a child aged 6-23 months, and the findings of this manuscript are confined to these children. Table 2 provides the HEEE and background characteristics of these 325 households. There were equal numbers of children from each block, with nearly equal gender representation. Parental migration for work in the past year was nearly four times higher in Kushalgarh block as compared to Ghatol block, implying that migration-related differences in IYCF practices were mostly confined to Kushalgarh. Variation was observed in terms of access to and utilization of the different healthcare services which are provided free of cost under the national programmes. While 95% of children had Mother and Child Protection cards, only 57% of children aged 12-23 months were fully immunized, 66% received Vitamin-A supplementation, and only 26% received deworming tablets during the past 6 months as part of the national deworming programme. Households were predominantly dependent on agriculture as their main source of income; half of them had access to safe drinking water within the house; one in five households had improved sanitation facilities; and fewer houses had clean cooking fuel. A large number of households had access to animal milk, with 85% of households having cows/bulls/buffaloes, 60% having goats, and more than half having both types of animals. Three-quarters of the households were from scheduled tribes, around one-third had poor income, and 62% of households had two or more children under the age of 5 years.
--- Breastfeeding Prevalence of breastfeeding-related IYCF practices was much higher compared to diet-related practices in children aged 6-23 months. Breastfeeding was initiated within 1 hour of birth in nearly 60% of children and 94% were exclusively breastfed for the first 6 months, though a few children were given pre-lacteal feeds immediately after birth. Breastfeeding continued until 12-23 months of age in 80% of children older than a year. Semi-solid foods were introduced to only two out of five children aged 6-8 months. While more than half of the children had access to meals more than 4 times a day, access to a diverse diet and an adequate diet was very poor. Of the seven major food groups considered for dietary diversity, grains in any form were the most common food group consumed by the majority of children during the previous 24 h, followed by milk and dairy products, and legumes and nuts. Although 59-85% of households had access to animal milk, consumption of milk/dairy products by children was only 38%. Consumption of eggs and vitamin-A-rich fruits and vegetables was very low, though around 12% of children had consumed other fruits and vegetables in the previous 24 h. Consumption of convenience foods and tea was high among the children, whilst a negligible number of children had consumed iron-rich foods like meat and fish in the previous 24 h. Table 4 presents the gender-based differentials in dietary and nutritional practices in terms of unadjusted ORs and Chi-square values. Consumption of each of the seven standard food groups was higher amongst boys than girls, although the difference was statistically significant only in relation to the consumption of eggs. On the contrary, consumption of food items such as rice, tea, biscuits and roti/chapati was higher among girls as compared to boys, but the difference was statistically significant only in the case of roti/chapatis.
All the IYCF practices were higher amongst boys, but only access to a diverse diet and to an adequate diet was significantly higher for boys as compared to girls. Table 5 provides parental migration-based differentials in dietary and IYCF practices. Among the standard food groups, consumption of milk, eggs, fruits, and vegetables was higher amongst non-migrant households than migrant households, but the difference was statistically significant only for milk consumption. Interestingly, consumption of popular food items such as rice, tea, biscuits and roti was higher amongst migrant households than their non-migrant counterparts, with consumption of roti/chapati being significantly higher among migrant households. There was not much difference between the proportions of children from migrant and non-migrant households who followed breastfeeding-related IYCF practices; however, the proportion of children following diet-related MDD and MAD practices was higher for non-migrant households, although the differences were not statistically significant. Initiation of complementary feeding between 6 and 8 months of age was significantly higher in non-migrant households than migrant households, while MMF was significantly higher in children from migrant households compared to non-migrant households.
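The unadjusted ORs and chi-square statistics behind these bivariate comparisons come from 2×2 tables and have simple closed forms. The sketch below uses hypothetical cell counts, chosen only to echo the egg-consumption contrast between boys and girls; the study's actual counts are not given here.

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
               outcome+  outcome-
    group 1       a         b
    group 2       c         d
    """
    return (a * d) / (b * c)

def chi_square(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 8 of 170 boys vs 1 of 155 girls consumed eggs.
print(round(odds_ratio(8, 162, 1, 154), 2))  # OR well above 1
print(round(chi_square(8, 162, 1, 154), 2))  # compare to 3.84 (p < 0.05, 1 df)
```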
Table 6 provides results from the stepwise logistic regression analysis to identify the association between the independent variables and the three IYCF practices of MDD, MMF, and MAD, using gender and migration as the two effect-modifiers.

Infant and Young Child Feeding practices, % (95% CI), N:
- Early initiation of breastfeeding (< 1 h of delivery): 58.5 (52.9-63.9), 325
- Child was given pre-lacteal feeding: 7.7 (5.9-11.9), 325
- Exclusive breastfeeding for 6 months: 94.2 (91.6-96.7), 325
- Child was breastfed yesterday: 85.2 (80.9-88.9), 325
- Breastfeeding continued until 12-23 months: 79.9 (74.4-85.4), 204
- Age-appropriate breastfeeding: 67.1 (61.9-72.9), 325
- Child had access to diet with minimum meal frequency: 54.5 (48.9-59.9), 325
- Child had access to diet with minimum dietary diversity: 7.1 (4.9-10.9), 325
- Child had access to food with minimum acceptable diet: 6.2 (3.9-9.9), 325
- Child had iron-rich food yesterday: 1.5 (0.5-3.6), 325

After adjustment for HEEE and other background characteristics, a male child was found to be 4.1 times more likely to get a diet with MDD and 3.8 times more likely to get a diet with MAD as compared to a female child. Similarly, a child from a non-migrant household was two times more likely to get a diet with MDD and MAD than a child from a migrant household, but this association did not reach statistical significance. Interestingly, MMF was found to be significantly higher among children from migrant households compared to non-migrant households. Among the HEEE and other background characteristics, higher literacy status of the head of the household, increased accessibility to milk-producing animals, and increased consumption of milk/milk products by the child in the previous 24 h were found to be the strongest predictors of improved MDD, MMF, and MAD. --- Discussion Gender equity in childhood nutrition is feasible only by accelerating interventions that aim to improve IYCF practices at the community level.
There is growing evidence that IYCF practices are influenced by various social, economic, and cultural factors [16,30-32]. Findings from an exploratory research study indicate the need to shift focus from nutrition-specific interventions to contextually-appropriate interdisciplinary solutions, incorporating environmental improvements to address the problem of child undernutrition [33]. Our study provides insights on how IYCF practices vary in the context of gender, parental migration, environmental, and contextual factors in the tribal district of Banswara, Rajasthan. Children in this area had relatively good access to immunization services and AWC services, and Ghatol block had access to two crops a year, yet IYCF practices were still found to be sub-optimal. Breastfeeding practices in these areas were better than the dietary practices. In terms of dietary practices, 55% of children received a diet with MMF, while only 7% received a diet with MDD and 6% received a diet with MAD. Iron-rich food consumption was almost negligible. Our study found that male children had significantly better access to a diet with MDD and MAD than female children: 4.1 times higher for MDD and nearly four times higher for MAD. Gender discrimination in IYCF practices began at infancy, with consumption of each of the seven standard food groups being higher among boys than girls. Children from non-migrant households also had better access to MDD and MAD diets compared to children from migrant households, while the reverse was true for diet with MMF; however, the differences were not statistically significant for all three key IYCF practices. Migration for livelihood was a common phenomenon in the study population, with at least one member of the family having migrated for work in the previous year in 46% of households.
Children from households without parental migration had higher consumption of milk, eggs, fruit, and vegetables compared to children from households with parental migration. A child from a non-migrant house was 1.9-2.0 times more likely to get a diet with MDD and MAD compared to a child from a migrant house, but this difference was not statistically significant. The insignificant association between migration and MDD and MAD may be because of the huge block-wise variations in migration practices, and the sample size of Kushalgarh block alone was not adequately powered to assess this exposure against MDD and MAD. Apart from the above two effect modifiers, the two other strongest predictors of improving the complementary-feeding practices of MDD and MAD were the presence of milk-producing animals in households and consumption of milk/milk products by children in the previous 24 h. Both these variables were significantly and independently associated with MDD and MAD separately. Children living in households with milk-producing animals had 5.6-5.8 times increased access to a diet with MDD and MAD compared to those living in households without milk-producing animals, while children who had consumed milk/milk products in the previous 24 h had 6 times higher access to MDD and MAD compared to those who did not consume milk/milk products. Overall, a large number of households had milk-producing animals, but less than 40% of children had consumed milk in the previous 24 h. Interestingly, access to animal milk at the household level did not translate to improved consumption of milk or milk products by the children. On the contrary, consumption of milk by children in households with access to animal milk was lower compared to those in households with no/poor access to animal milk. One reason for this may be that the majority of households with milk-producing animals sold the milk from their animals at the local market to make additional income.
This pattern was also observed in the project's qualitative study [28]. The other significant predictors for improving IYCF practices included literacy of the head of the household, accessibility of nutrition services at the AWC, awareness about the Clean India mission, and use of a clean source of cooking fuel. --- How comparable are our results with other studies? Two-thirds of the mothers were literate, and 60% of families were aware of the Clean India Mission, statistics that reflect the national average. Similarly, study findings also reflected national figures on immunization: 62% of the respondents' children were fully immunized, 60% received vitamin supplementation, and 31% were dewormed [13]. According to NFHS-4, in rural Banswara, 39% of children received breastfeeding within 1 hour of birth, 56% were exclusively breastfed, and only 0.8% of children aged 6-23 months received MAD. On the other hand, in rural Rajasthan, 29% initiated breastfeeding within 1 hour; 58% exclusively breastfed; 29% of children aged 6-8 months received solid and semi-solid food; and 3.3% received MAD [13]. The most recent findings from the CNNS in Rajasthan estimated that while 43.6% of children aged 6-23 months received a diet with MMF, the figures for other complementary-feeding indicators were quite low: 11.6% for MDD, 3.5% for MAD and 1.4% for iron-rich foods [12]. Compared to the findings of NFHS-4 and the state CNNS data for both Banswara district and Rajasthan state, the IYCF practices in the PANChSHEEEL study area show improvement, but the results are still far from optimal. Additionally, Banswara district has a higher female-to-male child sex ratio than the state average. Despite this, gender differentials in Banswara district were quite evident on literacy and type of work, with a higher proportion of males being literate and males constituting the main workforce, while a higher number of females were marginal workers and agricultural labourers [35].
We also noticed that around one-third of children had consumed convenience foods like biscuits and tea in the previous 24 h. This was also noticed in the formative research, where it was found that when a mother or caregiver is busy with household chores, convenience foods like biscuits, tea, khichdi, dalia or small pieces of roti dipped in milk are provided to pacify a child's hunger [28]. An analysis by Fledderjohann showed that breastfeeding patterns were similar for boys and girls until about 12 months of age, when a gender gap begins to emerge. Among firstborns, the median duration of breastfeeding was around 21 months for females and 23.2 months for males, while second-born females experienced only a slight disadvantage, highlighting the importance of both gender and birth order in IYCF practices [19]. Based on the variations in the types of food consumed and IYCF practices by gender, we are unable to provide conclusive evidence that females are discriminated against on all key IYCF practices. However, it is evident that girls were at a disadvantage for most of the complementary-feeding practices compared to the boys, especially access to MDD and MAD. Other studies have reported a lack of conclusive evidence of female children being nutritionally disadvantaged [36] and shown heterogeneity in nutrient intake in different states [37]. One of the reasons for this inconsistency could be that our analysis is aggregated only by gender and not further by birth order or wealth quintile, as these two factors play a key role in gender-based discrimination [37]. A deeper analysis was not possible due to sample size constraints, particularly for MDD and MAD. Findings from our qualitative formative research in the same nine villages substantiate our finding that girls are at a disadvantage on the education front, particularly those from families with parental migration.
In addition, teachers of this area reported that 'elder siblings, especially girls, were often absent from school to take care of younger siblings, especially in families where both parents have migrated or if primary caregiver was unavailable'. It was also noticed that, although families do receive food supplements for infants, many mothers do not know how to prepare them, resulting in children still not receiving these benefits [28]. On gender-based labour participation, women were engaged in agriculture and livestock farming throughout the year, whilst men took part in agriculture only during sowing and harvesting seasons. Some women with children aged less than 2 years also participated in the National Rural Employment Guarantee Scheme. The men of Ghatol mostly relied on local wage labour due to the proximity of the block to the district headquarters and a cloth mill. Circular migration to urban areas in the adjacent states of Madhya Pradesh and Gujarat was common among men in Kushalgarh. A study conducted in rural parts of southern Rajasthan documented how environmental and contextual factors push families to economic distress, forcing young males to migrate to the neighbouring state of Gujarat for seasonal unskilled jobs [38]. In these migrant households, pressure on financial resources might have led to scarcity of nutritious food and limited the mother's time and energy required to provide adequate care to young children. However, in our study we could not find conclusive evidence of parental migration having an independent and statistically-significant effect on a child's access to key IYCF practices. --- Strengths and limitations There is limited evidence on the influence of socio-economic factors on IYCF practices [11] and insufficient understanding of the inequalities that shape malnutrition in India [30], particularly in tribal areas [39].
As far as we are aware, this is the first in-depth assessment of inequalities in IYCF practices with respect to gender and parental migration in a tribal district. The study also demonstrates how the factors affecting nutrition for 6-23-month-old children are complex, affected by elements such as gender; poverty and its associated migration; maternal health literacy; home environment; dietary practices; hygiene practices; and access to milk-producing animals. Since data for this study are derived from two divergent blocks of Banswara district and the interventions were co-developed with community stakeholders, the findings of this research are applicable to both rural and tribal parts of India. One of the major limitations of the study is that the findings are based on results from only nine villages and 325 households. Also, the nine villages were selected purposively using a set of inclusion and exclusion criteria, and the eligible households were selected by using a list of lactating mothers made by the AWW/ASHA and not by conducting our own listing and mapping exercise. This approach might have resulted in missing some eligible children from these villages. However, as we adopted a three-step recruitment policy, the chances of having missed many eligible households are minimal. Even though there was general improvement in IYCF practices with positive shifts in HEEE factors, most of these associations were not significant or only close to significance, perhaps due to limitations of the sample size. Some of the associations between HEEE and IYCF factors have wide confidence intervals due to this limitation. Additionally, this study and the other qualitative surveys were conducted during January-March 2017, a period of lean agricultural activity, which may have had some influence on the IYCF practices.
It is evident from our research and from recent NFHS-5 findings that the quality of dietary and IYCF practices and the prevalence of child undernutrition are not encouraging [40]. Also, India's child nutrition issue is characterized by significant inequalities across socioeconomic groups and areas of residence, and very limited progress has been made in addressing these inequalities. --- Significance Although IYCF is generally understood to be shaped by household-level factors, this study emphasizes that IYCF practices are also shaped by contextual factors, especially gender. However, by promoting universal access to animal milk, by engaging with literate heads of households in the promotion of optimal IYCF, and through effective and targeted implementation of ICDS services, existing gender inequalities in complementary-feeding practices could be minimized. Household-level factors are thus interconnected with village-level and HEEE-level factors. These should, therefore, be considered when planning an optimum intervention to address IYCF practices in low- and middle-income countries. The challenge of child malnutrition calls for a multidisciplinary approach that targets multiple underlying factors, like PANChSHEEEL's intervention strategy [21], which adopted a multi-disciplinary, participatory, and life-course approach to tackle the multi-dimensional problems of childhood malnutrition and IYCF practices. --- Conclusions Based on the findings of our research and the factsheets of NFHS-5, India's progress on child undernutrition is not encouraging, barring a few exceptions [40]. With eight out of ten children in India experiencing dietary shortfall and undernutrition as a consequence of several complex factors, and girls and children from homes with parental migration receiving an inadequate diet, the nutrition agenda of India should treat 'food as a right' from a holistic perspective.
Though efforts have been made to provide affordable access to quality food items for vulnerable households, it is important to urgently address the issue of gender discrimination in dietary practices using integrated and transformative approaches. In addition, efforts need to be made for the provision of adequate and diverse complementary food starting from 6 months old instead of waiting until children reach school age. Such approaches will help India to meet the UN's SDG targets, working its way towards achieving zero hunger and good health & well-being. --- Additional file 1. --- Authors' contributions ML, ML, PP, RK and SS conceived the original concept of the study and designed the research methodology. HC, SP and PP carried out the interviews; HR, RK and AD analysed the data and wrote the paper. ML, ML, LM, PP, VK, SR, LB, NS, SS, and RK validated the study and revised the manuscript critically for important intellectual content. ML, LM, PP contributed to the manuscript writing, edited the final manuscript and prepared it for submission. HR had primary responsibility for the final content. All authors read and contributed to reviewing the analysis of the data, the designing of the manuscript, and the approval of the final manuscript. --- Competing interests None. ---
Background: The interdisciplinary Participatory Approach for Nutrition in Children: Strengthening Health, Education, Engineering and Environment Linkages (PANChSHEEEL) study used a participatory approach to develop locally-feasible and tailored solutions to optimise Infant and Young Child Feeding (IYCF) practices at an individual, household, community, and environmental level. This paper aims to evaluate the influence of gender; migration; and Health, Education, Engineering and Environmental (HEEE) factors on IYCF practices, with the primary outcomes being three key complementary-feeding practices: Minimum Dietary Diversity (MDD), Minimum Meal Frequency (MMF) and Minimum Acceptable Diet (MAD). Methods: A cross-sectional survey of 325 households with children aged 6-23 months was conducted in nine purposively selected villages in two blocks of Banswara district, Rajasthan, India. A survey tool was developed, translated into the local language, pre-tested, and administered in a gender-sensitive manner. Data-collection processes were standardized to ensure quality measures. Association of the primary outcome with 27 variables was tested using a Chi-square test (Mantel-Haenszel method); backward stepwise regression analysis was conducted to assess the impact of effect modifiers (gender, parental migration). Results: Half of the surveyed children were of each gender, and fathers from half of the households were found to have migrated within the previous year in search of additional income. Parental literacy ranged from 60 to 70%. More than half of the households had access to milk-producing animals. Consumption of each of the seven food groups, eggs (4.7% vs 0.7%; p < 0.02), MDD (10.5% vs 3.2%; p < 0.02) and MAD (9.4% vs 2.6%; p < 0.02) were all higher for boys than for girls. After controlling for contextual factors, a male child was 4.1 times more likely to get a diet with MDD and 3.8 times more likely to get a diet with MAD.
A child from a non-migrant household was 2.0-2.1 times more likely to get a diet with MDD and MAD compared with a child from a migrant household; however, this association was not found to be statistically significant after regression. Presence of milk-producing animals in households and consumption of milk/milk products by children in the previous 24 h were the other two strong predictors of MDD and MAD.
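The gender comparisons above rest on chi-square tests of 2x2 tables. A minimal sketch of such a test for the MDD gap is given below. The cell counts are hypothetical, back-calculated from the reported proportions (roughly 10.5% of ~162 boys vs 3.2% of ~163 girls meeting MDD); the paper reports only percentages, and it used a stratified Mantel-Haenszel test rather than this plain unstratified chi-square.

```python
# Illustrative chi-square test of gender vs Minimum Dietary Diversity (MDD).
# Counts below are HYPOTHETICAL approximations derived from the reported
# proportions; the paper's exact cell counts are not given.
from scipy.stats import chi2_contingency

table = [
    [17, 145],  # boys: met MDD, did not meet MDD
    [5, 158],   # girls: met MDD, did not meet MDD
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Unadjusted odds ratio from the same table; the paper's 4.1 figure is
# adjusted for contextual covariates, so it need not match exactly.
odds_ratio = (17 * 158) / (5 * 145)
print(f"unadjusted OR = {odds_ratio:.2f}")
```

Note that the unadjusted odds ratio from such a table sits below the adjusted 4.1 reported in the abstract, which is expected once covariates are controlled for.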
BACKGROUND COVID-19 is an unprecedented pandemic that has affected millions of people, with thousands dying every day across the world. India reported its first case in the state of Kerala on 30 January 2020. 1 In a matter of a few months, India crossed the mark of 145 380 confirmed COVID-19 cases as of 26 May 2020. 1 More than a billion inhabitants, pronounced inequity and poor healthcare networks pose a significant challenge to fighting this pandemic in this country. Within that fight, urban slums, home to 65.49 million people in India, are the most challenging environments. 2 Overcrowding; low-rise, ill-ventilated shanty buildings; narrow, insanitary, filthy lanes; and social fragmentation and exclusion are the hallmarks of slums. 3 These factors, along with poor food security, indoor air pollution, the impossibility of social distancing and limited health infrastructure, make slums vulnerable to COVID-19 infection and higher mortality. --- Strengths and limitations of this study ► To the best of our knowledge, this is the first article exploring the experiences of a healthcare team working in a large slum in India during the COVID-19 pandemic. ► It offers valuable insights into complexity, vulnerability and lessons of resilience from real-life settings that could help build evidence regarding slums and their health providers, who are often missing from the existing scientific literature. ► The researchers, being part of the slum healthcare ecosystem, are in a strong position to understand and interpret the context; however, the possibility of personal bias and power bias cannot be ruled out. ► Purposive selection of participants and its potential for judgement bias, use of a practical objective questionnaire instead of a previously validated instrument, and not exploring personality traits that play a role in stress, coping and resilience are the limitations of the study.
4 5 As a result of the factors discussed above, slum communities are extremely vulnerable to the current pandemic, not only from the perspective of direct health risks but also from the perspective of its long-term consequences. While the need for access to good and affordable healthcare is obvious, it is exceptionally challenging to provide such care in this setting. This challenge is further complicated by stigma and social prejudice. The Indian media sensationalised recent news about the COVID-19 outbreak in the Dharavi slum of Mumbai, portraying slum dwellers as disease spreaders who threatened the health and security of other citizens. 6 7 This has led to non-compliance with government rules and violence against COVID-19 mitigation efforts in many slums. In the later stage of the pandemic, however, the efforts of the Mumbai health authorities to manage COVID-19 in the Dharavi slum were applauded by the chief of the WHO as a global role model for effectively managing the spread of the novel coronavirus in slums. 8 In the context of heightened 'COVID-19 anxiety' and stigmatisation, slums require supportive and responsive healthcare systems to stand up to the mounting challenges. While most of the urban primary health centres are yet to be established in the state, the existing urban health centres were forced to become involved in pandemic-related activities, with a reduced focus on routine healthcare services such as non-communicable disease management and Reproductive and Child Health services. Similarly, while the tertiary care systems were preparing themselves for this pandemic, most of the private primary care clinics in the slums closed down. 9 There are many reasons for this.
Constrained infrastructure, limited human resources, non-availability of personal protective equipment (PPE), cost escalation of care, exhaustive disinfection demands, environmental vulnerability and pre-existing social prejudices pose a significant challenge to running primary care clinics in a low-resource setting. Being plagued by poverty and extreme health vulnerability, people in slum areas also inordinately suffer from killer diseases, including waterborne diseases, tuberculosis and non-communicable diseases (NCDs) like diabetes and hypertension, in addition to COVID-19. 10 Primary care services thereby play a crucial role in managing existing diseases while simultaneously crafting a coherent response to the novel COVID-19 pandemic. Their role in preventing disease complications, providing emergency care and offering preventive services such as antenatal care and immunisation, while formulating a multifaceted response to this pandemic, is indispensable. 11 12 Recognising the exponential growth of unorganised slums in the cities, their increased health demands and the absence of any existing comprehensive health programme, the National Urban Health Mission (NUHM) was launched in 2013. It envisages meeting the healthcare needs of the urban poor by proposing U-PHCs for every 50 000 population, located within the slum or within 500 metres of it. NUHM is set to cover the country's seven big metropolitan areas and 772 cities with a population of more than 50 000 people, with an investment of INR225 billion into the healthcare sector in a phased manner. 13 Though some initiatives were rolled out as part of NUHM, the coverage of the government health system in many of the slums in India is far from ideal. The Bangalore Baptist Hospital has rendered primary care services in slums of Bangalore through its standalone community and mobile clinics for more than a decade. Though cognizant of the health demand in the slums, we did debate 'self-preservation' versus the 'moral contract' to society during the pandemic.
The objective of this paper is to describe the initial dilemmas and considerations, mental stress, adaptive measures implemented and the way in which the team collectively coped with the situation. This paper describes the experience of running healthcare services in one of the biggest slums of Bangalore during the first 40 days of the COVID-19 pandemic. These experiences may help us improve healthcare services in slums and better prepare for future epidemics and pandemics. --- METHODOLOGY Setting The hospital BBH is a tertiary care hospital that was started in the early seventies as a not-for-profit organisation built on Christian values. BBH exists to serve the needy irrespective of their religion, caste and socioeconomic status, and also has a mixed pool of employees across different faiths. The Community Health Division extends its services to 1083 villages of Bangalore Rural District and twelve slums within Bangalore city. --- The slum The United Nations operationally defines a slum as 'one or a group of individuals living under the same roof in an urban area, lacking in one or more of the following five amenities': durable housing, sufficient living area, access to improved water, access to improved sanitation facilities and secure tenure. 14 Devarajeevanahalli is one of the largest government-notified slums in Bangalore, extending over 1.15 km with a population of 50 000. 15 16 A study by the primary author of this article reported poverty, inadequate physical living conditions, poor penetration of medical insurance and income insecurity. 16 Only one-third had a regular job, with an average daily income of INR 304.2, and most worked in the unorganised sector. 16 17 (BMJ Open: first published as 10.1136/bmjopen-2020-042171 on 18 November 2020.) --- Study design We used mixed methods research with a quantitative paradigm nested in the primary qualitative design.
18 This design helped the researchers explore diverse perspectives and uncover complex relationships in this unique social and cultural context. The research was underpinned by an interpretivist approach, which recognises the subjective nature of knowledge, the need to understand situations from the perspective of those involved and the shared meaning-making between the researcher and the participants. 19 20 --- QUAL We conducted an ethnographic study, where ethnography was applied to understand and describe a distinct experience of continuing health services in slum settings. The researchers CEG and LRI were trained and experienced in QUAL research methods. The researchers were part of the health team, and all participants were colleagues of the interviewers. In ethnography, the focus is on 'understanding how people live their lives in their sociocultural worlds', 21 and the goal is to provide an interpretive-explanatory narrative and the meaning of people's behaviour, including their beliefs and expectations, which are sociocultural in nature. [22][23][24] Culture in general is defined as 'shared patterns of learnt values, beliefs and experiences of a group that provide a sense of identity and guide individuals, often unconsciously, in their thoughts, actions and decision-making'. 25 However, in this study, we employed microethnography, in which culture is defined more narrowly as a subculture of healthcare professionals delivering care in slum settings during the COVID-19 outbreak. 26 27 We took a descriptive, exploratory approach and employed the QUAL research techniques of participant observation, ethnographic interviewing and focus group discussions (FGDs). 22 28 Participants were purposively selected based on their experience and their ability to express themselves. During participant observation, the researchers' intention was known to the group being studied.
The researchers played a dual role in observing and participating, yet the emphasis for the researchers was on collecting data rather than participating in the activity being observed. With a deep understanding of the research question, the researchers moved from descriptive observation to focused observation and finally to selective observation. 29 Ethnographic interviewing is an informal one-on-one interview taking place in a naturalistic setting, following the participant observation. 22 30 To enhance understanding and gain deeper insights, we conducted FGDs, which allowed social interaction between participants using an interview guide. Researchers with methodological expertise and contextual knowledge developed the interview guide and pilot tested it with two participants. The interview and FGD guides predominantly consisted of open-ended questions that probed the significance of the pandemic, perceptions about participants' susceptibility to infection and the severity of COVID-19, their mental stress and how they were managing it. The duration of the in-depth interviews and FGDs ranged from 30 to 45 min and 45 to 60 min, respectively. Ethnographic interviews and FGDs were audio-recorded, transcribed and analysed along with field notes. Field notes with narrations of what was observed, including informal conversations, meeting notes and other documents available in the setting, were considered in the analysis to enhance the understanding of the context. The essence of the findings was discussed with the participants on multiple occasions to maximise the efficiency of the field experience and minimise researcher bias, in order to make the findings as objective and truthful as possible. --- QUAN We conducted an anonymous survey using a semi-structured questionnaire through a virtual Google form to assess mental stress and coping strategies objectively. The questionnaire had three sections with 21 questions: a.
The demography section contained six questions regarding age, gender, type of occupation, etc. b. The section on mental health contained seven questions asking about participants' stress, fear, anxiety and worry on a Likert scale over the last 40 days, and five questions related to participants' mood and emotions. c. The final three open questions were on coping strategies. We used the theoretical model of the stress and coping process by Folkman and Greer and the health systems resilience framework to assess the fitness of these models in our context. 31 --- Data analysis All the QUAL data were collated and analysed using the framework approach. 15 17 The complexity of the real-life health system in a pandemic and the analysis of multiple perspectives from a variety of data sources were the reasons for choosing framework analysis. Using the framework method, we followed a combined flexible approach to analysis, enabling themes to develop inductively from interviews and deductively from the existing literature. The interviews and FGDs were transcribed and read through multiple times for familiarisation with the data, and then open coded. Two researchers read a couple of interviews in each category closely, identified key themes and patterns, reviewed the data multiple times and developed codes manually in the initial stage. After the researchers open coded the initial transcripts independently, we agreed on a set of codes, each with a brief definition, which formed the initial analytical framework. The researchers then independently coded all the transcripts using the initial framework, taking care to note any new codes or impressions that did not fit the existing set. The process of refining, applying and refining the analytical framework was repeated until no new codes were generated.
The final framework consisted of 33 codes, clustered into 10 subcategories and 3 broad categories, each with a brief explanatory description of its meaning and examples of what ideas or elements might be summarised under that code. In the next stage, we applied the final analytical framework to each transcript manually. Once all the data had been coded using the analytical framework, we summarised the data in a matrix for each theme using Microsoft Excel. In the final interpretation stage, we reviewed the matrix and made connections within and between participants and categories. Finally, we tried to go beyond descriptions of individual cases towards developing themes that were meaningful in context and offered possible explanations for the research questions. QUAN data were analysed using SPSS V.20.0 and reported as frequencies, percentages, means and SDs. We merged QUAN and QUAL results to compare, contrast and synthesise the results. The results, thus generated, were mapped onto the theoretical model of the stress and coping process to see the fitness of the model. 32 --- Data credibility The participants read through the transcripts to comment on the accuracy of the texts and corrected them if required. The researchers belonged to the healthcare team and were familiar with the cultural context of the other participants and the setting. This improved the validity of the interpretations of the data. --- Role of funding source We did not receive funding for this study, and the corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication. --- Patient and public involvement This research was done without patient involvement. Patients were not invited to comment on the study design and were not consulted to develop patient-relevant outcomes or interpret the results. Patients were not invited to contribute to the writing or editing of this document for readability or accuracy.
However, the participants of the study were involved in developing the study tool and reading the final manuscript. --- RESULTS --- QUAN Among 87 team members, 64 completed the survey, a response rate of 73.5%. The mean age was 34.6 years, and most participants belonged to the 20-30 years age group. The majority of the participants had 1-5 years of experience, with a mean of 7.4 years. In the survey, a vast majority reported no fear of coming to work during the survey period. Happiness was the predominant feeling over the last 40 days among 36% of the team members, although the majority reported that they experienced fear at some point of time. Distracting themselves with hobbies and spending more time with family were cited as means of emotional regulation by the participants. A significant proportion of them reported prayer as the most critical thing they did to overcome the unpleasant experiences related to COVID-19. --- QUAL We conducted four FGDs and 10 in-depth interviews. The results are organised into three sections: stress of the healthcare team, adaptive interventions and coping strategies. --- Experience of the healthcare team --- Stress related to the corona pandemic Stress can be expected when individuals perceive high susceptibility to a potentially lethal infection. The team appraised themselves as 'highly susceptible' to contracting the virus: 'Slums are dense, no way of physical distancing… the chances of getting this infection are high if we continue our services there.' They also reported that contracting the infection was of 'high personal significance' as it involves stigma, isolation and death. 'Most people will be okay… 80% don't even know they had an infection. Sometimes it is good to get it… but what if I am in the 20%? Scary, hmm.' Risk perception of the health team was high, as they rated high perceived susceptibility and severity to COVID-19 infection.
33 Our analysis revealed that health professionals experienced emotions of fear, anxiety and stigma during the pandemic. --- Fear Fear of contracting the infection Fear of infection was the primary concern for all the health team members. Severe infection and death were discussion points in all homes. 'What will happen to my children if I die? They are so small… who will look after them?' a doctor aired her concern. The health vulnerability of the slum, the nature of the disease and news reports escalated their woes and worries: 'People live so close to each other… the slum is a high-risk area, you know. In the initial phase, there won't be any symptoms.' 'In fact, yesterday I thought of myself on a ventilator.' Reluctant disclosure of high-risk history by patients eroded trust and sometimes led to patient phobias: 'Yesterday, I saw a patient and towards the end of the consultation, she told me about her travel to Mecca a week back. I am worried that I may get corona.' --- Fear of violence in slums A mandatory lockdown imposed by the government to curb the spread of COVID-19 created existential anxiety in the slums. Despite strict law enforcement, slum dwellers indulged in minimal economic activity, leading to clashes and violence, of which the healthcare workers were worried they would become victims: 'Yesterday police beat people for opening the shops… they ran through the clinic. What all will happen, nobody can predict. We are stuck in the middle of it all… it can be dangerous.' Another concern was the resistance of the population to changes like wearing masks, entering the clinic one by one and following cough etiquette: 'People do not have wages, and they are very stressed. We are cautious in the clinic; patients come without masks, but it is difficult to enforce, and they turn violent sometimes…' --- Guilt Another primary concern was the thought of passing the infection to family members, especially the elderly and children.
While healthcare workers often accept an increased risk of disease as part of their chosen profession, they exhibited concern about family transmission, especially to the elderly, the immunocompromised or those with chronic medical conditions. 'I may give the infection to my mother-in-law or my 8-year-old… if something happens to them, how will I forgive myself? Isn't it terrible?' They dreaded the possibility of passing the infection to the vulnerable in their homes. --- Stigma To aid the containment of infection, government officials traced infected persons to their homes and inked their palms: 'Getting a seal… it is terrible. The whole locality will tell us to get out from where we live.' There is social monitoring of the 'infected and their family members'. Based on the number of cases, an area is classified into different coloured zones; in a red zone, there is a substantial restriction of movement, transport and commercial activity. This has led to widespread stigmatisation: 'The other day, my house owner called and asked, "Why can't you resign your job as a nurse? It is putting your family and our whole neighbourhood at risk."' The people in the locality saw health workers as carriers of infection and avoided them as a measure of self-protection. --- Exhaustion and burnout The emotional toll of working in dangerous conditions filled with uncertainties drained health professionals. Attending to patients fully clad in PPE also exhausted them physically. 'We are drenched in sweat; we take turns; it is difficult to see patients for more than 4 hours at a stretch wearing this.' 'Ever since I started wearing masks at work, I wake up at odd hours… hmm, I feel sleep-deprived and tired during the day… not sure why…' They also had intense household demands due to the lockdown. 'After reaching home from the clinic, I have to wash my clinic clothes, take a bath, cook and do all the household chores. No maid because of lockdown; it is so tiring.'
--- Stress associated with the change in societal norms Chatting with neighbours, social gatherings, visiting the sick, festivals and social functions were everyday affairs in Indian culture. Physical distancing, the rule of 'no touch or hugs', and the absence of social functions and gatherings created enormous stress on individuals, communities and society as a whole. 'No prayer meetings, no visitors. When will all this end?' 'I can't even kiss my child, so stressful.' 'My marriage scheduled in May is postponed now… when will I get married?' These were the voices of healthcare staff who shared their concerns about the new norms. --- Adaptive interventions Decentralised, participatory and practical measures were instituted to improve safety. --- Realignment of slum health services The existing infrastructure was reallocated into a waiting area and a designated hand-wash area. The consultation rooms were shifted to open space to ensure adequate ventilation. Segregated patient flow was instituted using visual triage for the screening of Middle East respiratory syndrome and an isolated fever clinic. 34 Remote consultation opportunities like teleconsultation, medicine drop-off at homes or proxy medicine pick-ups, and prescribing chronic medications for a longer duration were encouraged. The universal use of barrier precautions was implemented considering the threat of undiagnosed but infected patients. Reusable water-resistant PPE was designed and rolled out, anticipating the shortage of supplies and the cost escalation. Another critical aspect was the inclusiveness of both the healthcare and support teams in the preparedness plan. All were trained in handling patients if needed and given full PPE in the initial phase. Twice-daily disinfection of all common areas and frequently touched surfaces, using 70% ethanol or 1% sodium hypochlorite, was ensured.
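The disinfection routine above depends on preparing a 1% hypochlorite working solution. A minimal dilution sketch using the standard C1*V1 = C2*V2 relation follows; the 5% stock concentration is an assumption for illustration, since the paper does not state what stock bleach was used.

```python
# Hypothetical helper: volumes needed to dilute a stock sodium hypochlorite
# solution down to a working concentration, via C1*V1 = C2*V2.
# The 5% stock concentration is an ASSUMPTION; the paper only states that
# a 1% working solution (or 70% ethanol) was used.

def dilution_volumes(stock_pct: float, target_pct: float, final_ml: float):
    """Return (stock_ml, water_ml) required for final_ml of target_pct solution."""
    if target_pct > stock_pct:
        raise ValueError("target concentration exceeds stock concentration")
    stock_ml = final_ml * target_pct / stock_pct
    return stock_ml, final_ml - stock_ml

stock_ml, water_ml = dilution_volumes(stock_pct=5.0, target_pct=1.0, final_ml=1000.0)
print(f"{stock_ml:.0f} mL stock + {water_ml:.0f} mL water")  # 200 mL stock + 800 mL water
```

In practice, such working solutions degrade quickly, which is consistent with the team's twice-daily preparation-and-use routine.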
--- Transparent communication One of the first initiatives was to establish a communication routine to foster collective action rooted in trust. This gave an avenue to openly and proactively share relevant and reliable information with everyone in a timely and digestible manner. Knowledge was shared from credible sources, empowering people to look to reliable sources for updates and to discuss the 'hype and misinformation' in the media and the impact of COVID-19 on the world, the nation, the state, the institution and themselves. Empathetic discussions acknowledging vulnerabilities and accepting uncertainties were a hallmark of these sessions. Since the team was diverse, these active dialogues fostered solidarity through collaborative learning and peer support. Periodic communications with faith-based values were also an integral part of the COVID-19 response initiatives. --- Coping strategies Healthcare professionals used multiple coping strategies to tide over the threat associated with COVID-19. They used emotion-focused coping, regulating their emotions by being aware of their negative mental representations, avoiding negative rumination, cognitive reappraisal and positive reframing. Healthcare professionals used problem-focused coping, initiating actions to reduce the risk of infection. Though these coping strategies resulted in positive emotions, they sometimes struggled with the non-resolvable facets of the pandemic. They used meaning-focused coping, deriving meaning from the stressful experience to sustain well-being despite difficult times. --- Emotion-focused coping Healthcare professionals reported maladaptive coping like denial, wishful thinking, escape avoidance and fatalism. Religious helping: 'God is using me to be a source of comfort for many… I feel good about it. We are able to reflect on many things together… we are growing together and our relationships have become more meaningful.'
Religious forgiving: 'I am able to tide over my anger and helplessness when I reflect in prayer… it gives me peace even when things are going out of control.' Such religious coping emerged alongside the maladaptive styles of the initial weeks. Later, they cognitively reappraised the situation in positive ways, reframing the opportunity to work in a hospital as a venue for fostering social connections, an earning opportunity and improved access to medical care. They achieved a change in their emotional state through information gathering, distancing from media hype and seeking social support. Health professionals resorted to credible information-seeking and practised distancing from media hype to insulate themselves from paralysing fear. 'I stopped watching the news at home; children get frightened. I tell at home what I hear from the hospital; that is more correct.' Peer support acted as a source of encouragement and companionship, which brought an array of positive emotions into their lives. They prayed and fellowshipped together before starting work. 'I am happy, I have a good workplace… we eat and pray together,' said one of the nurses. They looked after each other, watched how they were managing patients and reminded each other about infection control. They listened to each other and comforted one another when needed. --- Problem-focused coping Being cognizant of the vulnerability of the slum, the team chalked out plans to mitigate the risks and implemented them at the earliest. Motivation to be relevant to the poor and determination to sustain the needed services activated their 'problem-solving attributes'. They responded by realigning slum services, universal use of PPE, effective disinfection methods and other health worker and patient safety measures. High response efficacy and self-efficacy were reported for these new initiatives.
Though hot weather, difficulty in washing and loss of personal touch in the consultation were pointed out as response costs, the response efficacy outweighed these costs. --- Meaning-focused coping People used meaning-focused coping like spirituality, goal pursuit and value adherence to deal with the non-resolvable facets of the pandemic. Belief in God was reported as the core factor that allowed the healthcare team to cope with their painful experiences. Being founded on the principles of the Christian faith, religious beliefs, goals and values were congruent in the institution. Values of compassion, sacrifice, service and reaching out to the poor were held in high esteem at all levels. Healthcare professionals, though belonging to diverse faiths, imbibed and assimilated these values. 'Healing and wholeness in the spirit of Jesus Christ is our vision. Selflessness is a part of it… we are called to serve in the spirit of selflessness,' one of the nurses said during the interview. On a personal level, people used religious coping to find meaning, gain comfort and closeness to God, and achieve a means of life transformation. 'I seek God's help,' 'I put my trust in God,' 'I find comfort and peace in the midst of the pandemic,' 'God has a purpose for everything': these were their responses during the interviews. Many found a new purpose in their lives: to be a source of comfort to others at this turbulent time. Periodic communications with faith-based values were an integral part of the COVID-19 response initiatives. Non-mandatory activities like morning prayers and weekly sessions of spiritual reflection on events were cited by team members as essential resources in building their resilience. People also expressed a strong sense of duty to serve poor communities during this crisis. 'What will coolie workers do?' 'I have to help the poor.' 'COVID-19 is very bad for the poor.' 'I distributed rice to the poor.' The list goes on.
Values of altruism and empathy towards the poor outweighed negative emotions like fear and anxiety. They considered it an opportunity to make 'good use' of their lives: 'One day all have to die; the most important thing is to use our lives for doing good. At this time, we are helping others; that is good.' Assigning a purpose to the pandemic helped them cope with the situation while risking their own lives. --- Resource challenges, acceptance challenges and sustenance challenges Continuing patient care in a poorly constructed, ill-ventilated clinic in a crowded slum, with limited knowledge to evaluate team risk and community risk, caused enormous strain on the healthcare team. Though most patient care activities were shifted outdoors, patient registration, laboratory work and dental procedures were held in rooms with compromised ventilation. Patient triaging resulted in long queues and prolonged waiting times. Rains disrupted outdoor consultations. Disinfecting PPE in a space-constrained slum clinic was another herculean challenge. The lockdown further strained the system through a shortage of essential supplies. Mounting operating costs and a funding crunch added to their challenges. Human resource constraints weighed heavy on the team when vulnerable staff were relieved from risky duties. The existing staff juggled their time between designing and operationalising new interventions and providing care. Internalising new norms at a rapid pace, insulating the team from response fatigue and sustaining motivation over a longer period were considered serious challenges. People in the slums were reluctant to use the clinic, fearing forced eviction if COVID-19 infection was detected. This pushed the health team to define and refine their role and relevance in this community during the pandemic.
In short, facility constraints, personnel limitations, operational difficulties, mounting operational costs, a funding crunch and community fears created hard hurdles for the health team. --- DISCUSSION Contextual vulnerability makes slums a nearly impossible environment for health professionals to function in during the COVID-19 pandemic. Experiences of providing services to one of the most vulnerable populations in this historical time of crisis can offer valuable lessons from real-life settings. To the best of our knowledge, this is the first article exploring the stress, coping styles and resilience of a healthcare team in sustaining medical services in one of the biggest slums in India during this pandemic. One of the initial findings was the conflict between self-preservation and the moral contract to society. Self-preservation, the right to life, is a fundamental right of every individual. It is one of the essential attributes of medical professionals, necessary to mitigate the risk of their hazardous work environment. In normal circumstances, healthcare professionals can look after patients without compromising much of their safety. However, in the case of life-threatening pandemics like COVID-19, there is an undue demand on health professionals to provide care to patients, at times putting their own lives at risk. Even with the best protective measures, the risk of potentially lethal infection remains high among healthcare professionals. Though the Declaration of Geneva states, 'As a member of the medical profession, I solemnly pledge my life to the service of humanity; the health and well-being of my patient will be my first consideration', it is difficult, and sometimes unethical, to demand 'a life sacrifice' from medical professionals on a routine basis. 35 However, in crisis conditions, these sacrifices become essential to manage and control pandemics. Similar ideological conflicts have been reported in the Ebola and SARS outbreaks.
36 37 Since continuity of services in slums during COVID-19 increases the likelihood of this potentially fatal infection, healthcare workers experienced the fear of death, guilt about transmitting the infection to their loved ones and stigma from the community. Frontline healthcare workers have had similar experiences during past pandemics. [36][37][38][39][40] In our study, healthcare professionals reported the stress of new norms, especially that of social distancing. Social scientists from different parts of the world have reported that social distancing can suppress evolutionarily hardwired impulses for connection, leading to stress. 41 The consequences of isolation could be more pronounced in eastern societies, due to their collectivistic nature, than in western individualistic societies. The adaptive interventions of the health team were contextualised practical adaptations of the Centers for Disease Control contingency and crisis strategies, preparing for worst-case scenarios. 42 43 The ability of the healthcare team to appraise the situation swiftly, leverage collective expertise and deliver practical and cost-effective solutions reduced anxiety and promoted a sense of self-efficacy. Treating health worker safety as a critical responsibility of a moral institution, the hospital designed reusable water-impervious PPE with the provision of additional disposable layers, ensuring safety without huge cost implications. Universalisation of PPE reduced anxiety across all members. Since the support staff could be required to perform activities of risk, they were given PPE and trained at the same time as the clinical staff. Pronounced anxiety among support staff, and their reluctance to come to work in the initial stages, vanished after PPE was rolled out to them. --- Ongoing dialogic communication and prompt collective adaptive interventions fuelled continuity of services.
A supportive work environment with a high level of trust and professional cohesion helped health professionals cope with challenging conditions, a finding in concordance with reports from the Ebola outbreak. 37 A high level of trust in each other and in the leadership facilitated the rapid implementation of radical collective solutions. Honest communication, empowerment and acknowledging one's own finitude in the situation were identified as critical leadership strategies during a crisis. 44 An internalised value system such as religion was one of the powerful coping mechanisms people used to deal with this pandemic. Religion interprets events and assigns meaning and purpose to these unprecedented shocks. It serves as a powerful medium to gain control, peace and comfort in coping with uncertainties. 45 Studies have reported that people resort to religious coping in pandemics, in life-threatening illnesses and in dealing with post-traumatic stress. [46][47][48][49] Health system resilience can be defined as the capacity of health actors, institutions and populations to effectively respond to crises, maintain core functions when a crisis hits and, informed by lessons learnt during the crisis, reorganise if conditions require it. 50 In this study, we have elaborated on health workforce resilience, which is an integral part of health system resilience. The resilience dividend in this system can be attributed to the team's commitment to their mission and values, a vigorous public health response to the shock, the strength of social capital, collective learning, a sequential approach to problem-solving and coping, and an organisational culture promoting individual and collective resilience. 51 These qualities were not developed as a response to the crisis but were part of normal functioning: the crisis only amplified their expression and made resilience more visible.
The sheer speed, scale and catastrophic consequences of COVID-19 in slums are a jolting wake-up call for everyone. For people in slums, virus containment measures often present a choice between the risk of catching the disease and the certainty of hunger. Hence, their risking infection, and subsequently becoming super-spreaders, cannot simply be condemned. The systemic neglect of slums has made panic, unrest and massive fatalities a real possibility during the pandemic. It is the responsibility of the government to include slums in the development of its cities, as cities owe much to this informal economy. Dharavi, India, for example, employs as many as 70% of its residents, with current economic output estimates of US$700 million annually. 52 53 Other places such as Delhi, Bangalore and Kolkata also depend heavily on the informal sector to support their economies. 54 Hence public health measures mandating integrated slum development have to be accelerated during the post-pandemic period to prevent such future scenarios. This article set out a theoretical framework for cognitive appraisal and coping using the stress appraisal and coping model of Folkman and Greer, and discussed the foundations of health workforce resilience; these findings are highly context driven and have to be generalised cautiously. --- Health care provision experiences and learnings At the microlevel, a pandemic like COVID-19 will challenge any health team's commitment, cohesion and agility, more so in limited-resource settings. Even in the midst of crisis, a team can succeed by reflecting on its vision, cultivating a deep mindset of humility, collectivism and accountability, and nurturing a bottom-up, all-inclusive culture.
While we were able to sustain healthcare services in the slums during this pandemic with considerable risks to our workforce, had the mortality been greater, we could have lost precious healthcare workers, a loss for all and eventually a grave challenge to sustainability. Given that COVID-19 will not be the last public health threat to the world, the learnings from this crisis should be a wake-up call for systemic changes in urban governance and city planning. At the macrolevel, the pandemic makes a strong case for improving basic amenities in slums, as this strengthens public health and mitigates health threats to people in slums and cities. Improved public spaces and economic inclusion of slum dwellers in the organised labour sectors would pay colossal health and economic dividends. Linking science to society and promoting community participation with strong health networks is essential for an effective response in any pandemic. 16 In short, a resilient health system cannot thrive on its own for long unless cities as a whole become more inclusive and resilient. --- CONCLUSION The study describes the experience of sustaining essential health services in one of the biggest slums in India during the COVID-19 pandemic. It throws light on the complexities of the context, and the struggles, adaptability and resilience of a real-life health system in crisis. Fear, guilt, isolation and exhaustion posed considerable stress to the health team in the initial phase. However, with cognitive reappraisal, the health team managed distress using emotion-focused coping, handled the problems causing the distress with problem-focused coping and sustained positive well-being through meaning-focused coping. Organisational culture, shared purpose, adaptability, collaborative learning and meaningful relationships fostered resilience amid crisis. These values were not just crisis specific but were organically built as part of our system.
Hence, this article shows the importance of ingraining a culture of resilience in every health system, which will then reap lasting rewards in times of crisis. This pandemic has taught many valuable lessons. The negation of the reality of slums in cities, and hence of the rights of slum dwellers, allows intolerable living conditions and weak health networks that have the potential to wipe out millions in a short span of time. COVID-19 is a wake-up call to include 'slum health' within 'universal healthcare' to insulate the world from the fatalities of future pandemics. --- Contributors CEG contributed to the conception and design of the work and the acquisition, analysis and interpretation of data, and was the primary contributor to the draft paper and revisions. CEG and LRI conducted the interviews and performed qualitative analysis. LRI developed the study tool, analysed and interpreted the data, contributed to manuscript writing and critically reviewed the manuscript. SR and LDW contributed to the design of the study, interpretation of qualitative data and critical revision of the paper. All authors revised the work for important intellectual content and agree to be accountable for all aspects of the work. All authors read and approved the final manuscript. Competing interests None declared. Patient consent for publication Not required. --- Ethics approval The study was approved by the Ethics Committee of BBH. Informed consent was taken from participants before FGDs, in-depth interviews and the survey. When we used quotes from meeting notes or discussions, we obtained permission before using them in the article. Provenance and peer review Not commissioned; externally peer reviewed. --- Data availability statement The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Objectives To describe the initial dilemmas, mental stress, adaptive measures implemented and how the healthcare team collectively coped while providing healthcare services in a large slum in India, during the COVID-19 pandemic.
Introduction Since the outbreak of Coronavirus Disease 2019 (COVID-19), the virus has spread rapidly at home and abroad. The novel coronavirus pneumonia pandemic has continued to develop since 23 January 2020, with more than 300,000 new cases added every day around the world. Regarding the space attribute of the event, the infection has spread to 188 countries and regions, causing nearly 1.6 million deaths worldwide. There have been more than 90,000 confirmed cases in China, and all provinces across the country have been affected, up to December 2020. Since the outbreak, governments all over the world have been doing their best to deal with the epidemic, save lives and reduce its impact on the economic development and social governance of their countries. To keep their economies operating, many countries have launched online working arrangements. Local public officials in China responded to the call of the Party Central Committee to immediately return to their job positions while the masses consciously isolated at home. Enterprise employees worked online to reduce the impact of COVID-19 on family income. In a survey of 4000 teleworkers by Flex Jobs, 95% said their productivity was the same or higher than before the outbreak of COVID-19. Therefore, this research explores the emotional exhaustion and workplace deviance of employees of different ages, educational backgrounds and income levels affected by the COVID-19 event, and provides countermeasures and suggestions for enterprises and institutions to deal with major emergencies, understand the psychological needs of employees and address employees' emotions to ensure the normal operation of the company. Many factors influence employees' workplace deviance.
Firstly, existing research has verified that employees' personality, leadership style, team atmosphere and other factors have a profound impact on employees' work emotions. However, few studies have focused on the impact of external events on employees' workplace behavior. Different from large-scale public health events in the past, the novel coronavirus pneumonia event is a major passive event. Based on the strength attribute of the event, the public's perception of the COVID-19 event directly affects their behavior [1]. Secondly, according to event system theory, negative events are positively correlated with negative emotions [2]. Negative emotions can cause symptoms such as physical fatigue, insomnia and headaches [3], which can lead to feelings of helplessness and anxiety, social withdrawal, job dissatisfaction, breaking organizational norms and other behaviors [4,5]. This major public health event disrupted the public's previous lifestyle, changed employees' previous style of working and caused emotional exhaustion. Therefore, this article explores the feelings of employees of different genders, ages, educational backgrounds and income levels about the COVID-19 event, and the mechanism by which employees' perceptions of the novelty, disruption and criticality of the event influence their possible destructive or constructive deviant behaviors; this should help companies understand employees' psychological needs and practical difficulties, reduce job burnout and stimulate work enthusiasm. The contributions of this study lie in the following: firstly, the study of workplace deviant behavior in this paper is based on event system theory and takes the globally prevalent COVID-19 pandemic as its research background, which expands the application scope of event system theory and enriches the empirical research on the theory.
Secondly, taking external events as the breakthrough point, this article discusses the impact of major external events on employees' psychology and behavior. Thirdly, this paper attempts to move beyond the existing research on workplace deviance and explores the compatibility of the two types of deviant behavior. In other words, employees who engage in destructive deviant behavior may also display behavior beneficial to the organization, and vice versa. --- Theoretical Basis and Research Hypotheses --- Event System Theory The previous literature has explored the relationships among stable features within entities from macro and micro perspectives, ignoring the dynamic impact of events on the entity. Event system theory is the study of individual behaviors under the combined effects of event strength, space and time [1]. The novel coronavirus pneumonia event affects people's work and life through the three attributes of event time, space and strength. In terms of the time attribute of an event, the more it affects the development of the individual and the longer its duration, the greater its impact on the individual. The space attribute of an event includes four dimensions: origin, vertical spread range, horizontal spread range and distance to individuals. The novel coronavirus pneumonia event covers various cities in China, with a large spread and a wide range. Therefore, the time and space attributes of the event are relatively stable, and there is little difference between individuals in the same city; however, perceptions of the event's strength attributes differ. Individuals have different perceptions of the novelty, disruption and criticality of the event, which greatly affects their behavior in response to the epidemic. Specifically, the novelty of an event refers to how the event differs from previous events; the greater the novelty, the more it arouses individuals' attention and changes their behavior.
The criticality of an event refers to the degree of influence on the goals of the enterprise and organization; the more critical an event, the more the individual needs to pay attention to its development and the more actively they must mobilize resources to deal with it. The disruption of an event refers to the degree of change and disturbance to individuals' past lives and habitual coping styles; the higher the disruption, the more individuals need to adjust their existing behavioral patterns [1]. Since the outbreak of COVID-19, people in China and around the world have taken the initiative to isolate themselves at home, the performance of most companies has been greatly impaired, and the economic situation has declined significantly. Employees' perception of the strength of the novel coronavirus pneumonia event also has a great impact on their mental state and work efficiency, but the impact of the different structural dimensions of the strength stimulus on individuals remains to be investigated. Therefore, this study is based on event system theory to quantitatively analyze employees' perceptions of the strength of the COVID-19 event. This paper intends to explore the impact of event strength on the mental state of employees of different genders, ages, educational backgrounds and income levels, and the mechanism influencing the implementation of deviant workplace behaviors. --- The Event Strength Has a Positive Effect on Emotional Exhaustion Emotional exhaustion is the core dimension of job burnout. It is defined as the excessive consumption of employees' emotional resources, leading to depression and haggardness [4]. Research on emotional exhaustion has found that when roles are overloaded and internal resources are insufficient, employees are prone to burnout. In the workplace, male managers have the highest level of emotional exhaustion, followed by female managers.
Female employees are more likely to suffer emotional exhaustion than male employees, and young employees are more likely to suffer emotional exhaustion than older ones. Single employees have higher job burnout at work [4]. Long-term emotional exhaustion can cause physical exhaustion, insomnia, headaches and other symptoms [6], which can lead to feelings of helplessness, despair, anxiety and separation from the group. The global economy has been hindered since the outbreak of novel coronavirus pneumonia [7]; during the epidemic, 22.3% of companies reduced their operating pressures by cutting staff and salaries, and 15.8% even halted work completely, which had a major impact on the psychological state of employees [8]. Due to opaque information and excessive negative public opinion, employees hold negative attitudes towards their personal employment prospects, business management and the macroeconomic situation [9]. The perceived strength of the novel coronavirus pneumonia event has a profound impact on the physical and mental health of employees [10]. According to event system theory, event strength includes novelty, disruption and criticality, and strength stimuli of different dimensions affect the individual's mental state. Therefore, we pose the following hypotheses: H1a. The event novelty has a positive effect on emotional exhaustion. --- H1b. The event criticality has a positive effect on emotional exhaustion. --- H1c. The event disruption has a positive effect on emotional exhaustion. --- The Mediating Effect of Emotional Exhaustion between Event Strength and Deviant Behavior Deviant behavior is defined as an employee's deliberate violation of organizational norms in the workplace based on selfish or altruistic motives. It is a conscious and purposeful subjective behavior that has a negative/positive effect on organizational performance and organizational members.
Specifically, constructive deviant behaviors are behaviors that employees actively take in violation of organizational norms in order to enhance the well-being of the organization or its members [5]. Destructive deviant behavior refers to behavior by internal employees who deliberately violate organizational norms against other members of the organization; such behavior damages the interests of other colleagues in the organization and even the entire collective [11]. Galperin divides constructive deviant behavior into three dimensions. Innovative constructive deviant behavior refers to helping the organization in an innovative and non-traditional way, measured by five items such as 'develop new ways to solve problems' [5]. Challenging constructive deviant behavior refers to employee behavior that breaks or openly challenges established norms in order to help the organization, measured by six items such as 'disturbing or breaking the rules in order to complete the work'. Interpersonal constructive deviant behavior refers to employees' deviant behavior towards other members of the organization, measured by five items such as 'disagreeing with the opinions of others in the working group in order to improve existing work procedures'. Past literature has found that employees with outgoing personalities [12] and higher income levels have a higher perception of fairness, which encourages them to engage in constructive deviance [13]. Robinson developed a scale of destructive deviant behaviors with 19 items [14], including seven items on interpersonal deviance such as 'being rude to others, forming gangs' [15], and 12 items on organizational deviance such as 'arrive late, leave early, resign'.
Studies have shown that employees with violent tendencies, low levels of education and fewer years of work experience show disregard for organizational rules [16], and that reduced compensation can lead to destructive deviant behaviors by employees. Under the global prevalence of novel coronavirus pneumonia, the general public has consciously self-quarantined, but employees inevitably come into contact with strangers during commuting and work, worry about their own health and struggle to balance work and health, causing them to suffer emotional exhaustion. Emotionally exhausted employees find it difficult to deliver efficient work performance. Negative work attitudes lead to low work completion, and workers arrive late, leave early, complain about leadership and even resign [17]. When an individual's emotional resources are exhausted, the employee's creativity and willingness to challenge decrease [18]. When employees are highly motivated in their work, they will take actions to improve existing work procedures, such as making suggestions and contradicting the opinions of other members of the working group [5]. This shows that employees with high emotional exhaustion are more sensitive to internal and external stimuli in the organization, and a surge in emotional exhaustion induces deviant behaviors in the workplace. Therefore, we pose the following hypotheses: H2a. Emotional exhaustion has a mediating effect between event strength and constructive deviant behavior. H2b. Emotional exhaustion has a mediating effect between event strength and destructive deviant behavior. --- The Emotional Exhaustion Has a Positive Effect on Deviant Behavior Positive emotions of employees are significantly related to constructive deviant behaviors in the workplace [19].
Employees actively breaking previous rules to improve work efficiency is a manifestation of organizational health [20]. When employees' happiness is higher, their creativity is stronger, their innovation performance is higher and they are less likely to resign. Constructive deviant behaviors of employees in the workplace are crucial to the survival and development of the organization [21]. In the context of the novel coronavirus pneumonia, employees have suffered a huge psychological impact. Negative emotions such as depression and anxiety have reduced their work input and production efficiency, which has had a negative impact on the development of companies [22]. When employees are in a state of emotional exhaustion, it is difficult to ensure that their behavior meets organizational expectations and system requirements. The accumulation of negative emotions such as depression, decadence, fatigue, fear and tension leads to decreased work motivation [23], or even unexcused absenteeism, passive laziness, complaining, shirking responsibility and other behaviors [24]. Therefore, we propose that: H3a. Emotional exhaustion has a negative effect on innovative constructive deviance. H3b. Emotional exhaustion has a negative effect on challenging constructive deviance. --- H3c. Emotional exhaustion has a negative effect on interpersonal constructive deviance. H3d. Emotional exhaustion has a positive effect on organizational destructive deviance. H3e. Emotional exhaustion has a positive effect on interpersonal destructive deviance. Based on the above analysis, this paper constructs a conceptual model of the influence mechanism of event strength on deviant behaviors, as shown in Figure 1. --- Research Design --- Research Sample Affected by the epidemic, this study used online questionnaires to collect data. The survey subjects were employees working in companies.
The survey content included event strength, emotional exhaustion, constructive deviant behavior, destructive deviant behavior and basic personal information, including gender, age, education, income and industry. To ensure the validity, authenticity and reliability of the information obtained, this research adopted a number of control measures to strictly manage all links in the research process. First, the survey participants were informed of the academic purpose of the survey in the opening instructions of the questionnaire and promised that all materials would be used only for academic research and that the content of their answers would be strictly anonymous and confidential, thereby eliminating participants' concerns. Second, this survey used the Questionnaire Star platform and the platform of the Marketing Research Office of Peking University to collect data, and adopted the 'snowball' method to collect questionnaires: the researchers contacted staff of institutions, state-owned enterprises and private enterprises in the Tianjin-Beijing-Hebei region, asking them to fill in the questionnaire and then pass it to friends or colleagues in their organizations to participate in the survey. Finally, answering times were controlled: each item had to take no less than 3 s, the time taken to answer the entire questionnaire was recorded, and questionnaires not filled in carefully were eliminated. A total of 700 questionnaires were returned. After excluding invalid questionnaires due to factors such as too-short answering times, incomplete filling and answering continuously with the same number, 628 valid questionnaires were obtained, for an effective response rate of 89.71%. The descriptive statistical information is shown in Table 1.
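The screening rules described above (complete answers, a minimum answering time per item, and rejection of questionnaires answered with the same number throughout) can be sketched roughly as follows. The record structure, item count and field names are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of the questionnaire-screening rules described in the text.
# Record structure, item count and field names are illustrative assumptions.

MIN_SECONDS_PER_ITEM = 3      # each item must take at least 3 s
N_ITEMS = 40                  # hypothetical total item count

def is_valid(response):
    """Keep a response only if it is complete, not rushed,
    and not answered with the same number throughout."""
    answers = response["answers"]
    if len(answers) < N_ITEMS or any(a is None for a in answers):
        return False                                    # incomplete
    if response["total_seconds"] < MIN_SECONDS_PER_ITEM * N_ITEMS:
        return False                                    # answered too quickly
    if len(set(answers)) == 1:
        return False                                    # straight-lining
    return True

responses = [
    {"answers": [4] * 40, "total_seconds": 300},        # same number throughout
    {"answers": [1 + i % 7 for i in range(40)], "total_seconds": 50},   # too fast
    {"answers": [1 + i % 7 for i in range(40)], "total_seconds": 240},  # valid
]
valid = [r for r in responses if is_valid(r)]
print(len(valid))   # only the last response survives screening
```

The same three checks can of course be expressed as filters in any survey-platform export tool; the point is that each rule is applied independently and a questionnaire failing any one of them is discarded.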
--- Measuring Tools The questionnaire comprised five parts: the event strength scale designed by Morgeson [1]; the emotional exhaustion scale designed by Maslach [4]; the constructive deviant behavior scale designed by Galperin [5]; the destructive deviant behavior scale designed by Robinson and Bennett [14], shown in Table 2; and the basic demographic information of the respondents. Excluding the basic information, all items in this study used a 7-point Likert method. Interviewed employees scored 1-7 on all items in the event strength and emotional exhaustion scales (1 = "completely disagree", 4 = "neutral", 7 = "completely agree"). In the destructive and constructive deviant behavior scales, respondents scored 1-7 (1 = "completely inconsistent", 4 = "fair", 7 = "completely consistent"). Table 2 (excerpt): Event Strength scale items by dimension. Event Novelty: there is a clear, known way to respond to this event; there is an understandable sequence of steps that can be followed in responding to this event; we can rely on established procedures and practices in responding to the event; we had rules, procedures, or guidelines to follow when this event occurred. Event Criticality: this event is critical for the long-term success of my company; dealing with emergencies is the primary event of my company; dealing with emergencies is an important event of my company. Event Disruption: this emergency destroyed the original work capacity of my company, making the work unable to be completed; this emergency made our company stop to think about how to deal with it; this emergency has changed our company's usual response to emergencies; the occurrence of this emergency requires our company to change its previous working mode.
--- Scales Dimensions Items (continued): disobeyed your supervisor's instructions to perform more efficiently; reported a wrong-doing to another person in your company to bring about a positive organizational change. --- Result Analysis --- Common Method Bias Test To avoid common method bias affecting the research results, SPSS 22.0 was used to perform Harman's single-factor test. The variance explained by the first unrotated factor was 42.546%, below the 50% threshold; therefore, the sample showed no serious common method bias. --- Reliability and Validity Test The reliability and validity of the questionnaire were tested, and SPSS 17.0 was used to calculate Cronbach's α for each scale to measure its reliability. The Cronbach's α values of the event strength, emotional exhaustion, constructive deviant behavior and destructive deviant behavior scales were all above 0.8, meeting the reliability standard and indicating that the questionnaire had good internal consistency. The KMO measure exceeded 0.8 and Bartlett's test was significant (p < 0.01), indicating that the questionnaire had good structural validity. --- Factor Analysis Varimax rotation was applied, factors with eigenvalues > 1 were retained, and exploratory factor analysis was performed on the event strength, emotional exhaustion, constructive deviant behavior and destructive deviant behavior scales. The event strength scale yielded three common factors, the emotional exhaustion scale one, the constructive deviant behavior scale three and the destructive deviant behavior scale two, and the total variance explained by the factors extracted from each scale was well above 50%. Therefore, the factors selected for each scale are judged to be representative and to explain the overall variables well.
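As a rough illustration of the two checks described above, Cronbach's α and Harman's single-factor test can both be computed from a respondent-by-item matrix. The data below are purely synthetic (one latent trait plus noise), so the numbers will not match the study's SPSS results; this is a sketch of the formulas, not a reproduction of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 7-point Likert data: 200 respondents, 6 items driven by one
# latent trait plus noise (illustrative only, not the study's data).
latent = rng.normal(0, 1, (200, 1))
items = np.clip(np.rint(4 + 1.2 * latent + rng.normal(0, 1, (200, 6))), 1, 7)

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def harman_first_factor_share(x):
    """Share of variance carried by the first unrotated factor:
    the largest eigenvalue of the correlation matrix over the item count."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))
    return eigvals[-1] / x.shape[1]

alpha = cronbach_alpha(items)
share = harman_first_factor_share(items)
print(f"Cronbach's alpha: {alpha:.2f}")
print(f"First-factor variance share: {share:.1%}")
```

In the study's framing, α above 0.8 indicates acceptable internal consistency, and a first-factor share below 50% indicates no serious common method bias.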
To further test the models derived from exploratory factor analysis, this study conducted confirmatory factor analysis to compare the fit of competing models. AMOS 17.0 was used to test the discriminant validity between the factors of the model; confirmatory factor analysis found that the fit indices of the nine-factor model were significantly better than those of the other competing models. Each factor's CR was > 0.7 and AVE > 0.5, indicating that the questionnaire had good convergent validity. --- Correlation Analysis To avoid collinearity problems among the variables, the correlation coefficients between variables were tested first, and the means and standard deviations of event strength, emotional exhaustion, constructive deviant behavior and destructive deviant behavior were calculated. For judging the correlation between variables, the closer the correlation coefficient |r| is to 1, the stronger the correlation; the closer to 0, the weaker. See Table 3 for details. Table 3 shows that there is no collinearity problem among the variables, so the following structural equation model test can be carried out to further explore the relationships between the variables. --- Analysis of the Differences in Demographic Variables The results for the demographic variables showed significant differences in the perception of event strength between female and male employees. Among the respondents, there were 339 female employees and 289 male employees, and female employees were more sensitive to the perceived event strength of the novel coronavirus pneumonia than male employees. There were significant differences in emotional exhaustion among employees of different ages. Among the respondents, 254 employees were under 30 years old, 138 were 30-40, 96 were 40-50 and 140 were over 50.
Employees of different ages showed different degrees of emotional exhaustion in the face of the epidemic (employees aged 30-40, under 30, 40-50, and over 50 differed); employees aged 30-40 were the most affected by the epidemic events. There were also significant differences in emotional exhaustion among employees with different education levels. The interviewees included 161 with college diplomas, 312 with bachelor's degrees, 95 with master's degrees, and 60 with doctoral degrees; employees with a master's degree or above were more affected by the novel coronavirus pneumonia than employees with a bachelor's degree or below. There were likewise differences in emotional exhaustion among employees with different family incomes. Among the respondents, employees with monthly incomes above 10,000 yuan showed the largest emotional fluctuations due to the epidemic, followed by employees with monthly incomes of 1-2 K; the employees with the smallest emotional fluctuations were those with monthly incomes of 3-5 K and 6-10 K. Other demographic variables were not significant. --- Hypothesis Testing 4.6.1. Testing the Effect of Event Strength on Emotional Exhaustion From Table 4 and Figure 2, the standardized path coefficient of event novelty on emotional exhaustion is 0.524 (p < 0.001), a significant positive effect, so H1a is supported. The standardized path coefficient of event criticality on emotional exhaustion is 0.574 (p < 0.001), a significant positive effect, so H1b is supported. The standardized path coefficient of event disruption on emotional exhaustion is 0.593 (p < 0.001), a significant positive effect, so H1c is supported.
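The demographic comparisons above are one-way ANOVAs (the F statistics appear in the abstract). The sketch below computes the F statistic by hand on simulated exhaustion scores for the four age bands, with group sizes taken from the text; the means are invented purely to mimic the reported pattern (30-40 highest, over-50 lowest), not the study's data.

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), all_x.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical emotional-exhaustion scores per age band (sizes from the text)
rng = np.random.default_rng(1)
groups = [
    rng.normal(3.2, 0.8, 254),  # under 30
    rng.normal(3.6, 0.8, 138),  # 30-40 (highest mean, per the findings)
    rng.normal(3.1, 0.8, 96),   # 40-50
    rng.normal(2.9, 0.8, 140),  # over 50
]
F = one_way_anova_F(groups)
print(f"F = {F:.2f}")
```

Group differences of this size at these sample sizes produce an F well above conventional critical values, matching the "significant differences by age" conclusion.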
--- Testing the Mediating Effect As shown in Table 5, the standardized path coefficient of event strength on emotional exhaustion is 0.624 (p < 0.001); of emotional exhaustion on constructive deviant behavior, 0.205 (p = 0.001); of emotional exhaustion on destructive deviant behavior, 0.139 (p = 0.019); of event strength on constructive deviant behavior, 0.435 (p < 0.001); and of event strength on destructive deviant behavior, 0.512 (p < 0.001). According to the stepwise mediation test procedure, we first analyze the effect of the independent variable (event strength) on the dependent variables (constructive and destructive deviant behavior); if this relationship is not significant, the mediation test stops. Second, we examine whether the independent variable affects the mediating variable (emotional exhaustion); if this relationship is not significant, the test stops. Finally, we test whether both the independent variable and the mediating variable affect the dependent variable. If event strength has no significant direct effect on deviant behavior while emotional exhaustion has a significant effect on constructive and destructive deviant behavior, full mediation is inferred; if the direct effect of the independent variable on the dependent variable remains significant, partial mediation is inferred. In this study, AMOS was used to run a bootstrap test of the mediation effect. The analysis results are shown in Table 6.
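The stepwise decision logic described above can be written down explicitly. This is a sketch of the decision rule only, applied to significance flags; it does not reproduce the AMOS estimation itself.

```python
def classify_mediation(total_sig: bool, a_sig: bool,
                       b_sig: bool, direct_sig: bool) -> str:
    """Apply the stepwise mediation logic described in the text.
    total_sig:  X -> Y total effect significant (step 1)
    a_sig:      X -> M path significant (step 2)
    b_sig:      M -> Y path significant, controlling for X (step 3)
    direct_sig: X -> Y direct effect significant, controlling for M
    """
    if not (total_sig and a_sig and b_sig):
        return "no mediation established"
    return "partial mediation" if direct_sig else "full mediation"

# For event strength -> emotional exhaustion -> constructive deviance,
# the paper reports all four paths significant:
print(classify_mediation(True, True, True, True))  # partial mediation
```

Because the direct effect of event strength on both forms of deviance remains significant (Table 6), both H2a and H2b land in the "partial mediation" branch, consistent with the paper's conclusion.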
The total effect of event strength on constructive deviant behavior is 0.571; the bias-corrected interval is 0.471-0.655 and the percentile interval is 0.47-0.655, both excluding zero, indicating that event strength has a significant overall positive effect on constructive deviant behavior. The total effect of event strength on destructive deviant behavior is 0.521; the bias-corrected interval is 0.408-0.627 and the percentile interval is 0.408-0.627, both excluding zero, indicating that event strength has a significant positive effect on destructive deviant behavior. The first step of the mediation test is thus passed. Examining the indirect effect of event strength on constructive deviant behavior through emotional exhaustion, the indirect effect is 0.059; the bias-corrected interval is 0.006-0.125 and the percentile interval is 0.003-0.122, both excluding zero, showing that event strength has a significant indirect positive effect on constructive deviance through the mediating variable, emotional exhaustion. In addition, the indirect effect of event strength on destructive deviance through emotional exhaustion is 0.087; the bias-corrected interval is 0.018-0.168 and the percentile interval is 0.016-0.167, both excluding zero, indicating that event strength has a significant indirect positive effect on destructive deviance through emotional exhaustion. Thus, the second step of the mediation test is passed. Finally, we examine the direct effect of event strength on constructive deviance.
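The "interval excluding zero" criterion above comes from a percentile bootstrap of the indirect effect a·b (the X→M slope times the M→Y slope controlling for X). A minimal sketch on simulated data with a genuine indirect path, assuming ordinary least squares for each path (the paper used AMOS; this is an illustration of the bootstrap logic, not a replication):

```python
import numpy as np

def bootstrap_indirect_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b, where
    a is the slope of m on x, and b is the slope of y on m
    controlling for x (multiple regression)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Simulated data with a real indirect path x -> m -> y (true a*b = 0.24)
rng = np.random.default_rng(42)
x = rng.normal(size=300)
m = 0.6 * x + rng.normal(scale=0.8, size=300)
y = 0.4 * m + 0.3 * x + rng.normal(scale=0.8, size=300)
lo, hi = bootstrap_indirect_ci(x, m, y)
print(f"95% CI for a*b: [{lo:.3f}, {hi:.3f}]")
```

Here the interval excludes zero, which is exactly the criterion the paper applies to its intervals of 0.006-0.125 and 0.018-0.168 to declare the indirect effects significant.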
The direct effect is 0.512; the bias-corrected interval is 0.397-0.609 and the percentile interval is 0.402-0.611, both excluding zero, indicating that event strength has a significant direct positive effect on constructive deviance. Emotional exhaustion therefore plays a partially mediating role between event strength and constructive deviance, so H2a is supported. The direct effect of event strength on destructive deviance is 0.435; the bias-corrected interval is 0.31-0.542 and the percentile interval is 0.31-0.541, both excluding zero, indicating that event strength has a significant direct positive effect on destructive deviance. Emotional exhaustion therefore plays a partially mediating role between event strength and destructive deviance, so H2b is supported. --- Testing the Effect of Emotional Exhaustion on Deviant Behavior As shown in Table 7 and Figure 2, the standardized path coefficient of emotional exhaustion on innovative constructive deviance is 0.408 (p < 0.001), a significant positive effect, so H3a is not supported. The standardized path coefficient of emotional exhaustion on challenging constructive deviance is 0.672 (p < 0.001), a significant positive effect, so H3b is not supported. The standardized path coefficient of emotional exhaustion on interpersonal constructive deviance is 0.232 (p = 0.015), with no significant effect, so H3c is not supported. The standardized path coefficient of emotional exhaustion on organizational destructive deviance is -0.711 (p < 0.001), a significant negative effect, so H3d is not supported. The standardized path coefficient of emotional exhaustion on interpersonal destructive deviance is 0.482 (p < 0.001), a significant positive effect, so H3e is supported.
--- Conclusions and Discussion --- Research Conclusions This research is based on event system theory, starting from the event strength perceived by the public and exploring the changes in employees' psychological state and workplace behaviors after stimulation by the external environment. As China's novel coronavirus pneumonia was gradually brought under control and enterprises resumed work and production, 628 valid questionnaires from employees of enterprises in the Tianjin-Beijing-Hebei region were collected online. Our empirical research found that: female employees are more sensitive to the perceived event strength of the novel coronavirus pneumonia than male employees; young and middle-aged employees aged 30-40 have the highest level of emotional exhaustion and the greatest pressure, while employees over 50 show the smallest emotional fluctuations; the higher the education level, the higher the perceived event strength, leading to higher levels of emotional exhaustion; employees' workplace emotional exhaustion is polarized by annual family income; the event novelty, disruption, and criticality of the novel coronavirus pneumonia have a positive effect on employees' emotional exhaustion; and emotional exhaustion plays a partially mediating role between event strength and both constructive and destructive deviance. Although employees affected by the epidemic experience emotional exhaustion, their innovative and challenging constructive deviance has generally increased: employees must, actively or passively, break with their original working methods, adapt to the external environment, and improve work efficiency. Employees' emotional exhaustion has no significant impact on interpersonal constructive deviant behaviors; due to the limitations of shared working space, employees cannot interact in a timely way.
The interviewed employees mostly adopted methods such as reducing their own role conflicts, drawing work-family boundaries, adapting to new working methods, improving their economic security, and even taking on new fields of work to deal with the impact of the novel coronavirus pneumonia on family life and to manage their anxiety [25]. Emotional exhaustion has a significant positive effect on interpersonal destructive deviance but a negative effect on organizational destructive deviance. Affected by the epidemic, global economic development stagnated, and a large number of companies implemented large-scale layoffs to reduce operating costs, producing a surging unemployment rate. Freelancers were forced to close their shops and suffered heavy economic losses. In the face of this major global public health incident and the severe economic situation, young and middle-aged employees carry financial pressures such as mortgages, car loans, and family burdens. Even when employees are nervous, fearful, and anxious, they do not engage in organizational deviant behaviors such as arriving late, leaving early, or quitting. On the contrary, most of the interviewed employees work harder, hoping not to be laid off, and resolve their inner dissatisfaction through interpersonal deviance such as complaining and forming cliques [26]. --- Discussion This study found that, under the stimulation of major events outside the organization, female employees, young and middle-aged employees, highly educated employees, and employees with higher or lower income levels were more sensitive to the epidemic, and that emotional exhaustion positively affected interpersonal destructive deviance. This is consistent with the findings of affective events theory [27]. Affective events theory holds that employees will inevitably encounter events at work that prompt positive or negative emotions.
Because emotions are mobilized, employees' behavior changes, but different individuals respond emotionally to the same event in different ways. A comparative study of job burnout between managerial and non-managerial staff found that the emotional exhaustion of managerial employees is higher than that of non-managerial employees, and that the level of emotional exhaustion of female employees is higher than that of male employees [28]. Workplace experience affects employees' motivation, behavior, and work performance through the medium of emotion. When employees encounter setbacks, they develop negative emotions and show more destructive deviant and aggressive behaviors at work [29]. When employees lack organizational support, managers do nothing, and colleagues shirk their responsibilities, employees suffer emotional exhaustion and take actions to repair themselves [30], venting at and retaliating against organizational property, the organizational environment, and organizational members [31] through destructive deviant behaviors such as stealing, destroying public property, insulting colleagues, sabotage, arriving late and leaving early, and resigning [32]. Positive emotions, in contrast, can trigger more creative thinking and behavior [33], making individuals more focused and flexible in problem-solving and more willing to communicate and collaborate with other members of the organization, improving work efficiency [34]. Positive emotions can motivate employees to break stereotypes, dare to challenge, and better adapt to the external environment [35]. The contribution of this article lies in exploring, on the basis of event system theory, the unconventional changes in employee emotions and workplace behaviors under the epidemic. Normally, when employees have negative emotions, their innovation consciousness gradually declines, and most employees tend to stick to stereotypes.
From previous studies it is also difficult to conclude that major events outside the organization can prompt employees to actively innovate and challenge themselves. However, the current results reveal that, in the context of the outbreak of the global novel coronavirus pneumonia, employees have internalized emotional exhaustion into work motivation, promoted innovative and challenging constructive deviance, cherished job opportunities, and reduced destructive organizational deviance. This article supplements and enriches research on workplace deviant behaviors and provides suggestions for companies to reasonably ease employee emotions and balance labor costs. --- Management Implications Some actions companies can take to address this situation are: 5.3.1. Establish an "Employee Care Plan" to Relieve Employees' Negative Emotions In the face of a sudden epidemic, enterprises should popularize epidemic-prevention knowledge, provide epidemic-prevention supplies for employees who come to work during the epidemic, and implement isolated office work. Enterprises should also pay special attention to female employees, young and middle-aged employees, highly educated employees, high-income groups, and low-income groups, so as to preserve the level of human capital in the enterprise. Breastfeeding employees should be allowed to work from home in accordance with their physical condition. Young and middle-aged employees shoulder the economic pressure of caring for the elderly, raising children, and even housing and car loans; they are also the backbone of the enterprise, and they should be encouraged to turn pressure into motivation. According to its own situation, the company can offer transportation subsidies, meal subsidies, and distributions of daily necessities and food to show its care for staff.
Cultivate in senior managers an awareness of a "community of common destiny" and of the overall situation of the country, the nation, and the enterprise [36]. Try the "employee sharing" mode of cooperation among enterprises [37] to reduce labor costs, provide economic security for low-income people, and reduce the turnover rate. --- Open Long-Term Communication Channels and Give Employees Space for Independent Decision-Making Using DingTalk, WeChat groups, and enterprises' internal office platforms, realize cooperation and information sharing among departments. Divide work tasks, form working groups, specify work nodes, submit daily work progress, and establish a flat management mode. Focus on results, relax the work process, set only necessary restrictions, schedules, and strategic deployments, give employees flexible space to work, and encourage employees to draw work-family boundaries [38]. Break the bureaucratic atmosphere in the workplace and flatten management. Accept employees' creativity and recognize the proactive changes employees make to better complete their work. A special fund should be set up to give monetary or material rewards to employees who have made outstanding contributions to the organization. --- Research Limitations Although this article reveals the mechanism by which employees' perception of events outside the organization influences their workplace behavior, it still has certain limitations. First, our research uses a cross-sectional questionnaire survey, so the causal relationships we propose among event strength, emotional exhaustion, and deviant behavior may be reversed. Longitudinal data could be used in the future to rule out spurious relationships. Second, the model proposed in this study is not comprehensive.
From the related literature, we find that emotional exhaustion is not the only factor mediating the effect of event strength on workplace deviant behavior; other factors could be explored, such as corporate culture, leadership style, and employee competence. Finally, the data we collected come only from the Tianjin-Beijing-Hebei region, so some findings may differ across regions with different epidemic severity, which may limit the generalizability of our results. The sample scope should therefore be expanded in the future to further test cross-regional differences. Author Contributions: All authors contributed to this work. Specifically, Y.L. developed the original idea for the study and designed the methodology. Y.L. and H.Z. carried out the survey and drafted the manuscript, which was revised by Z.Z. All authors read and approved the final manuscript. ---
Background: Since the beginning of 2020, the Corona Virus Disease (COVID-19) has broken out globally. This public health incident has had a great impact on the work and life of the public. Aim: Based on event system theory, this article explores the influence of the COVID-19 event on emotional exhaustion and deviant workplace behaviors. Methods: The survey targeted employees working in Tianjin, Beijing, and Hebei who were affected by the epidemic. Using Questionnaire Star, the online platform of the Marketing Research Office of Peking University, and "snowball" sampling, 700 questionnaires were collected. Results: The response rate was 89.71% (n = 700). Female employees are more sensitive to the perceived event strength of the novel coronavirus pneumonia than male employees (F = 10.94, p < 0.001); employees aged 30-40 affected by the epidemic have the highest level of emotional exhaustion (F = 5.22, p < 0.01); a higher education level leads to a higher level of emotional exhaustion (F = 4.74, p < 0.01); emotional exhaustion is polarized with annual family income (F = 4.099, p < 0.01). Conclusions: The novelty, disruption, and criticality of the COVID-19 event had a positive impact on the emotional exhaustion of employees in the workplace; emotional exhaustion plays a partly mediating role between event strength and both constructive and destructive deviant behaviors. Emotional exhaustion has a positive effect on innovative constructive deviant behaviors, challenging constructive deviant behaviors, and interpersonal destructive deviant behaviors; it has a negative impact on organizational destructive deviant behaviors and no significant impact on interpersonal constructive deviant behaviors.
Introduction The world tourism industry is facing the effects of the Covid-19 pandemic. Tourists' travel risk and management perceptions are a crucial factor in their decisions to travel to destinations during the ongoing uncertainty of the Covid-19 epidemic. These perceptions can influence tourists' psychological behavior regarding travel to destinations [1,2]. Tourists may view travel risk and management issues differently due to the spread of the pandemic, and they will avoid visiting destinations they consider risky [3]. Tourists' travel risk and management perceptions are associated with tourism destinations and are multidimensional, with uncertain outcomes due to the impact of Covid-19. It is therefore difficult to recognize common risk and management dimensions for developing a theoretical foundation based on tourists' risk and management perceptions and their outcomes. Because travel risk is a crucial concept during the Covid-19 pandemic, this study explores and evaluates tourists' travel risk and management perceptions associated with tourism attractions. The Covid-19 pandemic has overturned all previous narratives on development. Lockdowns on the largest scale in human history have been imposed by governments around the world to control the spread of the pandemic. The consequences of this pandemic could change many aspects of human life and business, including tourism management, as almost half of the global population adopted restrictions on movement at an unprecedented scale. Covid-19 is an infectious disease caused by a new strain of coronavirus: "Co" stands for corona, "Vi" for virus, and "D" for disease; the disease is also referred to as the 2019 novel coronavirus, or 2019-nCoV. The impact of the Covid-19 pandemic is expected to have adverse results for the tourism sector and the economy worldwide [4].
Economic estimations foresee diminished financial development and show negative attitudes toward residents from countries most heavily affected by the Covid-19 pandemic [5]. The Covid-19 pandemic started in Wuhan, China in December 2019 [6,7] and reached other countries by February 2020. It has had wide-ranging effects, and countries around the world are looking for sustainable development approaches to mitigate its negative impact. The pandemic has been calamitous for every country's economic recovery, has brought the travel industry to a standstill, and has social consequences, including long-term health issues among those infected and the loss of friends and family. Covid-19 also has psychological effects [8], and it appears essential to identify and address them appropriately in order to control the spread of infection directly [6]. Societal well-being or safety measures through lockdowns can control the spread of infection [5]. However, when such safety measures are excessively strict, they can hold back the development of the tourism industry, interrupt economic development, and increase the unemployment rate. It is reported that the business world today is directly or indirectly impacted by external factors such as financial, sociocultural, global, political, and technological conditions [4]. Changes in these factors lead to changes in industry business performance, whether region-specific or worldwide. The world is aware of the Covid-19 pandemic, but its social outcomes remain ambiguous [9]. Although China, the United States, and other developed countries have produced vaccines and started vaccination, most developing countries are struggling to obtain vaccines for protection against the outbreak of the Covid-19 epidemic. Many countries lack healthcare safety and security for handling Covid-19 patients, and face shortages of doctors, vaccines, and testing facilities.
Covid-19 is a global phenomenon, and it may soon appear as an established external factor in curricula on strategic management for business performance and emerging tourism marketing. Other factors are mostly controllable by social frameworks and individuals [4]; pandemics are generally uncontrollable because they appear suddenly everywhere. The travel and tourism sector is particularly driven by changes in external factors, given the nature of political and financial systems. The travel industry involves various sectors and contributes to their advancement and to the global value of tourism management. The effect of the Covid-19 pandemic on tourism destinations, tourists' behavior, and their preferences is irrespective of district or nationality. Earlier studies [9,10] have examined the connection between pandemics and tourism in terms of risk. A few studies [11] analyzed tourism restrictions on the spread of the Covid-19 pandemic and explained how destinations decided to react to a pandemic. Travel and tourism is one of the largest industries in the world [12,13]; however, the hospitality and tourism industry is currently highly sensitive to significant shocks. It is crucial to investigate how the tourism industry will recover from the effect of the Covid-19 pandemic. The rapid transmission and high mortality rate of the Covid-19 pandemic have led the scientific community to monitor its spread [14]. The pandemic encourages the continuation of social quarantine and adverse financial effects. Clinicians and researchers have expressed concerns about the negative effects of the Covid-19 epidemic on people's health and behaviors [15]. Recently, a few studies have discussed Covid-19 from healthcare perspectives [5,8], and some focus on risk management of the Covid-19 pandemic [16,17].
Some researchers [18] focus on the travel and tourism crisis, while others [10] proposed procedures to prevent potential biosecurity threats from worldwide pandemic outbreaks. One study [19] focused on the Covid-19 pandemic and its effect on Chinese residents' lifestyles and travel, which sheds light on long-term patterns of behavior and tourism destinations. A few countries have taken explicit steps to suspend their visa-on-arrival policies and initiate strict travel bans to control the spread of the pandemic. Another study [20] reported that the Covid-19 epidemic has brought economic collapse to Singapore, Bali, Barcelona, Rome, and other places that were once tourist attractions. The effects of this outbreak on the world's travel and tourism industry have been extensively debated by industry practitioners, government tourism departments, and the academic community. Most countries decided to close their borders and suspend airline services due to the Covid-19 pandemic. The United Nations World Tourism Organization reported a global crisis in the tourism industry: Covid-19 is responsible for a decline in international tourist arrivals with estimated losses of US$300-450 billion [19]. This is considerably worse than the effect of SARS in 2003 [21]. The Covid-19 pandemic has affected many countries, and the global tourism industry faces a terrible situation in which businesses have closed, lives have been lost, and people are on high alert for social safety. Earlier studies [8,9,22,23] indicate that the academic community should provide timely research for everyone's benefit across healthcare, the social sciences, and the hard sciences. Accordingly, the existing study aims to investigate the social impact of the Covid-19 epidemic on tourism destinations and on tourists' behaviors and preferences during this pandemic.
This investigation likewise explains how global travel and hospitality practices are likely to change because of the pandemic. The study draws on a synthesis of early literature and published news and reports related to tourism management, marketing, healthcare, and tourist behavior, and on this basis develops a conceptual model for empirical assessment. For post-Covid-19 business recovery, these insights will help tourism operators, managers, marketers, and industry practitioners tailor their tourism products and services. --- Literature review --- Underpinning theory This study uses pathogen-stress theory [24] to evaluate travel risk and management perceptions under Covid-19 uncertainty and to explain human behavior on societal issues. Some authors [25] have explored the influence of pathogen threat in the context of the Covid-19 epidemic. Personality traits are predicted by a parasite-stress theory of human sociality that highlights the infection risks related to interaction with conspecifics [24,26]. Here, travel risk and management perception refers to the risk of human-to-human transmission. Infection risks are connected to the openness of human contact: increased contact with many group members implies a higher risk of human-to-human transmission. According to this theory, when people develop in a parasite-infested environment, they become less open to visitors, less curious, and less exploratory, reducing their chance of infection. The theory emphasizes not only cultural differences but also variation over space, such as between different human populations. Generalizing pathogen-stress theory, this study explores the effect of the Covid-19 epidemic and its impact on travel risk and management perceptions.
--- Effect of the Covid-19 pandemic Covid-19 is a new pandemic that first erupted in December 2019 in China and spread rapidly across the world through human-to-human transmission. Most countries are instituting short-term travel restrictions to stop the spread of infection, which increases the concern caused by the Covid-19 pandemic for the tourism industry worldwide [5]. Researchers should consider the previous disasters of the 2003 SARS outbreak [27] and the 2004 tsunami in Sri Lanka [28] for lessons on how to manage crisis and disaster [19]. Tourists prefer inclusive tourism packages, safety, and security when travelling to popular destinations. They want to avoid risky and crowded tourism destinations, and they may decide not to visit destinations whose appeal to their well-being has diminished after the outbreak. The Covid-19 pandemic has already brought severe concern to the world tourism industry and its niche markets. The United Nations [21] reports that the recent circumstances of the tourism sector are very poor due to the pandemic. This crisis has expanded worldwide, and the Covid-19 pandemic easily destabilizes international tourists' emotional stability. The Covid-19 epidemic has greatly affected tourists' travel risk and management perceptions. Researchers [19] have urged practitioners to explore tourists' travel behavior toward tourism destinations. The existing literature shows no empirical examination focusing on the impact of the Covid-19 pandemic on tourists' travel risk and management perceptions. Thus, we propose the hypothesis: H1. The fear of the Covid-19 pandemic affects tourists' travel risk and management perception. --- Tourists' travel risk and management perception Travel risk and management perception refers to the evaluation of a situation concerning the risk involved in making travel decisions about destinations [1].
Travellers' risk and management perception is a key factor for tourism destinations. Risk management refers to the practice of recognizing the potential risks facing the travel and tourism industry due to the current pandemic, analyzing them, and taking preventive steps to reduce them. Many countries have started to recover from crises around tourism events [2]. Travel arrangements should be organized to minimize tourists' risk and stress; for example, tourists should purchase insurance when booking trips to destinations. Researchers [29] state that the travel and tourism industry is vulnerable to risks including crisis events, epidemics, pandemics, and other threats to tourists' safety. Previous studies indicate that risk restricts travel and negatively affects tourism demand [30][31][32]. Other authors [33] found that perceived risk negatively affects tourists' destination perceptions. This study postulates that: H2. Tourists' travel risk and management perception have a significant impact on risk management. Travel risk is reflected in the cancellation of flights due to travel restrictions and travel risk and management perceptions. Travel cancellation leads to negative emotion, anxiety, and disappointment among tourists [34]. In line with this, service delivery and service efficiency are crucial to tourism initiative performance, and service failure can negatively affect travel destinations. Previous studies indicate that tourists' travel risk and management perceptions may negatively influence their decision-making [35,36]. Professional service delivery and timely responses could reduce tourists' travel risk and management perceptions. Studies [36] identified that some restaurants refused to provide service to Chinese people; such racial discrimination may increase tourists' travel risk and management perceptions toward destinations.
A research study [4] stated that a public health crisis can affect tourists' dining behavior. Thus, tourists should avoid eating in restaurants and instead order delivery to minimize social interaction and avoid unnecessary contact with people during the pandemic. Therefore, this study postulates that: H3. Tourists' travel risk and management perception have a significant relationship with service delivery. The travel behavior of people changes at the individual level due to the Covid-19 pandemic across the globe [37]. It is difficult to change transportation patterns in public areas and crowded public transit within a country. Articles [4] reported that bike or ride-sharing services could be an alternative to more crowded transit options in the wake of the Covid-19 pandemic. Social distance is important to avoid crowded areas; thus, the availability of different transportation options within a country can help tourists decide to visit their desired tourism places. Another study [38] stated that the transportation network is vulnerable to disturbance due to movement restrictions. Research [39] indicated that the use of public transport carries a higher risk of Covid-19 infection in Budapest. This study proposes the following hypothesis: H4. Tourists' travel risk and management perception are positively related to travel pattern. The distribution channel refers to the shift from traditional travel agencies to online agents for purchasing tour packages, booking hotels and buying tickets [4]. Distribution channels are the intermediaries through which products and services pass to the end customers. Authors [40] stated that customer behavior has a significant link with purchase behavior, destination choice, experience sharing, and information searches. Information technology can reduce the need for person-to-person communication, thereby lowering an individual's travel risk and management perceptions [41].
For instance, people can work at home without travelling to the office, engage in distance learning, order products and services online, and perform banking transactions virtually. People use technology for travel-related purposes such as booking holidays, offering instant vendor feedback, and comparing travel destinations, which helps reduce travel risk and management perceptions. Therefore, we propose that: H5. Tourists' travel risk and management perception have a significant influence on distribution channels. Covid-19 spreads through human-to-human transmission; thus, it is crucial to avoid overpopulated destinations. The term overpopulated destination is a neologism describing overcrowding at a holiday destination. A collaborative work [42] indicated that pathogen threats make people alert and prompt them to avoid overpopulated destinations. This tendency will initiate a shift in people's travel behavior and reduce tourists' travel risk and management perceptions through the avoidance of overpopulated destinations [43]. It has been reported that social distancing can help prevent infection during the Covid-19 epidemic [44]. According to several studies [4,45,46], tourism locations are plagued by overcrowding, so tourism operators should identify the best way to manage tourist flows to ensure the safety, well-being and risk perceptions of visitors. This study proposes that: H6. Tourists' travel risk and management perception have a significant impact on the avoidance of overpopulated destinations during the Covid-19 pandemic. The Covid-19 pandemic has made people conscious of hygiene and safety. People are concerned about their safety and hygiene needs in public transport, hotels and recreational sites [47]. To reduce the spread of Covid-19, the use of face masks can help protect people's hygiene and safety [4,48]. The Covid-19 pandemic has greatly affected tourists' travel decisions and their health, safety and hygiene [4].
This implies that safety and hygiene can be a significant factor in the travel risk and management perceptions of tourists, because perceived risk largely concerns safety and hygiene, including health-related issues. Potential tourists generally seek destinations with safety and hygiene, cleanliness, established infrastructure, and high-quality medical facilities during the Covid-19 pandemic [4]. Thus, this study postulates that: H7. Tourists' travel risk and management perception have a significant impact on destinations' hygiene and safety. Based on the existing theoretical and empirical assessment, this study proposes a conceptual model. --- Methodology Survey instrument. This study uses explicit statements to measure respondents' responses to the given factors of the Covid-19 epidemic, tourists' travel risk and management perceptions and their social traits. Studies [49] support that this method helps respondents understand the survey measurement items. This study uses multiple measurement items for all constructs to overcome the limitations of using a single item. Specifically, five measurement items were adapted from [8] and [19] to evaluate the effect of the Covid-19 pandemic. A total of four questions measuring travel risk and management perception were adapted from previous studies [19,50]. The five measurement items used to evaluate risk management in light of tourists' travel risk and management perceptions of destinations were adapted from [5] and [19], while the three questions related to service delivery were adapted from [19]. Three measurement items based on [19] were designed to evaluate transportation patterns, and three questions based on [41,51] and [19] measured distribution channels. Four items were adapted from [44] and [4] to measure the avoidance of overpopulated destinations, while four items were developed from [4,48] to evaluate hygiene and safety.
All measurement items under the constructs were assessed using a seven-point Likert scale from strongly disagree to strongly agree. --- Survey administration and sample The data were collected with a self-administered questionnaire to examine the conceptual model of this study. The questionnaire was pretested to verify the validity of the survey instrument. To ensure content validity, the researchers conducted a pilot test among 50 international tourists. A reliability test was employed to obtain the Cronbach's alpha value of all constructs and confirm the reliability of the survey questions [52]. An English-language questionnaire was used for data collection, as most participants were educated and able to answer the survey questions. The questionnaire was delivered through an online survey using Google platform tools and highlighted the main purpose of this study. We described the survey procedure to the respondents before they participated in this study. The researchers politely contacted respondents through the online platform, explained the purpose of the study and asked for their consent to participate. We assured respondents that the data would be collected for academic purposes only and that no other authorities would have access to this information. We also assured respondents that they would remain anonymous, because participants were not required to provide their names, addresses or mobile numbers. The survey questionnaire link was shared on social media for data collection. The researchers also collected email addresses from respondents through LinkedIn and sent them a link to the survey questionnaire. The online questionnaires could be completed on respondents' smartphones, laptops or computers.
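The pilot-test reliability check described above relies on Cronbach's alpha. As a minimal sketch, the following computes alpha for a block of Likert-scale items; the response matrix here is hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical pilot responses: 5 respondents x 3 items (7-point Likert)
pilot = np.array([
    [7, 6, 7],
    [5, 5, 6],
    [6, 6, 6],
    [3, 4, 3],
    [6, 5, 6],
])
print(round(cronbach_alpha(pilot), 3))
```

A value above the conventional 0.70 cut-off (cited later in the paper) would indicate acceptable internal consistency for the construct.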
The complete survey questionnaire consisted of 63 items and took approximately 20 minutes to complete. We adopted a cross-sectional design and collected data from 731 international tourists via an online survey from the second week of April to the first week of July 2020. Before collecting the data, an ethical research approval letter was obtained from the Jiujiang University Research Ethics Committee. An introductory letter and consent form were also obtained from the ethics committee, which clearly expressed the reason for this study to acquire consent from the respondents. We sent a consent form asking respondents whether they were willing to participate in this study. The respondents are individual tourists who visited different tourism destinations around the globe. In line with this, we used a representative sampling method to collect data from different geographic areas, namely the Middle East, Asia, Africa, Australia, Europe, and America. A representative sample covers a part of the population and allows researchers to approximate the entire population. Studies [53] indicate that a representative sample can accurately reflect the characteristics of the larger group. A total of 1000 questionnaires with consent forms were sent via the Google platform and 731 were returned, a return rate of 73.1%. Fifteen returned questionnaires were only partially completed and thus unusable. The usable response rate was approximately 71.6%. The respondents' answers to the open-ended question were hand-coded and checked by the researchers. The minimum sample size was determined through an a priori power calculation.
We considered recruiting at least 716 respondents because this would provide satisfactory power (0.80) to detect an expected correlation coefficient of 0.20. We used a large sample size since this increases the statistical power for detecting small effects and strengthens the robustness of the results. --- Data analysis method We used SmartPLS 3.0 software to test the hypothesized relationships among the constructs. The partial least squares method is an appropriate statistical technique since it can prevent specification errors and improve the reliability of the results, as well as provide better outcomes and minimize structural errors [54]. This method is suitable for examining the hypothesized relationships of the study [55]. The PLS method consists of two steps, the measurement model and the structural model [56], both of which were analyzed in this study. --- Multivariate normality and common method variance Structural equation modeling using the partial least squares method does not require multivariate normality in the data, because it is a non-parametric assessment instrument [57]. It has been suggested [58] that multivariate data normality can be tested using the online WebPower tool. We ran WebPower and the results revealed that the data set is not normal, because the multivariate coefficient p-values [59] were less than 0.05 [60,61]. In social science research, common method variance is common due to data collection procedures. We ran Harman's one-factor test [62,63] to evaluate the effect of common method variance on the constructs of the study. The result revealed that common method variance is not a critical matter in this study, because the main factor explained 33.45% of the variance, less than the suggested limit of 50% [64]. --- Data analysis --- Demographic characteristics The majority of the respondents were male, whereas 33.3% were female.
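The Harman-style single-factor check described above amounts to asking how much of the total variance the first unrotated factor captures; values under 50% suggest common method variance is not a serious concern. The sketch below illustrates the idea using the largest eigenvalue of the item correlation matrix on simulated responses (the data here are illustrative, not the study's).

```python
import numpy as np

def harman_single_factor_variance(data: np.ndarray) -> float:
    """Share of total variance captured by the first unrotated
    principal component of the item correlation matrix."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)   # eigenvalues in ascending order
    return eigvals[-1] / eigvals.sum()   # largest eigenvalue / total

# Simulated responses: 200 respondents x 8 items with a weak shared factor
rng = np.random.default_rng(0)
common = rng.normal(size=(200, 1))               # shared "method" factor
data = 0.5 * common + rng.normal(size=(200, 8))  # plus item-specific noise
share = harman_single_factor_variance(data)
print(f"first factor explains {share:.1%} of variance")
```

With a weak shared factor like this, the first component typically explains around a third of the variance, comfortably below the 50% limit the paper cites.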
In terms of marital status, 59.9% of respondents were married, followed by single and divorced respondents. The majority of respondents had a bachelor's degree, followed by a master's degree, a secondary school/diploma degree (14.0%), and a PhD. The results indicated that around 87.5% of respondents had not been infected by Covid-19, whereas 1.2% had been infected and 11.3% did not know whether they had been infected. In terms of travel purpose, the majority of respondents travelled for leisure/holiday or shopping purposes, followed by education/conference, healthcare, other purposes and business. The respondents' age groups were: between 18-29 years old, between 30-39 years old, between 50-59 years old, and above 60 years old. The majority of respondents were private employees, followed by government employees and the unemployed. The respondents' monthly incomes were: less than USD2000, between USD2001-USD5000, between USD5001-USD7000, between USD7001-USD10000, and above USD10000. The majority of respondents were from the Middle East, followed by Asia, Africa, Australia, Europe, and America. --- Measurement model analysis In this study, we examined two types of validity, convergent validity and discriminant validity, to evaluate the measurement model. Convergent validity is assessed with two major coefficients, composite reliability (CR) and average variance extracted (AVE). To measure convergent validity, the factor loading of each construct should be considered and compared to a threshold. Studies [55] report that loadings should be greater than 0.70 to establish convergent validity. Researchers [56] postulate that items with factor loadings lower than 0.40 should be considered for elimination.
The findings revealed that the majority of the indicator loadings on their corresponding latent variables are greater than 0.80, indicating the high convergent validity of the model. The CR coefficient was used to measure construct reliability. The results showed that its value exceeded 0.80 for all latent variables, indicating acceptable construct reliability. The AVE of all latent variables exceeds the threshold of 0.50 [56], which signifies that the convergent validity of the measurement model is acceptable. The Cronbach's alpha values exceeded the cut-off point of 0.70 [54], indicating that internal reliability reaches an acceptable level. The rho-A values exceeded the 0.70 threshold and the variance inflation factors were lower than 3.3, indicating that there is no multicollinearity issue in the model. Discriminant validity is the extent to which each latent variable is distinct from all other variables in the model [56]. Researchers [55] argue that the square root of the AVE for each variable should be higher than all of the correlations between that variable and the other variables in the model. Table 2 shows the square roots of the AVE for the variables along the diagonal and the correlations among the indicators. The findings revealed that the square root of the AVE is higher than all other values in the same row and column, which indicates that the model meets acceptable discriminant validity. We also used the Heterotrait-Monotrait Ratio (HTMT) to estimate the discriminant validity of the model [65]. The results indicated that the HTMT values are lower than 0.90, indicating that discriminant validity meets the acceptable level [66].
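The CR and AVE thresholds applied above follow standard closed-form formulas over standardized loadings: CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / n. A minimal sketch (the loadings below are hypothetical, not the study's estimates):

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l**2 for l in loadings)  # error variance per indicator
    return s**2 / (s**2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for one latent variable
lam = [0.82, 0.85, 0.88, 0.80]
print(round(composite_reliability(lam), 3))       # should exceed 0.70
print(round(average_variance_extracted(lam), 3))  # should exceed 0.50
```

With loadings above 0.80, both coefficients clear the thresholds the paper cites (CR > 0.70, AVE > 0.50).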
--- Structural model analysis The model's predictive accuracy was estimated based on the portion of explained variance (R²): the R² values for travel risk and management perceptions, risk management, service delivery, transportation patterns, distribution channels, avoidance of overpopulated destinations, and hygiene and safety were 0.628, 0.553, 0.521, 0.352, 0.668, 0.523, and 0.454 respectively. Based on [67], a non-parametric bootstrapping method was used to test the hypothesized relationships. The findings revealed that the Covid-19 pandemic has a significant impact on travel risk and management perceptions, and tourists' travel risk and management perception has a significant impact on risk management, service delivery, transportation patterns, distribution channels, avoidance of overpopulated destinations, and hygiene and safety; thus, hypotheses H1-H7 are accepted. The effect size was estimated using f² values. Cohen [68] reported that f² ≥ 0.02, f² ≥ 0.15, and f² ≥ 0.35 represent small, medium, and large effect sizes respectively. The findings revealed that hygiene and safety, transportation patterns, and avoidance of overpopulated destinations have a large effect size, whereas service delivery, risk management, and travel risk perception have a medium effect size, but distribution channels have a small effect size. The Q² values for travel risk and management, risk management, service delivery, transportation patterns, distribution channels, avoidance of overpopulated destinations, and hygiene and safety were all larger than zero [69], indicating the predictive relevance of the constructs. With respect to mediating effects, the findings revealed that travel risk and management perception mediates the effect of the Covid-19 pandemic on risk management, service delivery, transportation patterns, distribution channels, avoidance of overpopulated destinations, and hygiene and safety; therefore H8a-H8f are accepted.
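The f² effect sizes referenced above follow Cohen's formula: the change in R² when a predictor is omitted, scaled by the unexplained variance of the full model. A small sketch with hypothetical R² values (not the study's own estimates):

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f^2: change in R^2 when a predictor is omitted,
    divided by the unexplained variance of the full model."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def effect_size_label(f2: float) -> str:
    """Classify f^2 using Cohen's conventional thresholds."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

# Hypothetical R^2 with and without one predictor in the model
f2 = f_squared(0.553, 0.40)
print(round(f2, 3), effect_size_label(f2))
```

The same thresholds (0.02, 0.15, 0.35) drive the small/medium/large labels reported in the results.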
--- Discussion In this study, we aimed to evaluate the psychometric properties of the Covid-19 pandemic scale, a newly developed scale designed to measure international tourists' travel risk and management perceptions and their social outcomes. The results of the structural model assessment supported the hypothesized relationships, indicating that the Covid-19 pandemic is related to travel risk and management perceptions. This implies that, as the Covid-19 pandemic spread across the globe, the majority of countries set up short-term travel limits to control mass panic. A review of previous studies indicated a relationship between perceived risk for disease-related factors and the Covid-19 pandemic [13]. The present results identified that the Covid-19 pandemic has greatly affected risk management, service delivery, travel patterns, distribution channels, avoidance of overpopulated destinations, and hygiene and safety through tourists' travel risk and management perceptions. Tourists believe that the Covid-19 pandemic has heightened travel risk and management perceptions and reduced their travel plans to destinations. The data analysis specifies that tourists' travel risk and management perception is strongly associated with risk management. In service research within the Covid-19 pandemic context, risk management has been marked as a significant factor affecting an individual's belief about controlling the threats of a pandemic. A previous study [4] supported that tourists' behavior can inform risk management for destination infrastructure and medical facilities, destination image, and trip planning. The results highlight that travel risk perception is associated with service delivery. This finding is consistent with [70], which found a significant relationship between the Covid-19 pandemic and service delivery. Tourists can avoid eating and drinking in restaurants.
An alternative solution is for people to order delivery or takeout food to minimize interpersonal interaction. This study expands the existing knowledge by examining the effect of travel risk and management perception on travel patterns. This result is related to [4], which reported that changed travel patterns can lead to independent travel or small group tours, less group dining, the promotion of destinations experiencing under-tourism, and diversification such as novel outdoor activities, smart tourism, and nature-based travel. The findings indicated a positive association between travel risk and management perception and distribution channels. This suggests that distribution channels can encourage people towards nature-based travel and smart tourism to reduce travel risk and risk management perceptions during the Covid-19 pandemic. Some researchers have reported that people can use technology for travel-related purposes to reduce travel risk and risk management perceptions [9]. The empirical results indicated that tourists' travel risk and management perception is strongly associated with the avoidance of overpopulated destinations. Covid-19 spreads through human-to-human transmission; thus, avoidance of overcrowded destinations can be an alternative way to reduce infection [44]. Overcrowding at destinations can be minimized through the short-term strategy of imposing travel restrictions for certain attraction destinations. The data analysis points out that travel risk and management perception has a positive impact on hygiene and safety, which corresponds well with a previous study [4] indicating that travel risk and management perception has greatly affected tourists' travel decisions and their perceptions of hygiene and safety due to the spread of the Covid-19 epidemic.
In the context of service research, hygiene and safety judgments have been marked as an important construct affecting people's sense of safety and security towards a service firm, and customers' intention to purchase the goods and services offered by firms or service organizations. Tourists can purchase travel insurance when booking trips to confirm coverage in case of illness, including Covid-19. Potential tourists are typically interested in destinations' hygiene, safety, security, cleanliness, avoidance of population density, and medical facilities when deciding to travel to destinations. --- Implications The findings of this study indicate that Covid-19 has affected tourists' travel risk and management perceptions and, through them, risk management, service delivery, transportation patterns, distribution channels, avoidance of overpopulated destinations, and hygiene and safety. Tourists believe that the Covid-19 pandemic has created health anxiety and reduced their travel plans for destinations. These findings may help policy-makers and healthcare operators to manage maladaptive levels of concern about the Covid-19 pandemic, and to identify who is more inclined to react negatively to it. Health practitioners can improve educational interventions targeting international tourists at travel destinations. Tourists are worried about the effect of the Covid-19 pandemic on their travel activities and travel-related preferences in the post-pandemic period. Given the significant effect of the Covid-19 pandemic, this study contributes key insights to assist tourism policymakers and practitioners in developing effective strategies to enhance tourists' confidence after a health risk crisis and to address travel risk and management perceptions towards travel destinations. Travel movement has become more selective; therefore, independent travel and health tourism are crucial.
Tourists can take fewer trips but spend longer in their chosen destinations. These patterns will reduce the negative effects on the travel industry and lessen tourists' travel risk and management perceptions. Based on tourists' travel risk and management perceptions and travel recovery systems, travel attributes can shift in the present due to the spread of the Covid-19 epidemic. The disaster of the Covid-19 pandemic teaches us not to visit overpopulated destinations; for places suffering from overcrowding, there is a need to evaluate travel planning and improvement to ensure sustainability. As tourists prefer quiet destinations for their tourism activities due to the Covid-19 pandemic, the global travel and tourism industry could benefit by paying attention to these preferences. Due to these predicted changes in tourist behavior, the world tourism industry warrants close academic attention. The travel and tourism industry is a fundamental part of the global economy, responsible for a large number of jobs and billions of dollars in revenue. Therefore, travel and tourism industry practitioners and policymakers should re-evaluate tourists' behavior, travel industry policies and regulations, tourism operators' markets, and tourism product development to promote continuous sustainability. The existing global health crisis has had an unprecedented impact on the travel and tourism industry due to the spread of the Covid-19 pandemic. Tourists' travel risk and management perceptions and their impacts on the tourism market and society need thorough investigation to enable tourism industry experts and policymakers to build a more balanced industry. Tourists' travel risk and management perceptions in the tourism industry will likewise prompt the development of new tourism markets that academics and tourism operators can investigate together.
The findings of the present empirical study are likely to shape theories on tourists' travel risk and management perceptions, tourists' behavior, and marketing and management, both in the travel and tourism industry specifically and in broader fields in general. The spread of the Covid-19 outbreak has had critical effects on society and industry. Travel and tourism policymakers and academicians should consider this pandemic tragedy and how it will inform tourism industry practices. Potential tourists are concerned about how they travel to destinations; thus, tourism practitioners should consider strategies that mitigate the spread of a pandemic and public health crises, and devise a plan that brings positive changes to the travel industry following this pandemic. For example, tourists should be required to buy travel insurance when booking trips to guarantee coverage in case of sickness, including in the post-Covid period. Both international and domestic tourism need to stress safety and health measures, and any tourism activities that make tourists feel safer travelling to destinations and reduce their travel risk and management perceptions. The impact of the Covid-19 pandemic should be considered within the global community. The spread of the Covid-19 epidemic will have greater psychological, sociological and financial impacts if it is not eliminated quickly across the world. While society can recover effectively from financial disruption, including in global travel and tourism activities, following the Covid-19 pandemic, the sociological and psychological effects will be more persistent. People should navigate the current post-pandemic landscape cautiously and sympathetically.
--- Limitation and future study This study has several limitations despite its strengths, such as a large sample size and a relatively heterogeneous sample of international tourists who visited destinations for leisure/holiday or shopping, education/conference, healthcare, business and other purposes. This study used self-administered questionnaire measures, which entail potential bias, as participants might be influenced by social desirability. Therefore, future studies should aim to use other measures, such as the opinions of focus groups, which could support more in-depth analysis. This study employed a quantitative method that is inflexible towards participants' subjective views on the effect of the Covid-19 pandemic; thus, future studies are encouraged to gather qualitative assessments using in-depth interviews. The data were collected through an online platform, which is much easier for younger generations than for older ones, leading to a large proportion of younger participants. A limited number of items were used to evaluate the constructs of the conceptual model, and thus future studies should use a larger set of measurement items. The objective of this study mainly focuses on the impact of the Covid-19 pandemic on tourists' travel risk and management perceptions, to help the tourism industry develop coping strategies in the face of the tourism crisis. Thus, future studies should investigate the factors influencing tourists' travel risk attitudes and risk management perceptions during and after the Covid-19 epidemic. This might help tourism managers and practitioners pay attention to the control of the Covid-19 crisis and to a systematic management strategy to promote the development of the tourism industry. --- All relevant data are within the paper and supporting information files. --- Methodology: Md. Atikur Rahaman. Writing - original draft: Muhammad Khalilur Rahman.
Writing - review & editing: Md. Abu Issa Gazi.
This study explores the impact of the Covid-19 pandemic on tourists' travel risk and management perceptions. Motivated by the effects of the pandemic, we investigate tourists' travel risk and management perceptions and their effect on society using a sample of 716 respondents. The data were collected through social media platforms using a representative sampling method and analyzed using the PLS-SEM tool. The findings reveal that the Covid-19 pandemic has greatly affected travel risk and management perceptions. Travel risk and management perception had a significant association with risk management, service delivery, transportation patterns, distribution channels, avoidance of overpopulated destinations, and hygiene and safety. The results also identified the mediating effect of travel risk and management perceptions. The findings of this study contribute to the literature on tourism crises and provide insights for future research in the travel and tourism sector in response to changing tourists' travel risk and management perceptions in the post-Covid recovery period.
INTRODUCTION Underpinned by technology and science, the discipline of engineering has traditionally been recognized as a profession and industry contributing to economic development and the prosperity of nations. Recently, however, engineering has been alternatively defined as more than the mathematical and scientific resolution of problems, thus embracing innovation and concern for human welfare. This revised positioning of engineering acknowledges that with prosperity often come negative impacts of technological advancement, such as pollution, climate change and displacement of local communities. Applying scientific knowledge has enabled understanding of the world and domestication of landscapes, resulting in increased commerce, goods and food supplies. Still, unanticipated changes in ecosystems suggest that future scientific applications should domesticate nature more judiciously to balance tradeoffs between ecosystems and the provision of other services. While the technical focus of traditional engineering education has achieved strong competence among graduates, the development of a corresponding level of knowledge about the broader socioeconomic and environmental impacts of the engineering endeavour has been limited. Although some curriculum innovation in engineering has been noted, a popular approach to overcoming the gap between technical and broader social aspects of student competency has been to incorporate additional stand-alone subjects into dense course content. Such subjects equip students with a broad, but relatively shallow, understanding of the internal organisational and external societal consequences of industrial practice. There is thus a need for engineering education to develop technically competent graduate engineers who increasingly consider the diverse social and cultural needs of communities.
Bourn and Neal, for example, argue that higher education must shape engineering solutions to match social, economic, political and cultural landscapes and the impact that local action has on global communities. The aim of this paper is to contribute to the discussion of developing professional engineers who seek social justice through critical thinking and reflective action. We argue that this discussion is beneficial for reformulating engineering education and practice to solve major societal problems using technology in a humane and ethical manner. This shift for engineering is necessary in response to society's progressively more complex technical issues, which are linked to recent community concerns over issues such as sustainability, poverty and education. The present paper defines the concept of social justice prior to reviewing the development of engineering education and proposed reforms from professional bodies. Drawing from theory in the humanities, the paper turns its attention to the broadened notion of literacy and how it relates to engineering education. Proposing new literacies for social justice, the paper presents a pedagogy of multiliteracies. Using empirical studies from the fields of scientific literacies and mechanical engineering, recommendations relating to the multiliteracies aspects of critical framing and transformed practice are highlighted. To deepen the critical and transformative aspects of multiliteracies for the field of engineering education, liberative pedagogy is integrated with a pedagogy of multiliteracies, suggesting an emerging model for educating reflective and critical engineers. --- DEFINING SOCIAL JUSTICE: LINKS TO ENGINEERING EDUCATION According to Adams, Bell and Griffin, social justice is an essential educational goal, aiming for learners to understand social differences and oppression in personal lives and at the societal level. However, Rountree and Pomeroy argue that the term is fraught with contested definitions.
Popular since the Industrial Revolution, the term social justice is often used to debate the relationship between ruling classes and the new urban poor. Social justice commonly refers to the manifestation of human rights in the everyday lives of individuals at all levels of society. Thus, social justice is linked to environmental justice, which is commonly defined as the fair treatment and meaningful engagement of all individuals in developing and implementing environmental laws and policies. Paavola and Adger describe two broad approaches to justice. The first, or cosmopolitan approach, views justice as universal or unchanged by time and place. The second, communitarian, considers justice to emerge from relationships between members of communities, which are specific to a particular time and place. One advantage of the communitarian approach, argue Paavola and Adger, is that it facilitates our understanding of the diverse ways in which justice is addressed in communities. However, the communitarian approach has been criticized due to multiple interpretations of the term community. For example, the community for environmental justice has often been described as all human beings who will be affected by global climate policy and practices. Riley describes social justice as a familiar and elusive term in relation to engineering education; social justice is linked to normative perceptions of truth and fairness and how these concepts should be applied in society. Reminiscent of the work of Riley, the present paper suggests that for engineering, it is important to consider the differences between distributive and procedural justice. Distributive justice points to benefits and costs, broadly encompassing financial profits and burdens. Procedural justice includes the way in which parties are positioned for planning and decision-making and issues such as participation, recognition and distribution of power.
Paavola and Adger note that distributive and procedural justice are interrelated; if a group cannot participate in planning and decision-making, its interests are unlikely to inform social, political or financial actions, which can aggravate, rather than reduce, inequality. Therefore, the present paper argues for a holistic perspective which considers multiple aspects of social justice for engineering, such as oppression, exploitation, marginalization, powerlessness and cultural imperialism. In light of Young's diverse aspects, we define social justice by drawing on Riley, who suggests that social justice is ever-changing and:
• grounded in context, place and time;
• developed on both individual and community levels;
• intended to achieve equality and respect the human rights of all people;
• embedded in balancing the relationship between humankind, the environment and animals.
While communities have benefited greatly from engineering, its impacts may increase the gap between social classes and damage the world's environmental health. As these trends amplify in capitalist societies, we argue that it is compelling to contemplate a new approach to engineering education, both in curriculum and pedagogy. This emerging approach to engineering calls for a profession that serves humanity in pursuit of social justice, rather than serving an increasingly corporate culture, driven predominantly by profit. Baillie and Catalano argue that to promote fairness and social justice, engineering students must develop complex insights to respond appropriately to communities while acknowledging that the profession can influence societies' lifestyles and ecosystems. In describing issues of justice, we contend that engineering must account for procedural justice linked to design, technical knowledge, benefits and costs; but engineering must also consider the complexity of human experiences and the distribution of power.
In this vein, distributive justice must be connected to the raison d'être of engineering.
--- DEVELOPMENT OF ENGINEERING EDUCATION: POSITIVISM, SPECIALISATION, PROFESSIONALISM AND REFORM
With a desire to improve technology and use it effectively in society, engineering education has its origins in a positivist approach to science. The positivist approach defines scientific activity as objective and focussed on solving technical problems. Preceded by the artistry and skilled trades of the Renaissance, engineering approaches of the Industrial Revolution began to seek systematic explanations to practical problems. By the nineteenth century, trial and error experimentation was gradually replaced with an academic program aiming for strong theoretical grounding in sciences and mathematics. Emphasis on the mathematical and scientific specialisation of engineering continued through the twentieth century. Combined with mathematical and scientific theory, this perspective of technical expertise saw the American engineering profession develop broadly in two ways: one which tied engineers intricately to business and one which encouraged autonomy. Denouncing industrial waste and working conditions in the engineering profession, the progressive movement, argued Layton, peaked after World War I. Since then, engineering has often been linked to dilemmas of social justice created by competing factors, such as technical specialisation, professional autonomy, accountability to employers and social responsibility. Since the Second World War, technical specialisation has increased via numerous sub-disciplines of core engineering areas, such as Mechanical, Civil, Chemical, Electrical and Mining. While specialisation has resulted in spectacular expansion of technical knowledge, the impact for engineering education has been a dense technical curriculum, which is often disjointed from issues of social justice.
Emphasis on theoretical knowledge has also been accentuated by an academic culture that understands its core endeavours as research and teaching. Of these core functions, the development of new technical knowledge through research is often more highly rewarded, leaving academics few career incentives for pedagogical innovation. This has reached the point where many undergraduate engineering students experience much of their programs as a series of disconnected theoretical subjects with high levels of contact hours, study load and examinations, which are segregated from social and environmental realities. Recent research suggests that despite curriculum reform in some engineering schools to deliver a broader range of graduate attributes, more work is needed to examine this cultural shift from a systemic and holistic perspective. If social justice is to flourish in engineering, developing a profound sense of ethical responsibility is necessary. Baillie and Catalano described such an ethical position as morally deep, where professional fulfilment "holds paramount the safety, health and welfare of identified integral communities". Inspired by the work of Baillie and Catalano, which demonstrates the gap between ethical rhetoric and the engineering education reality, we argue for a thorough consideration of engineering ethics. From this viewpoint, there is concern that a technically driven curriculum may link ethics with environmental and social justice merely at the descriptive level of tutorial activities for industrial management or professional engineering subjects. With codes of conduct and accreditation processes, professional engineering associations are influential in guiding contemporary higher education direction. For public recognition of graduates and recruitment of students, engineering faculties require acceptance by professional accreditation bodies.
The Australian professional association, Engineers Australia, offers graduates direct membership in return for the right to accredit degree programs. Accreditation is based upon Stage 1 Competency Standards that outline general requirements for the graduate Knowledge Base, Engineering Ability and Professional Attributes. The competencies are listed with specific Engineering Ability and Professional Attributes. These competencies acknowledge the importance of educational areas related to engineering and social justice, such as "understanding of social, cultural, global and environmental responsibilities and the need to employ principles of sustainable development", "understanding of professional and ethical responsibilities and commitment to them" and "general knowledge". However, a majority of competencies emphasise the demonstrable acquisition of technical knowledge and skills (PE 1.1-1.3, 2.1, 2.3-2.6). Despite engineers' social responsibility to "optimise social, environmental and economic outcomes", the role of the engineer is generally described as technical. On this point, the professional association recognises the need for engineers to play a broader social and environmental role. Yet the emphasis on technical competencies sends a powerful message to engineering educators, highlighting the primacy of technical knowledge in a crowded curriculum. This conundrum is inherent in the report for the Australian Council of Engineering Deans, where the educational heritage of engineering is overlaid with a desire for curriculum reform to better serve the needs of society. King praised initiatives in curriculum inclusivity and multi-disciplinarity to advance the understanding of environmental and social justice in engineering. The King report acknowledges the international interest in changing engineering education, which is reflected in the Michigan Millennium project and the Carnegie Foundation reports.
The King report also calls for curriculum development positioned around problem solving, application and practice to address contemporary social and environmental issues. Unfortunately, models and/or action strategies to advance such calls are absent beyond the dissemination of identified good practice. Duderstadt proposes the way forward for engineering education as increasing the professionalisation of course offerings. Students would take a more generalist undergraduate curriculum prior to engaging in postgraduate engineering experiences. The call for increased professional emphasis has historical antecedents in the Flexner report of 1910, with its desire to emulate education initiatives in medicine to enhance the standing of engineers in the USA. While such professionalisation adds potential breadth to engineering education, we contend that it does not guarantee a re-structuring of engineering curriculum and pedagogy to profoundly engage with the complex human and global issues of environmental and social justice. Considering the future for engineering education, the Carnegie Foundation report recommends four principles: 1) Provision of a professional "spine;" 2) Teaching key concepts for use and connection; 3) Integration of identity, knowledge and skills through approximations of practice; 4) Placing engineering in the world. We agree that these principles offer the potential for students to be viewed as actively engaged professionals-in-training. However, the realisation of such principles requires engineering educators to substantially renew pedagogy and curriculum to better integrate technical considerations with the skills required of professional practice challenged by contemporary social and environmental contexts. Therefore, we suggest this stance also requires a concept of professionalism to enhance the ethical understandings and motivations of students to go beyond the commercial aspects of engineering practice.
--- ENGINEERING AND THE HUMANITIES: LITERACY, LITERACIES AND MULTILITERACIES
The shift towards educational reform which considers social context and individual cognition is not unique to engineering. For several decades, literacy debates in the humanities have raged over the teaching of reading and writing to children. Heated discussions about which "basics" to teach in primary and secondary schools are drawn out in public and political arenas. Since the 1990s, researchers have repeatedly called for expanded ways of interpreting literacy to go beyond an industrial model of schooling that ignores students' social contexts. To emphasize that literacy practices vary across sociocultural and historical contexts, researchers have increasingly replaced the term literacy with literacies to signify more than a static set of reading and writing skills. Hence, this paper adopts a definition of literacies as complex practices, which unfold dynamically through social interaction across diverse communities. From this viewpoint, students whose home literacies differ from schools' prescribed language and behavioural routines can be disadvantaged. In engineering education, the concept of literacy has also evolved from a focus on transmitting basic skills about reading and writing to a more student-centred approach, developing effective learning strategies. Since the 1990s, assisting engineering students to develop information literacy skills has emerged as a popular theme, especially for first year cohorts. Some engineering educators have highlighted a more contemporary concept of literacy that includes strands of reading, writing, speaking, listening and viewing. This approach emphasizes the breadth of literacies needed from multidisciplinary fields for engineering students to communicate critically in modern, professional and ethical contexts.
More recently, Archer adopted a social-critical concept of literacies to explore how engineering students from disadvantaged backgrounds in South Africa negotiated their identities through projects involving writing about power, housing and water in rural settlements. In this instance, the social-critical approach pursued a central aim: to provide students access to dominant forms of writing and established protocols in the discipline of engineering while simultaneously validating students' own literacy practices and resources. Harran defined these contemporary concepts of literacies as extending beyond technology or a transferable set of cognitive skills. Reading, writing, speaking, listening and viewing are viewed as literacy practices evolving in individual, social and political worlds. It has also been argued that today's literacies must draw on digital environments to encompass reading and interpreting media for designing, applying and evaluating new knowledge. Specifically, Brown et al. described multimodal texts as involving a range of literacies allowing students to use, communicate with and critically evaluate information from a wide variety of media forms. To reflect the changing conceptions of leading human lives and engaging with literacy practices in private, public and personal spheres, the New London Group argued for a new pedagogy of literacy. The New London Group coined the expression multiliteracies to propose a pedagogy, which emphasizes learners' meaning-making within the multiplicity of shifting communications and rapidly changing and culturally diverse worlds. This pedagogy draws on social constructivist learning principles, whereby the student is viewed as an active constructor of knowledge through enquiry and is scaffolded or assisted by more experienced peers and/or the teacher.
More particularly, multiliteracies provide a social-critical orientation to learning that encourages debate beyond pedagogy and curriculum to examine social changes, especially related to computer-mediated communication. Borsheim, Merritt and Reed concur that multiliteracies have deepened and broadened the concept of a literate person; a multiliterate person is flexible and strategic and uses literacy practices with diverse texts in socially responsible ways. With group members from linguistics, sociology of education, cultural studies and related fields, the New London Group aimed to celebrate a pluralistic society where education would promote social justice. Notwithstanding multiliteracies' international prominence, Mills argued that the "how to" of the pedagogy has been received with enthusiasm and reservations, highlighting that issues of power and ideology should not be disregarded. This is fundamental as, historically, literacy has often been used as a tool to reconstruct the social order of the status quo. Still, Cole and Pullen contend that multiliteracies have been successful on many fronts, particularly in classrooms where literacy instruction has expanded beyond the dualism of reading and writing printed texts. Multiliteracies have also broadened the representation of language from simple sound-letter correspondence to include audio, visual, gestural and spatial design. The New London Group acknowledge that translating a theory of pedagogy into practice offers no panacea of miracle recipes. In this vein, the authors stipulate that educational reformists must clearly state their ideological assumptions about learning. To navigate a new literacy landscape and aim for socially just learning opportunities, a pedagogy of multiliteracies assumes that human knowledge is constructed collaboratively in communities across social, cultural and material contexts.
From initial social interactions involving diverse skills, backgrounds and perspectives, abstractions can be developed. Therefore, we could argue that the process of becoming literate informs the development of graduate engineers; whilst both processes are technical, they are highly complex and draw on social, cultural and historical factors. In this sense, a pedagogy of multiliteracies is underpinned by four key components, which occur simultaneously and repeatedly in complex ways:
• Situated practice: The learner is immersed in literacies situated in or similar to real-life worlds. The use and function of situated practice draw on the knowledge of experts or novice experts.
• Overt instruction: A teacher or more experienced learner actively intervenes to scaffold students in a conscious and systematic way through design elements.
• Critical framing: The learner views knowledge through a critical frame and learns to "read between the lines," taking context into consideration.
• Transformed practice: The learner transfers or redesigns an existing practice into new contexts or cultural sites.
Acknowledging the importance of immersing all learners, including engineers and future engineers, in meaningful experiences, we are inspired by the New London Group, whose work called for effective pedagogy to move beyond situated practice to incorporate two critical elements: the conscious control and awareness of the parts of a system and the ability to critique a system as it relates to ideology and power. From this perspective, curriculum reform is inextricably linked to contending and related fields of power, which can promote the status quo. Still, for Cole and Pullen, the pedagogical power of multiliteracies is found in its ability to connect young people online via changing social practices. Globally, for example, a large number of youths can now informally learn a plethora of new skills in virtual worlds involving role-play and imaginative scenarios.
Moreover, Pilay suggests that embracing multiliteracies is linked to developing socially just learning environments; by celebrating students' diverse literacy practices, educators in higher education can facilitate the dismantling of a "white men's club" ethos, which empowers the privileged. In this respect, the celebration of diversity, which is characteristic of a multiliteracies approach, appears to be highly relevant for the discipline of engineering, which has generally been associated with training white privileged males.
--- A PEDAGOGY OF MULTILITERACIES: FROM THEORY TO ACTION
Since its creation, the New London Group's concept of multiliteracies has evolved across many disciplines, such as geography, history, economics and science. To promote reflection about new literacies in engineering, Exley's science-based research provides a useful example of a pedagogy of multiliteracies in action. Conducted in a Year 5 classroom, the Land Environments Board Game Project aimed to promote scientific literacies with students producing three-dimensional board games about land forms, such as coasts, arid interiors, wetlands and woodlands. The strengths of Exley's project lie in situated practice and overt instruction. Successfully drawing on students' current understandings about land forms, the teacher used situated practice with multi-media texts to engage student interest. An excursion into the real world provided concrete examples from which students could gather data. In school, the teacher exploited board games, often considered popular cultural texts, so that students could examine and produce procedural texts. Overt instruction for student learning was scaffolded as the teacher, Olivia, guided explicit discussions about language. Collaboratively examining features of procedural texts while engaging in inquiry-based learning, students merged technical and conversational language.
Thus, the derived sense of literacies, often referred to as being knowledgeable in science, was balanced with a fundamental sense of literacies, defined as understanding science-specific texts. Whilst critical framing was developed as students negotiated assessment criteria and evaluated board games, the critical aspect of scientific literacies could be extended by inviting students to take digital photos of land forms. To move beyond categorization, open-ended questions related to social and environmental justice could be presented. For example: How might weather impact on these land forms? How might humans change these land forms? How might industries change these land forms? Who might profit from these changes? Opportunities for students to research and design questions using print and multi-modal texts could provide extension. While Year 5 students produced board games for their classroom, to extend this transformed practice, the games could be played in diverse settings, such as another classroom, or at home. Discussions could take place about the need to design games in light of purpose and audience. For students whose families use English as an additional language, issues such as translation and cultural interpretation of texts could be broached. Extension activities might include opportunities for students to adapt their games for online settings. Reflection about the collaborative process of learning, as well as an interrogation of whose voices are privileged, could take place in multi-modal formats. Whilst Exley's study was conducted with primary school students to develop basic scientific literacies, the work is clearly underpinned by a pedagogy of multiliteracies, and could promote alternative perspectives to the traditional science curriculum, which ultimately creates student pathways to higher-level courses associated with engineering.
Whilst a pedagogy of multiliteracies has not yet been readily applied to tertiary education, particularly in engineering, innovative research involving teaching with online gaming in an American undergraduate engineering program illustrates efforts of curriculum reform. Whilst their research is not formally buttressed by a pedagogy of multiliteracies, Coller and Coller and Scott used a video game based on simulated cars to re-design a core course in mechanical engineering. The authors concluded that when compared with students taking the traditional lecture/textbook-based numerical methods course, participants in the re-designed course spent approximately twice as much time outside of class on their studies. In a concept-mapping activity, these same students also demonstrated deeper learning when compared to their counterparts taking the traditional course. The multiliteracies aspects of this research can be interpreted as situated practice and overt instruction. Based on the constructivist principles of drawing on learners' worlds, the New London Group's situated practice resonates with using new literacies, such as the Internet and other Information and Communication Technologies; such skills are often associated with significant criteria for leading an engaged life in the 21st century. For example, video games provide opportunities for learners to be immersed in simulated worlds where they face open-ended challenges similar to professional settings. In addition, Coller's and Coller and Scott's use of video games highlights the multiliteracies aspect of overt instruction in engineering methods, such as solving systems of linear algebraic equations, learning computational techniques and writing computer codes. Aligned with the work of Gee, Coller and Scott argue that video games provide a series of progressively challenging tasks, embedded within design.
With copious and immediate feedback built into design, discovery is encouraged in a cyclical process of hypothesis and metacognition. Consequently, scaffolding is embedded in the gaming environment to guide learners through complex problem solving and open-ended tasks. With principles such as growing mastery at graded levels, Gee argued that this environment inspired motivation by signalling the learner's on-going achievements. Therefore, the scaffolding encourages the learner to successfully operate on the outer edge of their intellectual resources. The research of Coller and Coller and Scott demonstrates how a car racing video game can scaffold engineering students to accomplish open-ended tasks and overcome technical challenges resembling those of the professional world. Rigorous engineering problems relating to transmission, tire mechanics, suspension, etc. can be made accessible to students. However, aiming for balance in a multiliteracies framework, the aspects of critical framing and transformed practice could be extended. NIU-Torcs, the video game used by Coller and Scott, resonates with commercial video games, such as Need for Speed and Gran Turismo. Generally, these types of exceptionally popular global video games promote competition between players using high levels of speed, dangerous pursuits and sports cars, associated with brands such as Lamborghini, Porsche and Ferrari. Therefore, critical discussions could be raised about the social, historical and cultural factors linked to professional car racing, such as corporate sponsorship, exclusive car brands and entertainment practices such as female swimsuit modelling. In terms of transformed practice, when mentoring high school science students, engineering students could also design experiments to explicate the impact of speed on the seriousness of car accidents.
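The mentoring activity suggested above can be grounded in a simple worked example. The sketch below is illustrative only (it is not from the source studies): it applies elementary point-mass kinematics with an assumed tyre-road friction coefficient to show how modestly higher speeds translate into sharply higher impact speeds, the kind of result engineering students could unpack with high school mentees.

```python
# Illustrative sketch (assumed values, not from the source):
# how speed affects braking distance and crash severity,
# modelled as a point mass braking on dry asphalt.

MU = 0.7   # assumed tyre-road friction coefficient (dry asphalt)
G = 9.81   # gravitational acceleration, m/s^2

def braking_distance_m(speed_kmh: float) -> float:
    """Distance needed to brake to a full stop: d = v^2 / (2 * mu * g)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v ** 2 / (2 * MU * G)

def impact_speed_kmh(speed_kmh: float, obstacle_distance_m: float) -> float:
    """Residual speed when reaching an obstacle at the given distance,
    braking all the way; zero if the vehicle stops in time."""
    v = speed_kmh / 3.6
    v_impact_sq = v ** 2 - 2 * MU * G * obstacle_distance_m
    return max(v_impact_sq, 0.0) ** 0.5 * 3.6

# An obstacle 20 m ahead: at 50 km/h the car stops in time,
# at 70 km/h it still hits the obstacle at over 35 km/h.
for s in (50, 60, 70):
    print(s, round(braking_distance_m(s), 1), round(impact_speed_kmh(s, 20), 1))
```

Because braking distance grows with the square of speed, the non-linearity is stark even in this toy model, which makes it a useful conversation piece for the transformed-practice activity described above.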
--- MULTILITERACIES AND LIBERATIVE PEDAGOGY: AN EMERGING MODEL FOR ENGINEERING EDUCATION
Applying a pedagogy of multiliteracies to the work of Coller and Coller and Scott suggests new literacies to embrace the technical and social aspects of engineering education. However, to deepen the critical and transformative aspects of such engineering literacies, inspiration may be drawn from the framework of liberative pedagogy. The most celebrated writer from this movement, Freire, urged educators to reflect on the dynamic relationship between teaching, learning and the construction of knowledge. An awareness of these dialectics leads to conscientização, a process of learning about sociopolitical contradictions and taking stances against oppression. Freire distinguished between traditional and alternative approaches to education. Banking education was viewed as the hierarchical transmission of knowledge from teacher to learner. Problem-posing education, in contrast, was defined as learning within a given learning community, negotiated through social interaction and dialogue. For Freire, the task of the teacher is to critique bureaucratization; teaching requires rigorous scientific, emotional and affective preparation. Rather than follow prescriptive methods, teachers, who are also learners, must guide their students through engagement and liberation via a range of literacy activities. Recently, numerous scholars in engineering education have reiterated the call for a shift towards liberative pedagogy, which includes social justice and the elimination of oppression based on race, gender and class. To enable engineering students to develop the ability to critique from a social justice perspective, this shift would need to counter many taken-for-granted assumptions in the profession, such as the belief that Western-style industrialization leads to positive outcomes for all communities of the world. Riley argues that praxis is a key concept for linking engineering and social justice.
Praxis is regarded as a form of reflective action in which theory and practice are integrated; one does not lead the other and both are developed simultaneously, through each other. Referring to Freire's concept of reflective action, Riley suggests that praxis tells us much about the emergence of an engineering problem, the expertise of engineers and extent of community engagement. Praxis must also be considered in the context of ethics for engineers. Whilst Riley refers to doing the "right" thing in response to an engineering problem, Catalano offers ethical questions such as: "Who is included in engineering discussions?" "Which groups are missing from the discussion table?" "What are the global consequences?" "What are the ethical considerations?" As engineers build relationships through reflective action, communication and a commitment to communities, ethics merge with praxis. In this cyclical process, an engineer committed to social justice can therefore be viewed as reflective and critical. Drawing on recent work incorporating liberative pedagogy in engineering education and a pedagogy of multiliteracies, Figure 1 below presents an emerging model for educating reflective and critical engineers. The peripheral circle of the diagram represents the holistic nature of the model, emphasizing the broad aim of social justice for engineering education. The second concentric circle represents multimodal texts, illuminating the flexibility of learners to use a range of texts, with multiple modes of representation, including digital and technological. The third concentric circle emphasizes the range of new literacy strands, such as spatial, audio, gestural, linguistic and visual, which can be used in the physical design of engineering objects or systems. Interrelated and overlapping within the next concentric circle are the multiliteracies aspects of situated practice, overt instruction, critical framing and transformed practice.
These aspects link problem-solving to constructivist theory, through processes such as scaffolding, whereby a more experienced learner guides a less accomplished learner, or a teacher draws on life-like experiences to design learner-centred tasks. Critical framing allows the learner to ask questions such as: Whose point of view is being privileged? Whose point of view is missing? Transformed practices afford learners the opportunities to adapt their products or designs to alternative settings or audiences. At the heart of the diagram sits praxis, representing fusion between theory, reflection and action. Aiming for social justice, Freire argued that praxis involves dialogue between students and teachers and leads to concerted action for assisting local populations. Integrating procedural and distributive justice and a shift away from scientific positivism, the multiliteracies and liberative pedagogies model can tentatively be applied as an example to the curriculum of mining engineering. Generally, it can be argued that undergraduate mining engineering assignments accentuate the mathematical and scientific reasoning associated with the techniques of mine design and planning. For example, students are expected to conceptualize the design of stopes, which involve large-scale excavations to remove the ore. Students are also expected to calculate the projected tonnes to be mined from the orebody. Mining methods are justified in terms of productivity, safety and cost-effectiveness. If the emerging model were integrated into such technical assignments, critical framing could help contextualize mine design from the beginning of the mine's life to consider social issues related to exploration. The voices of multiple stakeholders could be analysed via multimodal texts and open-ended questions such as: Who owns the prospected land? Depending on location, how is ownership determined?
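The projected-tonnes calculation mentioned above is, at its core, a volume-times-density estimate adjusted for mining losses. The sketch below is a hypothetical illustration of that routine computation (the function name, stope dimensions, density, recovery and dilution figures are all assumed for the example, not drawn from the source).

```python
# Hypothetical sketch of the tonnage estimate mining students
# perform when planning a stope. All figures are illustrative.

def projected_tonnes(volume_m3: float, density_t_per_m3: float,
                     recovery: float = 1.0, dilution: float = 0.0) -> float:
    """Projected tonnes mined = volume x in-situ density,
    adjusted for mining recovery and waste-rock dilution."""
    in_situ_tonnes = volume_m3 * density_t_per_m3
    return in_situ_tonnes * recovery * (1.0 + dilution)

# Example: a 40 m x 20 m x 30 m stope of ore at 2.8 t/m^3,
# assuming 95% mining recovery and 10% dilution.
stope_volume = 40 * 20 * 30  # 24,000 m^3
print(projected_tonnes(stope_volume, 2.8, recovery=0.95, dilution=0.10))
```

In the spirit of the emerging model, such a calculation need not stand alone: the same assignment could ask whose land the stope sits beneath and who bears the costs that the tonnage figure does not capture.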
Situated practice could be incorporated via incursions or excursions whereby guest speakers introduce case studies highlighting leading practice used to manage all stages of a mine's life, including sustainable development and mine rehabilitation . Overt instruction could be embedded in multimodal tasks to better understand the range of professional engineering texts required for diverse audiences and purposes. With computer-based simulated programs, various stages of a mine's life could be described, with closed and open-ended tasks used to promote students' mathematical, scientific, creative and critical reasoning. Finally, to promote reflective practice about their learning, students could be scaffolded regularly via online blogs, iPad note taking and concept maps . A further example pertaining to civil engineering is given to illustrate how the emerging multiliteracies and liberative pedagogies model could be applied. In traffic engineering, for instance, as modern roundabouts appear with increasing frequency in North America, complex social and technical challenges unfold. Differentiated from rotaries or non-conforming traffic circles, which have been used over many years, a modern roundabout is defined as a circular road junction with a central island around which traffic moves in one direction . The modern roundabout conforms to two basic design principles: yield at entry, giving right of way to vehicles in the circulatory roadway and deflection of entering traffic, due to the absence of nontangential entries and a central island, forcing lower entry speeds . Originally developed in the UK, modern roundabouts have often been recognized as providing potential benefits such as increased traffic flow and fewer crashes compared to signalled intersections . 
On the other hand, concerns have been raised that roundabouts are not accessible to low-vision and blind pedestrians and multi-lane roundabouts may actually increase safety risks for cyclists and pedestrians, especially those who may be vulnerable, such as the disabled . Increasingly, traffic engineers are confronted with competing opinions from those responsible for addressing communities' traffic flow and stakeholders who are protected under the Americans with Disabilities Act . By applying the multiliteracies and liberative pedagogies model to traffic engineering studies, social aspects of selecting roundabouts could be integrated with technical design through situated practice. This process might involve engineering students liaising with shire councils to determine local policies, traffic and environmental regulations and community consultation. To complement technical aspects of design such as geometry, vehicle capacity and space feasibility, engineering students could use mobile technologies to interview targeted community members about perceptions and experiences of using roundabouts. Scaffolded by postgraduate students and/or academics in the humanities, engineering students could be introduced to discourse analysis to explore issues of power in interview transcripts; such discussions could lead to overt instruction about language and heightened awareness of effectively communicating to audiences. Drawing on interactive resources from private industry , students could develop educational resources incorporating audio, spatial, linguistic and visual design with virtual reality simulation for drivers and pedestrians navigating roundabouts. As transformed practice, these resources could be modelled in local educational institutions to scaffold secondary or primary school students. Engineering lecturers could also promote online learning techniques such as wikis, aiming for collaborative reflection . 
Building on praxis, these techniques could be incorporated into assessment to develop critical awareness about ethical dilemmas which engineers face and how these situations might compare to those of other professionals. --- CONCLUDING REMARKS Aiming to contribute to the discussion of developing professional engineers who possess highly proficient technical skills and seek social justice through reflective practice, this paper proposes new literacies for engineering students. Inspired by humanities-based theories and empirical studies in science education and engineering, these literacies are represented by an emerging model which integrates recent work surrounding liberative pedagogy in engineering education and a pedagogy of multiliteracies. Far from offering an all-encompassing solution to meet the curricular and pedagogical requirements of undergraduate engineering programs and professional accreditation standards, this model tentatively explores a holistic approach integrating elements of design, multimodal texts, authentic learning and critical thinking. The nucleus of the model lies in Freire's notion of praxis, viewed as an iterative process in which engineering students seek harmony between design, technical problem-solving, and commitment to humanity. As Freire recognized, advocating for social justice in engineering education will always be fraught with tensions, dilemmas and struggles of power. But these efforts for a fairer and more just world must be sustained through innovation borne of reflective practice.
This paper argues for the need to develop engineering students with high levels of technical competency as well as critical awareness for the realities of working and living ethically in the global community. Drawing on social constructivist principles of learning (Vygotsky, 1978) and a pedagogy of multiliteracies (New London Group, 1996, 2000), the paper explores new approaches for engineering education to meet the challenges embedded in current undergraduate programs and professional accreditation standards. To improve the ability of engineers to contribute to social and environmental justice, there needs to be a rethinking of engineering curriculum and pedagogy to develop engineering literacies that encompass a social and technical focus.
INTRODUCTION Universities and colleges aim to develop and nurture students. Their main functions are to provide quality education and significantly contribute to society. The first function is obvious; they should educate students and prepare them to play a part in their respective organizations. The second function of universities is to make meaningful contributions to society by creating new knowledge. That is a key and vital part of any research-intensive university. In the Philippines, the Commission on Higher Education (CHED) has emphasized the importance of research in higher educational institutions. Research has been a big part of the criteria to become a center of excellence and a center for development, and has also become one of the significant standards in accreditations and certifications. CHED encourages faculty members to produce substantial, high-quality studies and innovative papers. Studies suggest that institutional status and output contribute to benchmarking any institution's research proliferation. Shamai and Kifir assert that for a university to be worthy of its name, it must spread research and research culture, which upholds its "formal and substantive right to be the gatekeeper." Growth in research publication has become a guarantee of stature and a significant institutional ranking. Research production and outputs are used to promote faculty members and lift the scale and reputation of universities. An increase in a university's reputation and world ranking will, in turn, increase student enrollment and attract more generous grants from government agencies and the private sector. --- LITERATURE REVIEW --- Research Culture The kind of environment that spearheads research productivity among university teachers has been the focus of studies about research culture. There are 12 identified factors present in excellent research environments.
These are clear goals for the coordinator, research emphasis, distinctive culture, positive group climate, decentralized organization, participative governance, frequent communication, resources, group age, size and diversity, appropriate rewards, recruitment emphasis, and leadership with both research skill and management practice. In a study by Clemena et al., faculty members of one higher education institution did not believe that aspects of research culture such as the impact of research, inter-institutional collaboration, institutional research strategy, infrastructure, the presence of ethical policies, and the availability of external and internal research funding were adequately in place. Findings suggest that nurturing a research culture should be taken earnestly with the help of HEIs, the researchers' minds, and the institutional policy body. On the other hand, in a study by Iqbal et al., research culture can be credited to the values and ideas researchers use to process research-related problems. It was also found that institutional and personal factors were seen as relatively more influential in advancing research culture than environmental factors. An example of an institutional factor is the communication system. In Lodhi's study, faculty members believed that their University's top management could not spread information in time about upcoming training and research opportunities because of the slow communication system; they said that they received the news after the deadline. Lodhi also found that schools' existing structure was more supportive of teaching than of research activities. The same study shows that most faculty members spend their time teaching rather than researching, and that almost all teachers expressed their lack of knowledge of qualitative research. Alarmingly, the majority of them claim not to update their analyses.
A study by Mendez found that the determinants of the University's research structure are generally inclined to the faculty members' interests. It is also critical to note that the University was observed to favor quantity over quality: the Administration's way of introducing the new culture of research in the University is to impose a publication quota. Most professors who are unfamiliar with the research culture and were traditionally focused on teaching are neither interested in research nor equipped with research skills. Scott drew from the diverse fieldwork experiences of three non-Vietnamese doctoral students in rural and urban settings, with communities and central, provincial, and local government agencies. From their research sites in local villages, enterprises, offices, and archives in regional centers and cities, they emphasize many aspects of the changing academic cultures in the context of the broader reshaping of economic and political relationships in Vietnam. Opportunities for foreign scholars to collaborate with Vietnamese researchers on participatory research are constrained by institutional, epistemological, and professional barriers to adopting new practices and perspectives. Utilizing multiple techniques to identify this disparity is thus extremely beneficial. Similarly, using official channels to gain access to communities and information in Vietnam is invaluable and, in many instances, unavoidable. In a study conducted by Singh, it was concluded that research culture is significant for schools or universities specifically established for research activity, as well as in all educational settings. To establish a superior and high-quality education, research must be conducted. In China, many researchers spend too much time cultivating relationships when they should devote more time to attending conferences, conducting research, or instructing students. Others feel they must be more easygoing to be noticed in their organizations.
Some become part of the problem by evaluating grant applicants through their associations and undervaluing research validity. According to Shamai and Kifir, in order for a higher education institution to be considered worthy of its name, it must propagate research and a research culture that maintains its "formal and substantive right to be the gatekeeper." The term "culture" in the research context refers to the behaviors that professors and other academic staff members are expected to exhibit to integrate successfully and live up to the standards of the academic community. Participants in this research study regarded the research culture in three ways: constructive culture, passive and defensive culture, and aggressive and defensive culture. Constructive cultures are defined by norms for achievement, self-actualizing, humanistic-encouraging, and affiliative behaviors, which urge members to engage with others and approach activities in ways that will help them realize their higher-order satisfaction needs. Cultures classified as passive or defensive tend to be characterized by approval norms, conventional norms, dependent norms, and avoidance norms; these norms encourage or implicitly compel individuals to engage with others in ways that will not compromise their personal safety. Oppositional, power-oriented, competitive, and perfectionist norms characterize cultures classified as aggressive and defensive; these norms motivate individuals to approach activities forcefully in order to maintain their status and sense of safety. Abramo et al. pointed out the critical factors that should be considered in determining research productivity, such as impact, the intensity of the field of science, citations, and the number of co-authors.
The research cited and discussed different, widely-used indicators, such as the new crown indicator, the CWTS method, SCImago Institutions, the Normalized Impact, and more. It further explains that most of the widely-used indicators present two limitations: the lack of normalization of the output value to the input value, and the classification of scientists with respect to their field of research. The researchers recommend the closest measure of productivity, the FSS, which considers both the quality and quantity of production. They also call on institutions and scholars in the field to focus on developing the FSS indicator to be better fitted to research productivity than to microeconomics, and to refrain from using invalid indicators, no matter how widely used. On the other hand, Ndege suggested that research productivity is influenced by three pertinent factors: personal, institutional, and standard human capital factors. The researchers claim that investment in these factors would significantly affect the country's research productivity levels. In a study by Hadjinicola, he found that external funding results in more high-quality research. The research must be perceived as relevant and significant to get research grants from other organizations. External funding also pressures researchers to provide a deliverable that justifies the initial budget, and this pressure leads to more and better-quality publications. On the other hand, it was found that in India, public and private schools were the same in research productivity, with journal tier, total citations, impact factor, author h-index, number of papers and journal h-index as the main factors for research productivity. The same study found that faculty members who held doctoral degrees from foreign schools were more productive. --- Research Productivity --- Research Attitude A research attitude is a disposition toward conducting investigation.
Social support consists of assistance provided by the government, in the form of policies, and by institutions. Numerous investigations on attitudes toward research have revealed that these attitudes are frequently negative. According to the findings of Safi's research, people's attitudes enable them to discover solutions to new problems and transform reality based on their questioning, skills, knowledge, and abilities. The first and most fundamental contribution to the success of modern education will be the instructors' knowledge of, and attitude toward, research and investigation for innovative professional performance. A research attitude is a distinctive trait of educators who, more than other professionals, support and develop the teaching profession and place it on the map; as a result, research is an integral component of the teaching profession. In a study by Babalis et al., both men and women asserted that innovative-creative thinking was an essential trait to cultivate. On the other hand, there were substantial gender differences in research attitudes. Both men and women exhibited a positive attitude toward research, but men preferred to be assessed through individualized research works while women favored corporate research works. In addition, there are significant disparities between the sexes in terms of their research attitude and the type of work that makes them feel happy and which they choose. Women displayed a more "traditional" approach, as they preferred tasks with clear instructions, simple goals, and planned assignments to reduce the risk of error. Men, by contrast, demonstrated a preference for non-integrated research works, selecting works at a higher level where they can make personal decisions. During the course, the faculty's general understandings, and notably their specific cognitive, affective, and social understandings, of research work and of functioning as a researcher were expanded.
The conceptions were expanded from ethical principles to conceptions in which ethics served as a foundation for reasoning and acting in research and daily life. According to the findings of Jeronen, faculty and students in distance education, in particular, may require more specialized scaffolding than those in contact education when endeavoring to locate pertinent information in complex, open-ended situations. Questions and supportive feedback aid students in forming their ideas. Teachers should not provide correct answers; students should be permitted to make decisions or revise their beliefs based on their own research and observations. Students can externalize their thinking for peer critique, discussion, and revision via distance-learning platforms. --- Research Anxiety Research anxiety, defined as the feeling of dread or apprehension associated with conducting research, is an additional aspect of research that may influence students' persistence in their research experiences and in science in general. Spielberger defines anxiety as a negative emotion characterized by subjective feelings of tension, apprehension, and worry. State anxiety is defined as a reaction to a specific condition or stimulus, whereas trait anxiety is a generally persistent aspect of a person's personality. While trait anxiety is typically addressed through counseling and medical treatment, state anxiety can be treated by altering the trigger that causes the temporary state of anxiety. We define research anxiety as the state of anxiety that arises when a student engages in authentic research in a professor's lab. As with math anxiety, statistics anxiety, and library anxiety, research anxiety is a reaction to a specific situation, conducting research, much like the anxiety reported in response to active learning and interaction with classmates.
Each of these forms of state anxiety has been shown to have a negative impact on classroom performance. Nonetheless, research anxiety has not been studied in the sciences, especially in the context of undergraduate research; it has been examined only in research methods courses. --- RESEARCH METHOD --- Research Instrument The research instrument utilized in the study was a questionnaire with four parts: the first covers the demographic profile, the second research skills, and the third and fourth research attitudes and anxiety, respectively. The questionnaire was adopted in research by Prof. --- Data Analysis The researchers used descriptive statistics in narrating the results of the study; most items were presented using frequencies and percentages. The researchers also attempted to measure the relationship of research culture to research attitudes, anxiety and skills using simple structural equation modeling. Lastly, the researchers interviewed faculty members to validate the responses to the survey. --- Ethical Considerations The research among faculty members of NWU incorporated a variety of ethical considerations. The participants received comprehensive information on the research's objective and their unique contribution. After presenting and discussing the goal of the research inquiry, informed consent was sought. Similarly, the researchers invited people to engage in the study, and they could withdraw at any point throughout the examination. All respondents' queries were answered thoroughly and honestly. Additionally, all ethical commitments were stated and adhered to by the researchers throughout the research project. The researchers made every effort to guarantee that the respondents receive only the best, that they profit from the study's findings, that they contribute to the development of the teaching and learning process, and that they are never physically, psychologically, or emotionally harmed.
Additionally, informed consent included an agreement between the researcher and the participants in which the latter consented to their participation in the study. Sufficient information was presented and explained to participants at their level of comprehension; participants could withdraw at any time, ask questions, and refuse to answer questions that made them uncomfortable; and the study's potential risks and benefits were explained, along with a description of the participants' role, to enable participants to make informed decisions about their participation. Prior to the commencement of data collection, a signed consent form was collected. --- FINDINGS AND DISCUSSION As shown in Table 1, the HEI had already focused on research through a revised ranking and promotion system in which the research component had been given maximum points, the monthly webinar series of its University Center for Research for capability and capacity enhancement, and improved incentives for research that are enticing and motivating for faculty members. Many studies have investigated attitudes about research, revealing that views toward research are often negative; in this study, however, they are positive. The interviews with the faculty members revealed that there had been a strong push and motivation for research over the past years, so they had little choice but to accept research as part of their tasks in the academe. Interviews with the faculty members also revealed that they perceived research as a complicated subject because of the process of identifying research topics and titles and the many activities they had to undergo before completion. The mere conceptualization of research gaps, the first step in the process, is difficult, so many faculty members need help to begin a research activity. Academics also confront difficulties during the data collection procedure.
They claimed that difficulties in reaching various sampling groups, the indifference of the sample group, which included instructors, toward completing questionnaires and protocols, and their reluctance to participate in studies voluntarily negatively impacted their research procedures. In addition, they claim that their studies are negatively impacted by their inability to find assistance during the application process for questionnaires in the field of education. Overall, faculty members find research a difficult, stressful and complex activity that causes them to be Moderately Anxious. Research skills are the capacity to search for, identify, extract, organize, assess, and utilize or present information related to a certain issue, including writing and oral communication skills. Academic research is a subset of research that entails a careful and rigorous inquiry into a certain field of study. It entails extensive searching, study, and critical thinking, often responding to a particular research topic or idea, and frequently entails a significant amount of reading. Interviews with the faculty members revealed that many are fond of reading books; however, they wanted to enhance their organizing skills, writing skills and methodological knowledge. The Research, Community and Social Development office of the HEI organizes a monthly webinar on different research topics to ensure that research skills and capabilities are enhanced. The research culture of the University is discussed in Table 4, from which it can be deduced that the prevalent research culture in the HEI is constructive. Constructive cultures' standards for success, self-actualization, humanistic-encouraging, and affiliative behaviors encourage individuals to interact with people and approach tasks in ways that will help them meet their higher-order satisfaction needs. The existence of these norms characterizes constructive cultures.
In the University, faculty members help and encourage each other because they want to increase research production. They collaborate, and the different colleges even develop collaborative research to increase productivity. Administrators also see to it that the environment is encouraging by providing incentives and recognition. Table 5 reveals the research productivity of the HEI. It should be noted, however, that some faculty members are very active in research and some are not. Using the formula of multi-factor productivity, which is output/input, with outputs being research production, presentation and publication, and inputs being the years of observation and the number of full-time faculty members of the Institution, the research productivity computed is 1.36 research outputs per faculty per year. Since no industry standard for research productivity exists, it is not easy to establish whether the HEI is productive. However, with the PACuCOA standard of 2 research outputs per faculty over five years as a benchmark, the University cannot be considered unproductive. Still, considering that some faculty members are very active in research and some are not, the HEI being investigated has much to improve: the percentage of faculty members active in research is 28.40%, well below the International Standard Association benchmark of 60%. PLS-SEM was used to investigate the relationship of skills, anxiety, and attitude toward research culture. The PLS-SEM path model is evaluated in two stages. The first stage assesses the measurement model, evaluating the validity and reliability of the variables. The structural model is evaluated in the second stage by examining the hypothesized relationships between variables. Table 6 shows the structural equation model's model fit coefficients and quality metrics. According to the overall findings, the SEM estimations are within the permitted range.
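Returning to the productivity figure above, the multi-factor computation can be reproduced with a short sketch. Only the formula (outputs divided by years times faculty) and the PACuCOA benchmark come from the text; the output and faculty counts below are hypothetical, chosen solely to yield the reported ratio of 1.36.

```python
# An illustrative sketch of the multi-factor productivity computation:
# research outputs (productions, presentations, publications) divided by
# inputs (years observed x number of full-time faculty).

def research_productivity(outputs, years, faculty):
    """Multi-factor productivity: research outputs per faculty per year."""
    return outputs / (years * faculty)

# Hypothetical figures: 550 outputs over 5 years by 81 full-time faculty
rp = research_productivity(outputs=550, years=5, faculty=81)
print(round(rp, 2))  # 1.36, the ratio reported in the paper

# PACuCOA benchmark: 2 research outputs per faculty over five years = 0.4/year
benchmark = 2 / 5
print(rp >= benchmark)  # True: by this benchmark the HEI is not unproductive
```

The same ratio could be computed per college or per year to locate where the gap between active and inactive faculty members is widest.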
The p-values of the average path coefficient (APC) and average R-squared (ARS) must be significant, and the average block VIF (AVIF) and average full collinearity VIF (AFVIF) indices must be no greater than 3.3, for the model to be considered acceptable. Tenenhaus goodness of fit (GoF), a measure of the model's explanatory capacity, is classified as small if it exceeds 0.1, medium if it exceeds 0.25, and large if it exceeds 0.36. Tenenhaus, Vinzi, Chatelin, and Lauro calculated the GoF as the square root of the product of the mean communality index and the ARS. Table 6 shows that the fit and quality indicators for the model fall within permissible limits. The measurement model was evaluated using convergent and discriminant tests of reliability and validity. The evaluation of construct reliability permits a comparison of a reflective item, or a collection of reflective items, to the construct being evaluated. Composite reliability and Cronbach's alpha are frequently utilized to evaluate reliability; to indicate reliability, the composite reliability and Cronbach's alpha scores must be at least 0.70 (Nunnally and Bernstein). Table 7 indicates that the construct reliability criteria were met by the factors of skills, anxiety, attitudes, and culture. On the other hand, convergent validity evaluates the quality of the questions or question statements on a research instrument; it demonstrates that participants understand the items or question statements of the constructs as intended by their developers. For convergent validity, the p-values for each item must be less than or equal to 0.05, and the loadings must be greater than or equal to 0.5. The connection between an item and a construct is referred to as the item loading. All item loadings are statistically significant and exceed the 0.5 threshold, as shown in Table 7. In addition, the average variance extracted (AVE) quantifies the variance of each construct recovered from its constituents in relation to the measurement error variance.
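As a rough illustration of the reliability and convergent-validity thresholds described above (not the authors' actual computation), the standard formulas for composite reliability, AVE, and the Tenenhaus GoF can be sketched from hypothetical standardized item loadings for a single reflective construct; the loadings and the ARS value of 0.30 are assumptions chosen for the example.

```python
# Minimal sketch of PLS-SEM measurement-model checks from hypothetical loadings.
import math

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical loadings for a four-item construct
loadings = [0.72, 0.81, 0.77, 0.69]

cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)

print(cr >= 0.70)                       # True: reliability threshold met
print(all(l >= 0.5 for l in loadings))  # True: convergent validity, loadings >= 0.5
print(ave >= 0.5)                       # True: AVE threshold met

# Tenenhaus GoF = sqrt(mean communality x average R-squared);
# 0.30 is a hypothetical ARS, so the GoF here is purely illustrative.
gof = math.sqrt(ave * 0.30)
print(gof > 0.36)                       # True: "large" explanatory capacity
```

The Fornell-Larcker discriminant check discussed in the findings follows the same pattern: compare the square root of each construct's AVE against that construct's correlations with the others.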
Each AVE for the latent variables exceeds the specified validity threshold of 0.5, and accurate AVE coefficients were calculated following Fornell and Larcker (1981). Table 8 shows the correlations between variables, with the square roots of the AVE coefficients used to assess the instrument's discriminant validity. Discriminant validity concerns whether questionnaire respondents can distinguish the statements associated with each latent variable; it ensures that statements about one variable do not overlap with statements about other variables. Fornell and Larcker stated that the square root of each variable's AVE must be larger than that variable's correlations with the other variables. Based on the findings, the study's measures exhibit discriminant validity. Table 9 shows the model for the multiple relationship test. Skills and research culture have a significant association; however, there are no significant relationships between anxiety and culture or between attitude and culture. --- Indicators of Model Fit and Quality The parameter estimates for the relationship model are shown in Table 9. According to the data analysis, research skills influence culture. The positive path coefficient denotes how a researcher's abilities contribute to the research culture. Cohen's f2 = 0.337 indicates that the path from skills to culture has a medium-to-large effect size. This conclusion supports H1. Research anxiety did not significantly influence culture, and the negative path coefficient suggests that research anxiety does not contribute to research culture; as a result, H2 is not supported. Attitudes toward research also did not influence culture; as a result, H3 is unsupported. --- CONCLUSIONS In today's society, universities perform three functions: training and education, scientific research, and public service.
Within the context of the historical process, fundamental shifts may be identified, such as the shift from a training and education orientation in higher education to a scientific research orientation, and the shift back from a research orientation to a training and education orientation. This study aimed to find out several things. The first is the attitudes of faculty members toward research. The results found that faculty members in the Institution have a positive attitude toward research, since research is a requirement for ranking and promotion: it is an inevitable task in the academe, and faculty members must conduct research to be promoted and ranked higher. Research is a major component of the University and, according to the Commission on Higher Education, one of the pillars of Higher Education Institutions. This contrasts with the study of Safi, which found that faculty members and students have a negative attitude towards research; the faculty members of the institution under investigation were positive because, as mentioned, a research culture is being cultivated among them. Moreover, faculty members find research complex and difficult. Professors face challenges throughout the data-collecting procedure. They asserted that several challenges, such as accessing diverse sampling groups, the sample group's indifference to the questionnaires and procedures, and their unwillingness to participate in the studies, adversely influenced their research operations. Furthermore, they report that their inability to locate someone to assist them throughout the application procedure for surveys in the field of education has a detrimental impact on their studies. Mellon reported the same results. It was emphasized that many faculty members are anxious about doing research activities. The research culture in the University is constructive, characterized by an encouraging and supportive environment. The research culture encourages collaboration and partnership, which is very positive.
Faculty members are motivated to do research activities since the Administration provides a non-aggressive environment. However, as of 2023, the research productivity of the HEI could be higher, at 1.36 research outputs per faculty member per year. Since the Commission on Higher Education sets no industry standard, there is no formal benchmark for determining whether the University is productive. However, given the percentage of faculty active in research, it can be concluded that the HEI still needs to improve its productivity. As for the SEM results, skills largely affect research culture: skilled faculty members are more likely to develop a research culture and be more productive. --- Managerial Implications This study can be useful to administrators of different HEIs as they strategize and devise solutions to research anxiety and low productivity. The study found that research skills directly affect research culture; administrators can therefore develop appropriate webinars, training, and workshops to enhance skills and so improve research culture and productivity. --- Limitations and Future Research Directions This research has its limitations. Its scope is a single higher education institution, and the years investigated are limited to 2019-2023. For future research, it is highly recommended that other institutions be included in the study, and that the research be extended to the university's different colleges. Research on research problems and challenges can also be conceptualized to determine what interventions should be adopted.
This study examines the research attitudes, anxiety, skills, and culture of faculty in a private higher education institution. Universities and colleges seek to cultivate and develop students; their primary responsibilities are to provide quality education and contribute substantially to society. To attain these goals, a university should have a strong research culture. The study participants are full-time faculty members of the institution, and descriptive statistics and partial least squares are used to analyze the data. Faculty of the HEI under investigation showed a positive attitude towards research because the institution encourages its faculty members to do research activities. They also have moderate research skills and are anxious about research activities. The dominant research culture is constructive, characterized by a supportive research environment. According to the PLS-SEM results, research skills are significantly related to research culture; thus, the university should continuously upgrade its faculty's research skills to develop its research culture.
I. INTRODUCTION In 2018-2019, the authors undertook a study examining church bell ringing in the state of New South Wales (NSW), Australia, with research questions investigating the extent to which church bell ringing was still practiced, what factors may determine this differentiation, and what values and significances were attributed to the bell ringing sounds by the practitioners themselves. While the full data is reported elsewhere, we found that a high proportion of Anglican, Roman Catholic, and Orthodox churches retained bells on church premises, especially churches of a historic period. Of the churches that had bells, a large proportion actually rang them, and there was a correspondingly high level of perceived value placed on bell ringing, especially because it was considered a form of heritage. We then continued this research interest, using the COVID-19 pandemic as a case example of how a stochastic event can change one aspect of the sound world. In August 2020, the authors investigated the effects of COVID-19 on a subset of the initial cohort, with specific interest in how bell ringing sounds had changed over six months in NSW. This time frame included three periods of interest: prior to the first COVID-19 lockdown of April 2020, during this lockdown, and the subsequent post-lockdown period. We found that bells were largely silenced due to COVID-19, with ceased bell types including angelus bells, tolling bells, and pealing bells (angelus and tolling bells being a single individual bell, and pealing bells being a set of many bells rung by a group of bellringers). Whilst some churches "snapped back" to pre-lockdown patterns of bell ringing, some churches did not return to these levels, and interestingly, some churches increased their bell ringing over the lockdown period.
One year after the onset of COVID-19, noticing that Australia had multiple periods of lockdowns of varying extents and recognizing the potential issues of sound change in an urban setting, we chose to undertake a follow-up study to fully investigate patterns of change in church bell ringing on both a larger scale and a longer time frame. --- II. DATA SOURCES/METHODS While the previous study looked at both single bells and tower bells, we decided to follow up with a study in 2021 solely pertaining to tower bells, due to readily available data sources. The Australian and New Zealand Association of Bellringers (ANZAB) is a long-running community entity promoting the art of change ringing, which maintains a complete register of towers containing ringing bells in these two countries. With the onset of the COVID-19 pandemic, they promptly set up a section of their website listing which towers would be open and which would be closed due to differing government-imposed restrictions. To confirm data quality, individual contact with tower captains or other relevant personnel was made in cases of suspected errors on the website, supplemented by other public data, such as web pages and social media accounts. Data were collected bi-monthly on average, with follow-up and verification via direct communication at the time of manuscript development. This information produced an accurate record of when each individual tower reopened over the period March 2020 to April 2021. We limited our study to the two most populous states in Australia (New South Wales and Victoria), with NSW having 16 towers in the capital city of Sydney and another 16 in regional towns, and Victoria having five towers in the capital city of Melbourne and another five towers in regional areas of this state. --- III.
RESULTS Over the entire annual period from late March 2020 to April 2021, we found that tower bell ringing in both NSW and Victorian churches was highly correlated with government-imposed restrictions stipulating the actions allowable by the community. Prior to any COVID impacts or restrictions, all of the bell towers in these two states listed by ANZAB were open and ringing as normal. The results for NSW and Victoria showed different patterns, which can be attributed to the different restrictions imposed in each state. In NSW, the first case of community transmission of COVID was on March 2, 2020, with the state subsequently having a total lockdown period of about six weeks. During that time, no pealing bells were rung, and ringing was limited to single angelus/tolling bells at the decree of the diocese or parish. After the initial lockdown, social limitations changed: ten people were permitted in a religious setting or space in late May, and subsequently 50 people in June. In Sydney over that period, there was an increase in church bell tower ringing through June and July. From July through February, the capacity limit was raised from 50 to 100 people, and there was another correspondingly large increase in church bell tower ringing over that period, to around 60% of pre-COVID levels. Restrictions were subsequently lifted further to a capacity limit of one person per 4 m², and finally to one person per 2 m² in April 2021. There was a much larger increase in church bell tower ringing over this time in the churches of Sydney, with levels of bell ringing approaching 100% and reaching the full capacity of pre-COVID levels by the week of April 14, 2021.
Regional churches showed a similar pattern of decline and return to bell ringing, although the lag times were greater than in metropolitan areas: the return to 30% capacity did not eventuate until August, and full capacity did not return until April 21, 2021. Churches in Victoria showed a different decline-and-return pattern from those in NSW, reflecting restrictions pertinent to that state. It must be noted that the Victorian dataset is more limited: Victoria was founded some 70 years after NSW, which is clearly reflected in the number of bell towers in the state (only five towers in Melbourne and five in regional areas, compared to 16 each in NSW). The Victorian shutdown began largely in the same way as in NSW, with the first community transmission on March 7, followed by an initial gradual lockdown, and then a snap nationwide lockdown for six weeks. Similar to NSW, there was a gradual return of capacity in churches to ten and then 20 people, but unfortunately, community transmission reoccurred in Victoria and the state went again into a total lockdown until late October, with restrictions gradually lifting after that. During this entire period from April to mid-November 2020, there was again no tower bell ringing in this state. After November, a gradual easing of restrictions occurred, permitting 150 people in a congregation, followed by a one-person-per-2-m² capacity rule in January 2021. There was an immediate positive response in bell ringing in Melbourne, with a return to 60% of pre-COVID levels. In early February 2021, a case of community transmission caused a one-week snap lockdown across the state; unlike the previous gradual reductions of restrictions, this lockdown was sharp enough to allow a rapid bounce back to previous levels, both in capacity limits and in the prevalence of tower bell ringing. Full return of bell ringing to pre-COVID levels occurred by early April 2021 in Melbourne.
Churches in regional Victoria showed a similar lag pattern, with only 20% returning to ringing activities in the Christmas 2020 period, and only 80% by the Easter period of 2021. --- IV. DISCUSSION These delays in reopening towers for bell ringing were not expected at the commencement of the study. The driving force behind the delays across both NSW and Victoria was the legally mandated "social distancing" between people in public spaces. Non-residential internal spaces were restricted to a maximum occupation density of one person per 4 m², with the further stipulation that persons other than members of the same household had to socially distance at 1.5 m apart. In a standard ringing room containing eight ropes, the spacing between the bellringers is fairly close in normal operation, with bell ropes placed around 3 ft apart. Such close distance between bellringers, however, is not permitted during tight restrictions. In order to allow eight people to stand 1.5 m apart, the bellringers would essentially have their backs against the walls. While that may be possible, the setup and length of the ropes and the rhythmic motion required for the bell ringing activity do not make this a viable option in most cases. Whilst some towers, such as St Mary's Cathedral, Sydney, were able to resume soon after the first lockdown, this was primarily due to the large amount of space in the ringing room, alongside strict protocols for distancing and hand hygiene. Other church towers had smaller spaces to contend with. As such, any visible lag in the dataset is largely a result of the various bell ringing communities working out options for making the tower work, given the space and safety constraints they faced during this period.
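A back-of-envelope calculation illustrates this spatial constraint. The rope count, person-to-person spacing, and density figures are those quoted above; the assumption that ringers stand roughly in a circle is ours, made for illustration only.

```python
import math

N_RINGERS = 8     # eight ropes in a standard ringing room
SPACING_M = 1.5   # mandated distance between persons (m)
DENSITY_M2 = 4.0  # one person per 4 m^2 occupancy rule

# For N equally spaced points on a circle, adjacent points a chord
# distance d apart satisfy d = 2 * R * sin(pi / N).
radius = SPACING_M / (2 * math.sin(math.pi / N_RINGERS))
circle_area = math.pi * radius ** 2

# Floor area required by the occupancy-density rule alone:
density_area = N_RINGERS * DENSITY_M2

# radius is about 1.96 m, so the ringers' circle alone is nearly 4 m
# across before any clearance behind them -- consistent with "backs
# against the walls" -- while the density rule demands 32 m^2 of floor.
```

Either constraint alone rules out most ordinary ringing chambers, which is why only unusually large rooms, such as St Mary's Cathedral's, could resume early.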
During the return to regular bell ringing, we discovered some interesting adaptive responses in some churches, both for bell training and in public bell ringing. In order to continue some form of training for the bellringing team, some tower groups advocated the use of "virtual ringing rooms" during the height of the pandemic, using web-based applications in which the ringing action is undertaken by pressing a key to ring an individual bell. It was noted that while this kept the mathematical functions working cognitively, it offered none of the actual physicality of ringing. One church, St James', Sydney, allowed a small number of the regular band of ringers to return to the tower for weekly practice using a simulator, as social constraints could accommodate the two people needed in the bell chamber for this activity: one to ring and another to operate the simulator. Other churches applied adaptive measures when actually ringing tower bells, in an attempt to return to pre-COVID activity levels while still complying with restrictions. For example, at St Paul's Anglican Church, Burwood, eight bells (the tower's full complement) were rung in the pre-COVID period. During the height of the lockdown, no bells were rung except for one single tenor bell, which tolled on special occasions and for outdoor church services on March 22 and 29, 2020, and again from May 17. Once the restrictions lifted somewhat, the church returned to half capacity from June 7 for much of the remainder of 2020, then increased to six bells during the Christmas period, decreased back to four under higher restrictions, increased to six on February 14, 2021, and finally returned to eight bells on April 12, 2021. Despite having the capacity for eight bells in the tower, the church was not able to ring at this capacity for most of the year due to the imposed restrictions. Similar measures were undertaken by ringers at St.
Jude's, Randwick, Sydney. After months of no ringing, activity resumed, restricted to four bells with appropriate social distancing. By early 2021, it was common to have six, seven, or all eight bells ringing at Sunday morning services at this church. This presents an interesting case for discussion: the ringing of tower bells during the COVID-19 pandemic is therefore not so much a question of the presence or absence of bells, but of the richness of the sound that can be rung, given the number of bells available under space restrictions. The reasoning is as follows: ringing a certain number of bells allows a specific repertoire to be performed; eight bells allow the performance of Basingstoke Surprise Major, and six bells allow Grandsire Doubles. Limiting the number of bells restricts the repertoire available, and instead of scalic patterns, the church has to offer alternatives, such as triadic patterns or similar. These restrictions therefore create not only limitations on the number of churches ringing bells, but also a change in the richness of the soundscape of the surrounding area. Further adaptive measures allowed some churches to overcome this, such as the ringing of non-adjacent bells: there were cases of churches ringing Basingstoke with four bellringers, each ringer taking two bells; and other towers utilized family groups for ringing purposes, as the restrictions in this situation were not as strict. We discovered similar instances of adaptation. For example, Plain Bob was documented as being performed by only three bellringers using two bells each, and Oxford Treble Bob Minor with only three ringers rather than the usual six. Furthermore, our research uncovered other aspects of change pertaining to the soundscapes created by church bell ringing, including differences in the duration and intensity of emanated sound.
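Change ringing treats each "change" as a permutation of the bells, which is why repertoire is tied so directly to the number of bells available. A minimal sketch of plain hunt (the simplest change-ringing pattern, deliberately simpler than the named methods discussed above) shows how the number of distinct rows, and hence the available material, scales with bell count:

```python
def plain_hunt(n):
    """Generate the rows of plain hunt on n bells: alternately swap all
    adjacent pairs starting at position 0, then starting at position 1,
    until the row returns to rounds (1, 2, ..., n)."""
    rounds = list(range(1, n + 1))
    row = rounds[:]
    rows = [row[:]]
    start = 0
    while True:
        for i in range(start, n - 1, 2):
            row[i], row[i + 1] = row[i + 1], row[i]
        rows.append(row[:])
        if row == rounds:
            return rows
        start = 1 - start

# Plain hunt on n bells returns to rounds after 2n changes, so fewer
# ringable bells means fewer distinct rows before the pattern repeats.
rows4 = plain_hunt(4)  # 8 changes on four bells
```

On four bells the pattern exhausts itself after 8 changes; on six bells, after 12. Full methods such as those named above impose further structural requirements on top of this, so the repertoire shrinks even faster as bells are withdrawn.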
Following government health advice recommending limiting the time spent within an indoor space, bell ringing duration was reduced from 1.5 h to 1 h at weekly practice sessions at All Saints Church, Singleton, and limited to 15 min, solely for services, at Hoskins Uniting Church, Lithgow, under a specially devised COVID-19 Safety Plan. Of particular interest were the adaptive measures undertaken at St James' Old Cathedral, Melbourne, whereby regular ringing on Sundays was enacted using an Ellacombe apparatus, a system that allows the clappers to be pulled against the bells by just one individual, but resulting in a volume significantly lower than usual. Whilst outside the scope of this paper, further research utilizing acoustic measurements could investigate the sound pressure levels and spectral variations of these altered sound environments. At the time of writing, it is important to note that parts of Australia have again returned to a total shutdown of non-essential activities due to increased cases of community transmission of the COVID-19 virus. Greater Sydney has been shut down for at least eleven weeks and regional NSW for three weeks, Melbourne for five weeks, and regional Victoria for two weeks. It is expected that an enforced lockdown will continue for some time into the future, and with it, a continued hiatus of tower bell ringing across both metropolitan and regional areas of NSW and Victoria. However, the cessation of church bell ringing is in no way limited to the states of Australia. As of April 2021, one year from the onset of COVID-19, it was reported that only five tower bell peals had been rung throughout the world, all of them in NSW, and of the 99 tower bell quarter peals rung worldwide, 68 had been rung in towers associated with the Australian and New Zealand Association of Bellringers.
Indeed, there was a total worldwide hiatus on any open peal bell ringing from March 20, 2020 until October 5, 2020. The issue of a changed soundscape with reference to church bell ringing and restriction-associated silence is therefore a global one, affecting any country that both rings church bells and has been impacted by the pandemic. --- V. CONCLUSION Clearly, COVID-19 had a dramatic impact on the practice of bellringers in both NSW and Victoria and the way in which they could practice their art. It essentially silenced the towers for the entire initial lockdown period and subsequent periods. Even when towers reopened following capacity-restriction changes, restrictions limited the number of people allowed to engage and participate in bell ringing, so regular practice had effectively ceased. Whilst a gradual return allowed practice to recommence, we must keep in mind that, as the art form of bell ringing requires excellent timing, missing almost one year's worth of physical practice could negatively impact any art community. We show in this example that COVID-19 had a potentially massive impact on community-wide soundscapes: first in the silencing of bell sounds, and, in cases where some sounds were permitted, such as the ringing of an individual tolling bell, in the loss of the rich sound created by tower bells. While we know these sounds did gradually return, at present we do not know what impact this change in soundscape had on community well-being. Whilst we can report and discuss statistics on whether churches did or did not ring, social science studies have not yet investigated the effect of this lack of sound on people. We also need to recognize that we currently have no pre-COVID measurements on community well-being scales either.
This presents an opportunity for future research, and these are activities we should investigate in the current climate, as it would be prudent to assume that the COVID-19 pandemic will not be the last pandemic to affect our society.
The COVID-19 pandemic has demonstrated how a stochastic disruptive event can dramatically alter community soundscapes. Whilst religious bells have symbolism in many worldwide faiths, the sound emanating from church bells can be considered public domain and is therefore not exclusive to the church. Pandemic-related interruption of these sounds impacts not only the church involved, but also the surrounding soundscape and any members of the community who ascribe value to these sounds. This paper examines the soundscape of Christian churches in the states of New South Wales and Victoria, giving an Australian perspective one year after the declaration of the COVID-19 pandemic in March 2020. It provides an update of the situation in Australia, building on our previous work from August of that year. In doing so, it explores the activity of church tower bell ringing, and how this "non-essential" activity has been affected both during and subsequent to the heavy community restrictions applied in Australia. The paper also explores the lengths bellringers have gone to in order to be permitted to conduct such activities, such as the use of adaptive measures due to "social distancing", and considers the implications of this enforced silence for similar soundscapes elsewhere in the world.
This is an insightful book, and the reader will learn much from it. The focus is on the participation of social enterprises in the formal process of procurement, rather than in the more informal process of having its goods or services purchased by government agencies or authorities. That is because social enterprises have a better track record of participation in the latter than in the former. It appears that the size of the social enterprises and their self-rated capacity for preparing proposals matter. However, making connections and building relationships with purchasers can also be helpful. Knowing how to demonstrate an enterprise's social value and impact is also an asset. But in all cases, many obstacles will be faced, and the cost of participating in procurement can be considerable. There is much to be said about funding and increasing capacity building of social enterprises in harnessing the power of procurement and purchasing procedures. The potential for social enterprises is huge if governments truly consider social and environmental value or benefits when making purchasing decisions. But price and quality still dominate the decision process, social enterprises are hesitant and ill-equipped to prepare and submit bids, and there is a scale-up challenge in being able to fulfill a large successful order. However, breakthroughs are possible, sometimes by having several social enterprises bidding together as a supplier to meet the scale of the order. A lot could be achieved with closer links being established between social enterprises and organizational purchasers to demystify the process and educate the social economy bidders on how to respond to the tender. Social enterprise leaders and those responsible for purchasing and procurement decisions in the public or private sectors will gain much knowledge from reading this book about how to close the gaps that separate them still in establishing closer and tighter supply-chain relationships. 
Researchers and students interested in the social economy will also, by reading this book, deepen their understanding of how difficult it is to make the market economy work for organizations preoccupied by the conditions of citizens who are marginalized, disabled, or living in precarity. --- ABOUT THE AUTHOR / L'AUTEUR Luc Thériault is Professor of Sociology and Chair of the Economics Department at the University of New Brunswick . He specializes in social economy organizations, social policies, and housing and immigration issues. Email: luct@unb.ca
This book is the result of a three-year Canada-wide research project investigating the state of social procurement and social purchasing in 19 work integration social enterprises (WISEs) providing training and employment to marginalized individuals. Part 1 of the book provides an overview of the literature, federal government policies for procurement and purchase with social value, and the results of a unique survey showing surprisingly low participation by Canadian social enterprises in pursuing formal social procurement. Part 2 is based on the study of four social enterprises that have secured large contracts by investing in what the authors call relationship building. Part 3 details five cases where the role of a parent organization's support was key for social enterprises to bid on contracts. Part 4 focuses on the dilemma of five social enterprises regarding their decision to market or not to market the social value dimension of their work. Finally, Part 5 explores the challenges of five social enterprises in managing the concept of multiple bottom lines while pursuing social procurement opportunities. The conclusion discusses some future directions for the study of WISEs' participation in procurement and purchasing procedures.
Introduction This paper aims to present a revised theoretical framework for understanding oral health gradients and inequalities in society, and to critically review the traditional oral health approaches supporting oral health promotion policies and practices. The first question that should be addressed is why so little evidence is available on implementing strategies to reduce global inequalities in oral health, 1,2 considering the body of available knowledge and technologies for promoting health and preventing a large number of oral diseases. A key policy rationale for reducing social inequalities is the universal finding that health indicators are better in more equal societies. It is well known that oral diseases are more common in less equal societies and among socially disadvantaged groups. [3][4][5] In recent years, new insights have been gained into the contemporary patterns of oral health inequalities in high- and middle-income countries. Oral diseases, as is the case with other health outcomes, are socially patterned across the entire social hierarchy, a relationship known as the social gradient. 6 Even in high-income countries where absolute poverty is very rare, there is a fine and graduated pattern of inequality in health across the full socioeconomic spectrum. 7,8 The universal and relative stability of the social gradient therefore suggests that there is a greater generalized susceptibility to a whole range of diseases as one descends the social gradient. 9 A social gradient in oral health has also been evidenced in a great variety of populations in several different countries, for different outcomes and at different points in the life course of different members of society. (Declaration of Interests: The author certifies that he has no commercial or associative interest that represents a conflict of interest in connection with the manuscript. Braz Oral Res., 2012;26:86-93)
6,10,11 The enduring nature and universality of the social gradient in health and oral health status indicate the influence of broad underlying factors rather than specific disease risks. The reasons for the failure to put what we already know into practice effectively should head the agenda of dental organizations and government agencies. The greatest challenge for the future is to turn knowledge and experience into disease prevention and health promotion, leading to effective, scheduled action. 12,13 This critical review, set down in a descriptive-discursive style, presents oral health disparities mainly determined by social factors, as evaluated by the national and international literature on health inequalities. It examines the formulation of a scientific and political agenda on oral health promotion and disease prevention, with some final recommendations. --- Methodology A critical and integrative literature review was conducted from the theoretical point of view of the main social determinants of oral health and the benefits of oral health promotion. This review was based on the criteria of quality and readability; therefore, whenever possible, we employed the RATS checklist. We performed a literature search using electronic databases and Boolean operators, combining the relevant search terms. --- Results and Discussion Most dental strategies to prevent oral diseases are directed at changing behaviors. Unsurprisingly, such strategies have had limited positive impacts on oral health. 14 Policy makers should therefore recognize that people live in behavior-shaping social, political, and economic systems, and that people should have access to the resources they need to maintain good health. 5,15 It is very important to have a better understanding of the causes of people's behaviors, that is, "the causes of the causes." 16,17 Why do people behave as they do?
There is an interplay among intrapersonal, behavioral, and environmental determinants. Behaviors are linked to the conditions in which people are born, grow, live, work, and age. 18 Although individuals make choices about how to behave, these choices are made within economic, historical, family, cultural, and political contexts. Therefore, individual behaviors, commonly referred to as proximal factors, are largely influenced by social environments, and some structures make it easier to promote healthier lifestyles than others. The shortcomings of current approaches to globally reducing inequalities and improving oral health point to the important role of social determinants, and link these determinants to the need for research and policies to implement strategies that reduce oral health inequalities. Tackling inequalities in health requires strategies tailored to the determinants and needs of each group along the social gradient. For this reason, an initiative was established in 2009 addressing the issue of social determinants and inequalities as factors influencing oral diseases, and proposing strategic interventions to deal with these problems, 19 including a research agenda intended to lead to key improvements in global oral health, with particular reference to inequalities within and between countries. Past strategies to tackle inequality have focused largely on either improving the health of the most deprived groups or narrowing the gap between the best- and worst-off in society. Universal strategies to address health disadvantages across the social gradient have been few. In many instances, policy has focused on downstream interventions, such as smoking cessation services or general practitioner referrals for physical activity, rather than tackling upstream causes such as poor living conditions and unemployment.
This approach contrasts with a wide body of epidemiological and sociological work suggesting that health inequalities are likely to persist among socioeconomic groups even if lifestyle factors are equalized. Indeed, Phelan et al. 20 suggest that the only way to achieve lasting reductions in inequality is to address society's imbalances with regard to power, income, social support, and knowledge. The most effective strategy to improve health across the population, and to reduce health inequalities, is to implement upstream policy interventions that reach across sectors and create an environment that promotes healthy living. However, these need to be supported by socially targeted downstream interventions to mitigate any adverse distributional consequences. Some have therefore proposed a combination of both upstream and downstream solutions. 2,21 Graham 22 identified a spectrum of approaches ranging from (1) remedying health disadvantages, to (2) narrowing health gaps, and to (3) reducing health gradients. The first goal commits governments to maintaining what is already a long-running trend in high-income countries, namely, securing ongoing improvement in the health of disadvantaged groups. The second goal, to narrow health gaps, is more challenging, since it requires a reversal of the trend towards widening health inequalities. To achieve this goal, the rate of health gain among the poorest groups needs to outstrip that achieved by the comparator group. However, while more ambitious, the goal of narrowing health gaps, like that of remedying health disadvantages, casts health inequalities as a condition to which only those in disadvantaged circumstances are exposed. Strategies can therefore focus solely on disadvantaged groups, seeking to improve their health in absolute and relative terms.
In contrast, the goal of reducing health gradients makes it clear that health is unequally distributed, not only between the poorest groups and the better-off majority but also across all socioeconomic groups. Concerns about determinants of health led to the setting up of the World Health Organization Commission on Social Determinants of Health (WHO/CSDH). 18 The CSDH analyzes the causes of ill health and the "causes of the causes." The CSDH provides very convincing evidence that the structural factors and conditions of daily life are the major determinants of health and inequalities in health. Health inequalities are produced and reproduced by the unjust distribution, access and effective use of income, goods and services. This directly affects people's chances to enjoy life. WHO published a global review of oral health, 13 which emphasized that global problems still persist, despite great improvements in the oral health of populations in several countries. Oral diseases constitute major public health problems worldwide, and poor oral health has a profound effect on general health and quality of life. Dental caries is one of the most common chronic diseases worldwide (Braz Oral Res., 2012;26:86-93), in that 90% of people have had dental problems or toothache caused by caries, and in low-to-middle income countries most caries remain untreated. In most developing countries, the levels of dental caries were low until recent years, but prevalence rates of dental caries and dental caries experience are now tending to increase, and, among rich countries, income inequality is a stronger determinant of childhood dental caries. 13,[23][24][25][26] This is largely due to the increasing consumption of sugars and inadequate exposure to fluorides. In contrast, a decline in caries has been observed in most industrialized countries over the past decades.
13 This pattern was the result of a number of public health measures, including effective use of fluorides, together with changing living conditions, lifestyles and improved self-care practices. However, it must be emphasized that dental caries has not been eradicated as a children's disease, but only controlled to a certain degree. Worldwide, the prevalence of dental caries among adults is high, in that the disease affects nearly 100% of the population in the majority of countries. 13 In several industrialized countries, older people have often had their teeth extracted early in life because of pain or discomfort, leading to reduced quality of life. In developing countries, oral health services are mostly offered at the regional or central hospitals of urban centers, and little, if any, importance is given to preventive or restorative dental care. Public health problems related to tooth loss and impaired oral function are therefore expected to increase in many developing countries. One notable contemporary exception to this outlook is the important investment that Brazil is making in the organization of primary care and its family health strategy, supported by oral health teams throughout the country. 27 Tooth loss in adult life may also be attributable to poor periodontal health. Severe periodontitis, which may result in tooth loss, is found in 5-20% of most adult populations worldwide. Furthermore, most children and adolescents worldwide have signs of gingivitis. 13 Aggressive periodontitis, a severe periodontal condition affecting individuals during puberty that may lead to premature tooth loss, affects about 2% of youths. The experience of pain, and problems with eating, chewing, smiling and communication due to missing, discolored or damaged teeth, have a major impact on people's daily lives and well-being.
Furthermore, oral diseases restrict activities at school, at work and at home, causing millions of lost school and work hours each year throughout the world. Oral cancer is the eighth most common type of cancer worldwide, and the most common among men in Southeast Asia. 13 Furthermore, 40-50% of people who are HIV positive have oral fungal, bacterial, or viral infections. Access to oral care is a global problem, particularly in low-to-middle income countries. The workforce available to treat the most common oral health problems, dentists, is in short supply in these nations. The diversity in oral disease patterns and development trends across countries and regions reflects distinct risk profiles and the establishment of preventive oral health care programs. The important role of sociobehavioral and environmental factors in oral health inequalities has been demonstrated in a large number of epidemiological surveys. 13 In addition to poor living conditions, the major risk factors relate to unhealthy lifestyles and limited availability and accessibility of oral health services. Several oral diseases are linked to noncommunicable chronic diseases, primarily because of common risk factors. 28 Moreover, general diseases often have oral manifestations. Worldwide strengthening of public health programs through the implementation of effective measures for the prevention of oral disease and promotion of oral health is urgently needed. The challenges of improving oral health are particularly great in developing countries. 29 Traditional treatment of oral disease is extremely costly; it is the fourth most expensive disease to treat in most industrialized countries. In industrialized countries, the burden of oral disease has been tackled by establishing advanced oral health systems that primarily offer curative services to patients.
Most systems are based on the demand for care, and oral health care is provided by private dental practitioners to patients, with or without third-party payment schemes. 30 Traditional curative dental care is a significant economic burden for many industrialized countries, where 5-10% of public health expenditure relates to oral health. 13,31,32 Over the past years, savings in dental expenditures have been noted in industrialized countries that have invested in preventive oral care and where positive trends have been observed in terms of reduction in the prevalence of oral disease. In most developing countries, investment in oral health care is low. In these countries, funds are primarily allocated to emergency oral care and pain relief; if treatment were available, the costs of dental caries in children alone would exceed the total health care budget for children. 13 The current global and regional patterns of oral disease largely reflect distinct risk profiles across countries, related to living conditions, lifestyles and the implementation of health promotion intersectoral actions and preventive oral health systems. Thus, global strengthening of public health programs through the implementation of effective oral disease prevention measures and health promotion is urgently needed, and common risk factor approaches should be used to integrate oral health into national health programs. The common risk factor approach (CRFA) has been widely accepted and endorsed globally by dental policy makers, dental researchers and oral health promoters. 28,30,33 The concept of the CRFA was originally based on health policy recommendations from the WHO in the 1980s, which encouraged an integrated approach to chronic disease prevention. In 2000, the general concept was further developed and applied to oral health with emphasis on directing action at the shared risk factors for chronic diseases, including a range of oral conditions.
34 Since then, the CRFA has formed the theoretical basis for the closer integration of oral and general health strategies. Considerable progress has undoubtedly been made in combating the isolation and compartmentalization of oral health. However, recent research and policy developments on reducing health inequalities suggest that interventions should not be limited to intermediary factors such as health behaviors, but must include policies to tackle structural determinants. 8 Therefore, it is now time to critically update the CRFA in line with the social determinants agenda. Oral health means more than just good teeth; it is integral to general health and essential for wellbeing. The strategy is that oral disease prevention and the promotion of oral health need to be integrated with chronic disease prevention and general health promotion, insofar as the risks to health are linked. 14,35,36 Strategies to improve health have oscillated between narrowly defined, technology-based medical/dental high-risk approaches and public health interventions focused on tackling behavior change through health education, or on understanding health as a social phenomenon, thus requiring more complex forms of intersectoral policy action, sometimes linked to a broader social justice agenda. 18,[37][38][39][40] Oral health is a neglected area of global health and has traditionally ranked low among the priorities of national policy makers. The reasons for this situation are complex and varied. In many countries oral health is not included in national health surveys. Moreover, if data are actually collected, they are usually isolated from the context of general health. Furthermore, in some cultures, oral health is neglected because teeth are seen as expendable. Dentists have also taken little interest in advocacy to promote good oral health, preferring to treat rather than prevent oral diseases.
41,42 In addition, because poor oral health affects morbidity more than mortality, governments have viewed oral conditions as less important than other, more life-threatening diseases. Nonetheless, globally speaking, the burden of major oral diseases and conditions is high. Dentists also cluster in cities where populations that can afford treatment usually live, leaving rural areas deprived of even the most basic emergency dental care. However, training more dentists and building dental clinics, the western curative model of care, is costly and unrealistic in most low-income and middle-income countries. Fortunately, critical changes are beginning to be observed internationally. In Brazil, some changes include teaching the core skills of evidence-based dental practice and offering training with a more humanistic preparation in the undergraduate curriculum. [43][44][45] Promotion of oral health and prevention of oral disease are key and largely possible, and should therefore be a routine part of the work of other health professionals. What can be done? There are three levels of public health interventions that may be adopted to improve the health of the population: 28,46-48 1. Downstream efforts comprise treatments, rehabilitation, counseling and patient education for those already experiencing some disease and disability. This is the level that, while consuming most of the available funds, encompasses a very small segment of the general population; 2. Mid-stream prevention efforts to improve a population's health should involve two main areas: a. secondary prevention efforts that endeavor to modify the risk levels of those individuals and groups who are very likely to experience some untoward outcome; b. primary prevention actions to encourage people not to engage in risky health-compromising behaviors that may increase their chances of experiencing a negative health event; 3.
Even further upstream are healthy public policy interventions that include governmental, institutional, and organizational actions directed at entire populations. These require adequate support: putting into place tax and fiscal structures, stipulating legal constraints, reducing barriers to personal growth, making healthy choices easier and more harmful choices more difficult, and enabling reimbursement mechanisms for those involved in health promotion and primary prevention. The daily use of fluoride is the most cost-effective, evidence-based approach to reducing dental decay. Water or salt fluoridation is a possible population-wide approach but its implementation depends on the development and infrastructure of the country, as well as political will and community acceptance. Promoting the daily use of effective fluoride toothpaste is a more realistic strategy, but its costliness inhibits its widespread use in many low-income and middle-income countries. Governments can eliminate taxes on fluoride toothpaste, which in some countries represent up to 50% of the product's price, and they can also work with manufacturers to produce lower cost toothpaste. [49][50][51] Policies that address the risk factors for oral diseases, such as intake of sugars and tobacco use, can also be implemented, especially because these moves will help reduce chronic diseases. Oral diseases and chronic diseases, such as cardiovascular diseases, cancer, chronic respiratory diseases, and diabetes, share many common risk factors. In 2007, a World Health Assembly resolution called for oral health to be integrated into chronic disease prevention programs. 52 Promoting good oral health could also help countries achieve child-related development goals. Caries can negatively affect a child's ability to eat, sleep, and do school work.
Preliminary studies have suggested that dental caries and related pain and sepsis might contribute to undernutrition and low weight and height in children in developing countries. In developed countries, studies show that when dental caries are treated, children start to put on weight and thrive. Oral pain is also one of the most common reasons for school absenteeism. Preventing oral disease is important and achievable. Evidence-based, simple, and cost-effective preventive approaches exist, but they need to be rigorously promoted and implemented. 52,53 Professionally speaking, health workers, including physicians, nurses, pediatricians, and pharmacists, can all deliver prevention messages about the use of fluoride and the risk factors for oral disease. Politically speaking, commitment is needed to integrate oral disease prevention into programs to prevent chronic diseases and into public-health systems. 54,55 Good oral health should be everybody's business. --- Conclusions This paper outlined why it is essential to put the Common Risk Factor Approach into a broader social-determinant-related and environmental perspective, to tackle oral health inequalities. This broader perspective requires a theoretical CRFA-related expansion, insofar as there is a need to refocus health promotion approaches in order to change behaviors, by incorporating concurrent interventions at multiple levels, including individual, family, community, and society. Future improvements in oral health and a reduction in inequalities in oral health are dependent on the implementation of public health strategies focusing on the underlying determinants of oral diseases.
This article offers a critical review of the problem of inequalities in oral health and discusses strategies for disease prevention and oral health promotion. It shows that oral health is not merely a result of individual biological, psychological, and behavioral factors; rather, it is the sum of collective social conditions created when people interact with the social environment. Oral health status is directly related to socioeconomic position across the socioeconomic gradient in almost all populations. The main priority for dental interventions is that they be integrated collaboratively and enable research and policies that address the main proximal determinants of oral diseases, i.e., sugars, smoking, hygiene, and risk behaviors. Adopting a mixed approach, these interventions should also reduce inequality, focusing on the socioeconomic determinants, to change the slope of the social gradient. The cornerstone of this approach is the Integrated Common Risk Factor Approach (CRFA).
Introduction --- Background and objectives In 2015 there were estimated to be 46.8 million people with dementia worldwide, a number projected to reach 131.5 million by 2050. Over the next few decades the increase will be greater in low- and middle-income countries (LMIC) than in high-income countries, due to faster ageing of the population and growing diagnostic expertise. Dementia has become a global health priority. Timely diagnosis is important for accessing care, because it explains distressing symptoms and enables future planning. Barriers to diagnosis include low awareness and stigma leading to concealment of symptoms. In Pakistan, these barriers are exacerbated by the low literacy level and scarcity of services. Furthermore, the expectation in Pakistani culture to provide family care is high and acts as a barrier to help-seeking. Little is known about the prevalence and experience of dementia in Pakistan. The 10/66 research project carried out population-based research into dementia in LMIC, but Pakistan was not among its research sites. The experience of older people in Pakistan merits special attention because of the country's low ranking in the Global AgeWatch Index. This paper is part of a larger study aiming to explore the experiences of dementia in Pakistan from four perspectives: people with dementia; their family caregivers; the general public; and key informants from the policy and practice arenas. This paper presents the findings from interviews with people living with dementia. --- Conceptual framework The way that people conceptualise a disease affects decisions to seek outside help, how people are viewed by society, and coping strategies. People's understanding of disease is shaped by norms and attitudes as well as experiences. Dementia can be perceived in a variety of ways, from the traditional biomedical model to the psycho-social and social-gerontological models.
For this study, dementia is conceptualised as an incurable, progressive organic brain illness that is not a normal part of ageing, but one where the person with the illness is able to live a fulfilling life if enabled to do so by society. Kleinman's explanatory model shows how lay models of a particular health condition must be engaged with in order to achieve a satisfactory outcome. For the lay person, adopting a particular perspective on dementia has particular consequences. Perceiving dementia to be a normal part of ageing means that the individual is conceptualised as healthy and no special help is required, which could be a coping strategy. On the other hand, perceiving dementia to be a mental illness is a stimulus for accessing health services, but can also lead to fear of stigma from others. Blaming the individual for their dementia implies that it is the individual's responsibility to improve their own situation; a form of coping for family members. Finally, a social norm that family care ought to be provided can lead to strong pressure to continue caring without outside help, and therefore to carer burden. All of these scenarios are likely to delay accessing health and support services. It has been argued that some researchers over-use the term 'culture' to explain differences in illness experiences between ethnic groups, and there has been a call to draw out the aspects that are religious in nature. A frequent finding from the literature on ethnicity and caregiving is of religion being used as a coping mechanism, or as a justification for caregiving. However, those findings are from a range of religious and country contexts, and are not unique to any religion. One focus of this paper is on the aspects of dementia experience specific to Islam, rather than those that are shared regardless of religion, which have been well reported elsewhere. This focus will more specifically identify the ways dementia is experienced by Muslims.
--- Pakistanis, South Asians, and dementia Due to the sparse literature from Pakistan, it is helpful to consider transferable knowledge from research with the Pakistani diaspora. Researchers commonly group Pakistanis with people from the whole Indian Sub-Continent, referred to collectively as 'South Asians'. The diaspora, especially first-generation migrants, experience additional challenges compared to their counterparts in Pakistan, including language barriers, culturally insensitive services, and discrimination. These factors make access to dementia care complex even if services are widely available. Giebel and colleagues argue that South Asians experience similar barriers to accessing mental health services in a variety of countries, and that the literature from one country is transferable to another. Common topics in the literature are understanding of dementia, stigma, and care pathways. Data gathered from people with a diagnosis of dementia and their family caregivers in Pakistan reveal that understanding of dementia is low. Among people from Karachi who had dementia or were caring for someone with dementia, only half were aware of the diagnosis. Similarly, South Asian carers of people with dementia in the UK reported not being familiar with the term dementia before their family member was diagnosed. Dementia is frequently thought to be a normal part of ageing in Pakistan, and among South Asians in the UK, in Norway, and in Canada. The understanding of the causes of dementia was poor, with attributed causes including a contagious disease or tension due to a family rift, past actions of the diagnosed person indicating blame, stress or shock, the evil eye, not praying, or lack of family care. However, some participants drew on the biomedical model of dementia. Explaining the diagnosis to the person with dementia and their carer is recommended in the UK.
Similarly, guidelines for clinicians in Pakistan recommend that the diagnosis and prognosis should be explained to the carer. No mention is made in these guidelines of revealing the diagnosis to the patient, however. Despite the guidelines, the studies above imply that the information was not conveyed clearly enough. Giebel et al. conducted a literature review of factors that impede access to mental health services for South Asian older people in the UK, US, and Canada. The review found that "South Asian culture" stigmatised mental illness, relating it to a religious punishment. This inhibited access to services, because family members chose to care for the individual at home for fear of stigmatisation. In the UK, South Asian carers reported that the whole family was stigmatised if somebody had dementia, leading to concealment of the person with dementia. One cause of stigma was fear that the person with dementia could give the disease to others through magic. Pakistanis in Denmark were found to have more stigmatising attitudes towards dementia than other ethnic groups. People from minority groups are typically less likely to use dementia services. In Canada, dementia services were accessed through several pathways, including referral from medical practitioners seen for a different condition, advice from family members who were health practitioners, or after a crisis. In Scotland, diagnosis was obtained after seeking help from the general practitioner for memory loss, or when the GP independently recognised dementia symptoms while being consulted for another condition. Access to care depends on availability, which is patchy in Pakistan. There is only one neuropsychiatrist specialising in dementia in Pakistan. In 2014 it was reported that there were only two dementia clinics, one day care centre, and no residential care facilities suitable for dementia in Pakistan.
Low levels of awareness of dementia among medical practitioners in Pakistan have also been reported. A consequence of low awareness among physicians is difficulty in obtaining diagnosis and treatment. As commonly found across the world, the majority of day-to-day care for people with dementia in Pakistan is provided by family, sometimes supplemented by paid attendants. In the UK, South Asian older people were more likely than white participants to think that family should be the sole providers of care. Other UK studies have found that a sense of care as obligatory has been linked to Islam. In Pakistan it has been reported by family carers that failure to care would be punished, and fear of God was referred to. Furthermore, the use of institutional care in Pakistan was thought to be sinful and unlucky, and in the UK was viewed by South Asians as a "living hell" and as something only white people would use. --- Obligatory daily prayers in Islam Although the symptoms of dementia are not determined by religion, there may be specific ways in which they can cause distress among Muslims. Muslims are expected to pray five times a day at times determined by the position of the sun. The person with dementia's deteriorating sense of time orientation could affect their knowledge of when to pray. Prayers are conducted facing towards Mecca. As people with dementia lose the ability to orientate themselves in space, locating the direction of Mecca might become difficult. A person is expected to ritually wash before prayers, but dementia affects the ability to perform basic tasks such as bathing. Prayers should be performed in an appropriate place. In Scotland a Muslim carer reported that their house was dirty due to the person with dementia's incontinence, and was no longer an appropriate place for prayers. Verses of the Quran are recited, most commonly from memory. In addition, there is a specific number of rakats (units of prayer) one should carry out for each prayer time.
Memory loss, one of the earliest-developing symptoms of dementia, may affect the ability to remember how many rakats have been completed, and the ability to recite verses. A Pakistani Muslim man with dementia in the UK reported that he had ceased attending mosque "for fear of 'doing wrong' during worship". Prayers are meant to be conducted with a clear mind. This usually refers to being free from alcohol or drugs, but it has also been extended to refer to being free from cognitive impairment, according to Malaysian research. However, it is not clear if this exemption from prayers is accepted beyond Malaysia. Furthermore, people with dementia may wish to continue praying as long as they can, even if they are exempt. In summary, the symptoms of dementia mean that people with dementia may not be able to follow all the rituals around praying. This difficulty fulfilling the expectations may lead to guilt and distress among the person with dementia and their family carers. --- Research focus As this was an exploratory qualitative study, formal hypotheses were not applied from the outset. Instead, we set out to explore respondents' experiences with help-seeking, understandings of dementia, experiences with stigma, and the role of religion among people with dementia in Pakistan. --- Design and Methods The findings in this paper are part of a larger project on understandings and experiences of dementia in Pakistan, carried out by a UK-based research team and project partners in Pakistan. This paper focuses on interviews with people living with dementia. There were two urban research sites in Pakistan: Karachi and Lahore. In Lahore the participants were recruited through Alzheimer's Pakistan, while in Karachi the participants were recruited through a hospital-based dementia clinic. Purposive sampling was used to identify patients who had a recent diagnosis of mild dementia, and a balance of men and women was sought in the two locations.
Ten interviews were conducted in Karachi, and ten in Lahore; the participants' characteristics are set out in Table 1. All interviews involved one person with dementia, many of whom were accompanied by at least one caregiver. The caregiver was usually a family member, though in one case they were a paid live-in assistant. Although the focus of the interviews was on the experiences of the person living with dementia, in some cases the caregivers helped to answer. This happened where the person with dementia was unable to recall or articulate the answer. This does not undermine the aims of this project, because the focus of the interviews remained on the experiences of the person with dementia. In other words, the questions were about the lives of the person with dementia rather than the impact dementia has had on the caregiver. This can be thought of as similar to a proxy-report, which has an important role in understanding the experiences of people with dementia who may not be able to share their experiences due to memory or language problems. Caregivers can provide a good account of the person with dementia's experience; however, there are cases where caregivers provide a more negative account than the person with dementia would. Bearing this caveat in mind, a minority of the quotations presented here are from the caregiver answering on behalf of the person with dementia, and these are clearly labelled. --- [Insert Table 1 about here] The semi-structured interview guides, participant information sheets, and consent forms were drafted in English by the research team in the UK, translated into Urdu, and tested for ease of comprehension in Pakistan through a patient and public involvement process. The translation was checked for accuracy by an Urdu-speaking member of the UK research team.
The interview questions asked about how participants first recognised they had memory problems, their understanding of what caused their memory problems, how they feel about it, how their family and people in the neighbourhood have responded, their access to medical services, any changes they have made to their lives, and their advice for other people with the same issues. Ethical approval was obtained from both a UK and a Pakistani university. Pilot interviewing was observed by members of the UK research team during a site visit to Pakistan. Data were collected in Urdu by project partners, and translated into English for analysis. Thematic analysis of the interviews was carried out in English by the UK research team, drawing on a pragmatic paradigm. An open coding process was facilitated by NVivo 11 software. Coding began at the descriptive level, and nodes of similar meaning were grouped under higher-order category parent nodes. A mixture of anticipated and unexpected concepts was coded, following a combination of deductive and inductive perspectives. For example, the node Understanding of diagnosis was created because of the research aims, while the node In God's hands was unanticipated. As a result of the combined deductive and inductive approach, the themes reported in this paper are linked to the research aims and interview questions, but have also been directed by the data. Initial coding was agreed by all four members of the UK research team. Once a coding scheme was agreed, coding was completed by the first author. The first author's emerging analytical thoughts were discussed and refined within the team using a combination of email, face-to-face discussions, and formal team meetings. Analysis of interview data collected by another person poses difficulties; since the analyst was not present during the interviews, they did not co-create the data.
Furthermore, in this case the data were translated into English, and so some of the original nuance of meaning may have been lost. In order to overcome these difficulties, the first author discussed potential interpretations of the text with co-authors familiar with the Pakistani, Urdu, and Islamic context. The original interviewers were also consulted for interpretation of the text. Differences in interpretation were resolved through this technique. --- Results The themes include how participants obtained their diagnosis, their understanding of the meaning of their diagnosis, their experiences of stigma, and the interplay between religion and symptoms. --- Pathways to diagnosis Obtaining a diagnosis of dementia is difficult even in countries with dementia policies and well-established mental health services. This section explores how the 20 people living with dementia were diagnosed. There were three main pathways to diagnosis, which were named 'Sought help for dementia symptoms', 'Already in the system', and 'Serendipity'. As the name suggests, people who sought help for dementia symptoms had the most straightforward pathway. Either the person with dementia or their caregiver recognised there was something not right about the symptoms, and they accessed health services, often with their family physician as a starting point. In some cases these participants were highly educated, or had doctors in the family, leading to familiarity with medical concepts and services. This demonstrates the importance of social and financial capital. "Caregiver: Mother is very conscious about this matter. She has got a lot of knowledge about every field. She reads whole newspapers, medical surveys … When she forgot, she said that 'There is something going on in my brain'. Then she said, 'Take me to the neurologist'. She declared that 'I am having this problem so I should consult someone'. She has the knowledge."
Those termed 'Already in the system' also recognised the unusual nature of the symptoms, but these participants were familiar with mental health services. Some of them had pre-existing mental health issues, while others had family members with mental illnesses, and knew about Aga Khan Hospital's psychiatric department in Karachi. This meant it was easier for this group to access services for dementia. "Interviewer: Can you tell me about how you got in touch with the clinic about your memory? Person with dementia: My daughter was already treated by Dr [name] in Aga Khan so we thought to consult for me…as well." The third group, called 'Serendipity', were guided towards services by outside influence. Typically, they sought medical help for a different health problem, and the health professional recognised there was an additional problem and advised them to seek out a specialist. This group did not recognise the dementia symptoms as something unusual that required mental health services. Nonetheless, they successfully obtained the diagnosis. --- "Interviewer: What problems did you face? Person with dementia: I had a severe headache … Then, I saw a doctor. The doctor offered treatment: take this [painkiller], take that; no improvement occurred …. When all this increased very much then one doctor said: She has a psychological disease and you must immediately take her to a reputed psychologist so that doctor will give medicine to stop these symptoms." In summary, knowledge of medicine and medical services facilitated the pathway to diagnosis. Those without this knowledge had to rely on symptoms being recognised by doctors. --- Understanding of diagnosis All of the people with dementia in this project had a diagnosis of dementia. We could therefore expect them to be among the most well-informed people about dementia in Pakistan. 
However, the data show that some of the people with dementia did not know that they had dementia, while some of the caregivers did not fully understand what dementia means. Conversely, some of the participants had a biomedical understanding. Many of the people with dementia and their caregivers had a biomedical understanding of the causes of dementia, relating it to stroke, and recognised that it is not a natural part of ageing and that the symptoms will get worse. --- "Interviewer: What is this disease about? Do you know anything about it? Person with dementia: Yes she [the doctor] told me. --- Interviewer: What? Person with dementia: That it is related to forgetting things. [The doctor told me] that with time slowly, slowly the patient's condition happens to be such that they even forget that they have to go to the washroom. This is how she explained it to me." On the other hand, several people with dementia attributed it to various causes, including shock, depression, stress, old age, or thinking too much. --- "Interviewer: What has happened to you? What are the issues? Person with dementia: Just my confusion is increasing day by day due to over thinking." Similarly, several caregivers attributed the causes of dementia to old age, shock, tension, or bereavement. Some caregivers had a spiritual view of dementia, arguing that the symptoms were caused by 'black magic', or were part of preparing for the next life. "Caregiver: He is getting ready for the next world. The next world is the world of imagination and when they start living in the world of imagination then they get tired of everything of this world and it becomes useless and they start forgetting things of this world." Despite a formal diagnosis of dementia being made, participants varied in their understanding. This may indicate that the diagnosis was not explained to them in the clinic, or that participants' lay conceptualisation of illness is preferred over the biomedical model.
--- Stigma The interviews asked how other people have responded to the dementia. Negative experiences from community members were unusual, and only one person had experienced neighbours treating the person with dementia badly, with some people making fun of the symptoms. Two people speculated that neighbours might make fun of them behind their backs, but they had not actually experienced this. In contrast, most participants reported being treated particularly well by neighbours. Several people said that they were helped by friends and neighbours because of the symptoms. For example, one person was helped home by neighbours when they got lost coming back from the mosque, while another was helped by a shopkeeper. Community members, therefore, were mostly very positive towards the people with dementia. However, such was not always the case within families. Some family members became angry because they thought that the person with dementia was pretending. "Person with dementia: I tell people that I have this disease, they don't trust me. Caregiver: Her brothers disagree. Person with dementia: My brothers and their wives say that I am totally fine." Others reported that their family members were kind, respectful, and supportive, and this attitude was linked by participants to being educated. Overall, there was greater negativity from family members than from neighbours or community members. This could reflect carer stress, or possibly people with dementia interacting less frequently with community members than with family. --- Religion An unanticipated issue arising from the interviews was the difficulty caused when dementia symptoms interfered with obligatory daily prayers. Some people with dementia explained how they forget to pray, or during prayers their mind wanders and they have to start again. In particular, a difficulty with orientation in time interferes with knowing the correct time of day to offer prayers.
--- "Person with dementia: Sometimes I don't have any idea that it's morning. To offer prayer is also very difficult. I usually ask a family member … 'Have I offered Zuhr [second daily] prayer?' If they see me while offering they say 'Yes, you have done it'." Two others explained how they had forgotten the parts of the Quran that they had previously memorised. "Person with dementia: Like in the Quran I did hifz [memorizing]. There are very long surat [verses] in the Quran so before that [the dementia] I have remembered all the surats but now I just forget." These participants' insight into how their memory problems affect their ability to perform daily prayers was very upsetting to them. In addition, one caregiver reported how the person with dementia repeatedly asked their family members whether or not they had prayed yet, while another refused to pray when told it was time because she believed the caregiver was lying. Some caregivers reported that the person with dementia cannot remember the number of rakats that should be performed and has to ask. One person with dementia tried to recite verses from the Quran in the bathroom, which is an inappropriate place to pray. Another caregiver explained how difficulty with orientation in space meant the person with dementia was unable to lay their prayer mat correctly towards the Kaaba. Other ways in which religion was discussed included as a reason for providing informal care, as a strategy for coping with distressing symptoms, and trust that whatever happens is according to God's will. In many ways these findings are similar to those of past research and are not specific to Islam. However, the distinct aspects in which they were voiced among our participants tended to focus on the topic of fear. One caregiver explained that the reason they look after their relative with dementia is "because of fear of God and our good upbringing" and because "Allah is still watching".
This indicates that in this case caregiving is not a duty taken on proudly, but instead is shouldered in fear of God's retribution. In summary, religion shaped participants' response to dementia in some ways. However, when dementia came into conflict with fulfilling religious obligations, challenges occurred that remained unresolved for these participants. --- Findings summary Overall, the findings reveal that the participants with high social capital were more easily able to recognise symptoms and seek appropriate help. Participants without this capital still received a diagnosis, but through a more complicated process. Lay understandings of dementia held importance for participants despite their engagement with medical services. Stigma from the community was less common than might be expected from the literature. Finally, the practice of Islam was seriously impacted by the symptoms of dementia. --- Discussion and Implications This paper adds important insights to the literature on dementia in LMICs such as Pakistan. Although services are few and accessible only to those with resources, these interviews demonstrate that it is possible to successfully obtain a diagnosis of dementia in Pakistan. A segment of the population already had awareness of dementia and other mental illnesses, facilitated by education and family connections, and this group might be expected to obtain a diagnosis. More encouragingly, other participants who did not recognise their symptoms were identified by health professionals and referred on to specialist services, similar to research in Scotland. This demonstrates that there are health professionals in Pakistan who have the knowledge to recognise dementia, which is contrary to the past finding that medical professionals in Pakistan are untrained in dementia.
Although the present study is qualitative and not intended to generalise to the whole population of Pakistan, it does provide a more positive perspective on accessing dementia care than previously thought. Having said that, there are likely to be people who are not diagnosed and have been missed by healthcare professionals, and a national screening programme would be required to understand the scale of this problem. Following the Kleinman model, asking people what they believe caused their disease can help to provide better care. A common finding from literature in Pakistan, and from South Asians in other countries, is unfamiliarity with dementia before diagnosis, and a belief that dementia symptoms are a normal part of ageing. The clinical guidelines in Pakistan recommend that doctors tell the caregiver about the diagnosis and prognosis, but do not mention telling the person with dementia. As all people with dementia in the present study had a diagnosis, it might be anticipated that awareness of dementia would be high. However, the data show that understanding of the causes and course of dementia was mixed. Some participants adhered to a biomedical model, which functions to give them access to health and support services. Others argued that dementia was a normal part of ageing, which allows them to avoid stigma and blame. Some participants attributed their biomedical understanding of dementia to their educational level and family members with a medical education. Participants spoke about 'uneducated' people being more likely to misunderstand the symptoms, or treat the person with dementia poorly. This finding demonstrates the importance of social and human capital in obtaining treatment for dementia, and shows how this conceptualisation of dementia enabled participants to contrast themselves favourably with others in society.
All of the participants were from an urban setting and fairly affluent, so it is possible that more rural, poorer participants would not have the same level of social and human capital and thus be disadvantaged in accessing dementia care. Previous research has reported high levels of stigma around dementia in Pakistan, and among South Asians in other country contexts. In the present study stigma was less prevalent. Participants reported being treated kindly in the neighbourhood, where people ensured the person with dementia got home safely or received forgotten groceries. People with mild dementia did not seem to be shunned by society in our sample, which suggests forgetfulness or disorientation were not behaviours that attracted stigma. It was less clear from our data if symptoms of more severe dementia would be met with the same kindness. Contrasting with the helpful neighbours, participants reported being treated poorly by family members. This could potentially be linked to family members seeing the person with dementia more frequently and at the more severe stages. This closer contact might lead to frustration on both sides, especially if family members do not fully understand the reasons for symptoms. The issue of religion was particularly illuminating. Previous research on Muslims and dementia has touched upon the idea of incontinence disrupting the cleanliness necessary for prayers at home, or the fear of doing something wrong while in public at mosque. The present data demonstrate the impact of symptoms of dementia on the obligatory daily prayers of Islam. These included difficulty with counting rakats, orientation in time to know when to pray, orientation in space to know the direction of Mecca, and judgement about appropriate places to pray. All of these issues are specific to Islam. Previous research shows similarities between religions in terms of drawing on religion as a reason to provide care, or to cope with carer burden.
However, the present study is the first to our knowledge to identify the interaction between dementia symptoms and Islamic daily prayers, and how this causes distress among people living with dementia and family caregivers. Some caregivers assisted their family member to pray by laying the prayer mat, or reminding them when it was time to pray. This kind of enabling behaviour was usually responded to well by people with dementia, and may be an important avenue for intervention in caregiver support. It has been argued that people with cognitive impairment are exempt from daily prayers. No mention of an exemption from prayers was made by participants, so it does not seem to have been part of the discussion about dementia with their physicians. The authors of the present study contacted an Alim who confirmed that a person who does not have control over their mind is exempted from following the obligatory prayers. However, it is important to be mindful that people of different denominations within Islam will have different interpretations of their religion. Having said that, the exemption ought to be acceptable in Pakistan because the Alim consulted was from Pakistan and familiar with the way Islam is practised in that country. However, there is no known formal position taken by a religious institution respected by all denominations in Pakistan. The participants in the present study did not know about the exemption, and it caused them guilt and distress when prayers were missed or performed incorrectly. We therefore recommend that faith leaders in Pakistan engage with psychiatrists, neurologists, and geriatricians, and provide the public with information on prayer obligations, and clarity on what dementia is and what should be done about it. Such a strategy has previously proved effective in increasing the take-up of polio vaccinations in Pakistan.
Having said that, it could also be recommended that people with dementia are enabled to continue to perform the prayers as long as they wish, as they may gain comfort from the routine and spirituality. Previous research shows the importance of a familiar routine in dementia care , and that meditation is beneficial to cognition . Caregivers could enable prayers by assisting with ablutions, prayer mat placement, and audio recordings of Quranic verses. In cases where guilt and stress are increasing, however, guidance about exemption may be helpful. --- Limitations The limitations of the present study include the small sample. However, the purpose of this qualitative research was not to generalise, but instead to provide insight into a little-studied population and suggest avenues for future research and policy intervention. This goal has been achieved. Having said that, the similarity of findings with previous studies in Pakistan and beyond indicates transferability. Secondly, the sample achieved was quite wealthy and well-educated. There may have been quite different results in a more mixed socio-economic population. A third limitation is that only some of the research team were involved in data collection. The research team has done its best to overcome the distance between ourselves and the original data by dissecting issues of interpretation of text, and members of the research team have the advantage of familiarity with Islam, Urdu, and Pakistan. In future research it would be advantageous to conduct the analysis in Urdu, before translating into English, but this was not logistically possible in the present study. --- Recommendations This paper has presented new data on the experience of people living with dementia in Pakistan, including their sometimes complex pathways to diagnosis and understandings of their diagnosis. Valuable new knowledge about the interaction between dementia symptoms and obligatory prayers in Islam has been generated. 
Recommendations for practice and policy are to continue to educate clinicians to recognise signs of dementia and refer on to specialist services appropriately, and to improve access to specialist services for people of limited financial means. Enlisting the help of religious leaders and clinical experts in increasing public understanding of dementia, where to seek help, and what it means for daily life, would be advantageous. Finally, more research into dementia prevalence and treatment effectiveness in Pakistan is needed, especially in rural areas that typically have less access to care and awareness campaigns. --- Ethical approval This project was approved by the ethical review board of the University of Southampton [25793] and Aga Khan University [4819-Psy-ERC-17].
The prevalence of dementia will increase in low- and middle-income countries like Pakistan. Specialist dementia services are rare in Pakistan. Public awareness of dementia is low, and norms about family care can lead to stigma. Religion plays a role in caregiving, but the interaction between dementia and Islam is less clear. Research Design and Methods: Qualitative interviews were carried out with 20 people with dementia in Karachi and Lahore. Interviews were conducted in Urdu, translated to English, and respondents' views on help-seeking experiences, understanding of diagnosis, stigma, and religion were analysed thematically. Results: Although some people with dementia understood what dementia is, others did not. This finding shows a more positive perspective on diagnosis in Pakistan than previously thought. Help-seeking was facilitated by social and financial capital, and clinical practice. Stigma was more common within the family than in the community. Dementia symptoms had a serious impact on religious obligations such as daily prayers. Participants were unaware that dementia exempts them from certain religious obligations. Discussion and Implications: Understanding of dementia was incomplete despite all participants having a formal diagnosis. Pathways to help-seeking need to be more widely accessible. Clarification is needed about exemption from religious obligations due to cognitive impairment, and policy makers would benefit from engaging with community and religious leaders on this topic. The study is novel in identifying the interaction between dementia symptoms and Islamic obligatory daily prayers, and how this causes distress among people living with dementia and family caregivers.
Introduction Earlier work exploring the evolution of urban gay districts sought to explain their genesis and growth via amenity and disposable income-based reasoning, contingent on some previous "historical accidents" in lesbian and gay male settlement patterns. Later work by Collins has drawn on key concepts in the New Economic Geography as propounded by Krugman among others to posit a somewhat more comprehensive, critical-population-size-based explanation to account for both the emergence and the stages of development of various urban gay villages or districts. Working at this general conceptual level is clearly distinct from the current vogue for detailed ground-level case studies and serves different purposes. That said, the latter can clearly inform the former. More specifically, we address the question as to whether one can model changes in urban gay areas without formally examining them directly, relying solely on broader macro-level social and technological trends to explain their decline or re-configuration. This study is premised on the view that such a macro-level focus is both legitimate and feasible, and it addresses a relative lacuna in work devoted to the larger scale processes shaping the decline and re-configuration of urban gay areas. The work is informed by multiple strands of evidence which, although not necessarily providing conclusive evidence on the pattern and strength of causality of these broader social and technological trends, do combine to highlight the likely mix of important contributory factors leading to the decline and re-configuration of such urban gay spaces. A reading of the developmental model in Collins might suggest that such 'decline' eventually takes the form of integration and assimilation. Others have suggested actual displacement of lesbian and gay space has taken place. Accordingly, the scope for geographical transferability of the model in recent years has been reasonably criticised and questioned.
That said, universal direct applicability without at least some modest cross-cultural modifications was never explicitly claimed to be readily feasible. Nevertheless, taken together as a body of work, in historical retrospect, these studies can be argued to have at least offered a range of plausible, though not necessarily mutually exclusive, possibilities for analysing the phenomenon of urban gay districts and their developmental trajectories. They may also potentially help inform egalitarian, socially liberal and enlightened public and planning policy seeking to nurture and sustain these urban amenities and resources for their citizens. However, the rapid pace of various distinct and often overlapping social, urban and technological changes that have taken place in the opening two decades of the twenty-first century already warrants a wholesale reappraisal of the status and likely growth paths of these districts and seriously questions the ongoing validity of the extant academic literature as a guide to future development. In this study, the key sources of such change for urban gay villages and districts in the specific context of English towns and cities are set out; these are acknowledged to be largely the purview of gay men rather than of the full spectrum of the gay, lesbian, bisexual, and transgender population. By retaining this wholly English geographical focus, an attempt is made to control for some cross-cultural factors influencing the phenomenon under scrutiny. That said, Whittemore and Smart have also found evidence for a similar dispersal/deconcentration and potential decline narrative through tracking gay adverts for property rentals and for-sale properties in a US city. In this paper, however, focussing primarily on England, the outcome and future ramifications for the viability of such districts are explored from an explicitly economistic perspective in the light of various strands of secondary evidence.
These include, inter alia, some spatial and regional disaggregation of "British Social Attitudes" survey data; lesbian and gay news sources; the numbers and composition of social and partner search apps readily available to download; and some statistical trends in public house closures and in social network and relationship partner search methods. Additionally, some online presence count data is presented to inform the discussion on social and partner search in a sample of English villages, small and medium-sized towns, and large towns and cities. The key findings that may be distilled from this study are threefold. Looking forward, the developmental model of urban gay villages in England as set out in Collins no longer provides an adequate guide to future development trajectories. The future possibility raised in that study of a declining phase in urban gay districts and a long run equilibrium consisting of a relatively small group of large urban gay villages in cities and a larger number of much smaller gay districts warrants wholesale revision. More specifically, the 'declining' phase in urban gay districts in England has seemingly already taken hold at a more rapid pace than then anticipated. Scrutiny of their presence, decline and relatively recent absence in many towns in England suggests they are, in the main, disappearing. Liberal social change, the growth of many and varied openly gay and lesbian orientated recreational and social clubs and societies, web platform social networks and the commonplace ubiquity of friend and partner search apps on smartphones have reduced the demand for, and thus rendered seemingly redundant, most smaller gay districts. In essence, almost any home, café and pub can potentially feature, to a very limited extent, some of the functions of physical gay venues.
Indeed various studies surveyed herein suggest that websites such as Gaydar, apps such as Grindr, Scruff and Growlr, and app versions of some websites such as Gaydar, serve as important social and meeting spaces in gay men's lives. They can be chosen to displace the regular need for specific physical gay meeting venues. Arguably, they are reducing the motivation for, and frequency of, long distance leisure commuting or migration to larger towns and cities with their larger population size and thus better partner search matching on specific characteristics of desire. Niche focussed enterprise has also been one of the greatest beneficiaries of the shift to online commercial platforms, reducing overhead costs to producers/suppliers and reducing both out-of-pocket and time search costs for niche consumers. A lower bound estimate of web- or app-enabled partner search among gay men for meeting new sexual contacts is 40-50%. Inevitably even these seemingly high figures do not take into account the numbers of very infrequent and highly covert users, some already ensconced in gay male or otherwise heterosexual relationships. The paper is organised in the following manner. In section 2 more recent theoretical arguments and broader commentary on the forces changing the size and character of urban gay villages and districts in the twenty-first century are outlined and briefly considered. The following section draws on secondary and primary evidence to set out some 'stylized facts' that better inform the general future trajectory of urban gay districts/villages in England. In section 4 the stylized facts are used to revise and extend, from the 'Integration' phase, the developmental model of urban gay villages set out in Collins.
This is undertaken to better take account of the English experience of urban deconcentration and physical decline, but also to recognise the concomitant experience of social and market diffusion of lesbians and gay males through many other physical and virtual spaces. It is contended that this is characteristic of the movement to a so-called "post-gay era". The fewer core urban gay districts/villages that are left in larger city locations often feature a legacy of attachment, via psychically important physical commemorative markers, for visitors and for lesbian and gay households residing in them or at least within social and leisure commuting distance of them. A summary and some concluding remarks are offered in the final section. --- The Development of Urban Gay Spaces in the Twenty-First Century: Brief Retrospect and Prospect Since 2000 there have been a number of case studies drawing out differences in the character and development of urban gay spaces around the world. In the context of mature industrial economies there has been considerable diversity, including observations of recreational specialisation in world cities with multiple gay districts. Others have observed active "re-making" of some gay districts. Kanai and Kenttamaa-Squires find this re-making has resulted in a "…LGBT-friendly mixed neighborhood increasingly shaped by the pro-equality, but primarily pro-tourism and redevelopment, politics of the City" [p.13] and shaped by the forces of "homonormative entrepreneurialism." There have also been calls for greater attention to explore the features and dimensions of gay spaces and lives in the multitude of ordinary towns as distinct from large cities and metropolises, or what otherwise might be termed the tyranny of 'metronormativity'. An emerging and increasingly recurrent theme relates to overlapping narratives of physical decline of some gay spaces.
For Collins such decline in the English context was envisaged to be part of a development trajectory of assimilation of the area into the fashionable mainstream. This process commences with an urban area already featuring urban decline, which then progresses through a number of broad stages of economic enterprise. These stages feature the presence of activities characterised by sexual and legal liminality; the expansion of gay male social and recreational opportunities; the widening of the service-sector business base to meet the demands of a growing gay/lesbian market; and then ultimately assimilation as these businesses are patronised by the fashionable mainstream. The growing gay/lesbian market demand was hypothesised in Collins to be an artefact of cumulative self-reinforcing population growth, since a larger gay/lesbian population and its attendant commercial support services provide, for some individuals, a source of increased amenity value, in turn drawing in further gay/lesbian in-movers. Yet since much of this increased amenity is also of appeal to the fashionable population mainstream, the seeds of a move to assimilation are also potentially sown. Some work attributes or characterises this assimilation as deliberate encroachment and appropriation of distinctive LGBT space. For some others it is in large part due to greater and willing adoption of shared sexuality social spaces and also more isolated or transient social spaces in other city locations, and an increasing focus on the experience of more overtly residential gay neighbourhoods as opposed to a reliance or focus on more traditional gay village services, typically comprising a mixed land use clustering featuring several lesbian/gay entertainment venues and retail outlets.
Speculating on how gay identities have been constituted and how they may change in the future is not a particularly recent practice, but the study of Nash seems to raise the notion that these spatial changes may be conceived by some very specific segments of the gay male population as part of an inexorable movement to an increasingly post-gay era. Nevertheless, she still contends that "….physical places, no matter how contested still remain a touchstone." In a similar vein, Ghaziani highlights the continuing perceived importance of gay districts for housing "anchor institutions" despite considerable and ongoing residential out-migration. Further, he shows that in these districts physical markers of commemoration can provide a clear indication of an urban sexual culture with a durable legacy valued by many individuals. This durability broadly aligns with the concept of vicarious citizenship set out by Greene, which may "…help explain why gay neighborhoods remain relevant among certain LGBT populations who, for a variety of reasons, select into neighborhoods outside established gay areas." [p.1]. For Ghaziani the term post-gay can be "… a mode of self-identification, a way to describe the features of a specific space, a characteristic of an entire neighborhood and a way to think about the zeitgeist of a historical moment" [p.374]. Thus people who identify as post-gay are argued to be less territorially defensive of gay spaces and more open to sharing these and any other social spaces in their city, where clearly distinguishing sexualities is simply not that important or felt to be necessary. That said, post-gay does not necessarily fully translate into 'post-discrimination', and this seems inevitable given that trends in social attitudes may tread an often slow cross-generational path. Thus residual intolerance is likely to remain a durable feature in some specific segments of a society's population, and may be revealed in routine housing market processes.
Such intolerance has been typically associated with strong religious conservatism . Few geographical studies beyond Ruting that focus on the development trajectories of urban gay spaces have, hitherto, moved on to explicitly consider the role of social networks and various partner search websites and apps as potentially significant contributors to the processes of decline, reduced in-migration and active deconcentration in gay districts/villages. Yet the literature at the nexus of academic social science and on-the-ground public sexual health practice shows that this channel of interaction is now substantial. Accordingly, public health workers have had to shift resources and markedly change their modus operandi to make contact with the vast majority of men who have sex with men . MSM denotes a population beyond men who currently identify as gay or bisexual and includes those who are ostensibly in heterosexual relationships but engage at least intermittently in homosexual activity. Social apps may be hidden or masked on hand-held mobile devices and website histories cleared systematically, such that this aspect of their lives can be sustained covertly by technological means. Likewise, young MSM may organize and explore their sexuality more readily at lower cost, i.e. without necessarily requiring any recourse to extensive travel or migration to cities with large urban gay villages . They view their findings as "…helping to mitigate negative conceptualizations of Internet use among gay men." In terms of the theoretical sketch set out in Collins this would equate to both a reduced divergence between, and a shift outwards of, resource constraints for rural and urban residents in the sexual 'market place'.
--- Drivers of Change: Social Change and Socio-Technological Developments
There are several strands of relevant evidence that have contributed to changes in the geographical extent, development and pattern of usage of urban gay districts and villages in England. As with other segments of the population, key life transitions prompted by ageing, having children etc. will typically impact on residential location decisions. Yet there are also broader macro-social and macroeconomic trends, allied to widely diffused adoption of technological innovations, that can potentially influence such decisions. More micro-level individual behavioural changes in social networking and partner search may thus build on such macro-trends. The empirical evidence presented herein is not claimed to be completely definitive but does highlight trends warranting further scrutiny. These are considered in turn.
--- Changing Social Attitudes
Empirically exploring matters pertaining to sexuality generally, and same-sex relationships specifically, can be problematic for various reasons pertaining to sampling and survey design, but in the context of England and Wales there was even antipathy from the UK Government during the 1980s compounding these difficulties . Nevertheless, in particular years since 1983, based on responses to the British Social Attitudes survey question asking the degree to which homosexual relations are "wrong", NatCen show that across Great Britain attitudes have become more tolerant over time. They surmise that this indicates that the British population has reflected positively on legislative changes relating to civil partnerships and same-sex marriage, and also on public figures being open about their homosexuality.
However, they note that this trend has at times been "bumpy", such as during the 1980s, when there was debate, divisiveness and hysteria around the HIV/AIDS epidemic and the introduction of Section 28 in the 1988 Local Government Act, intended to support more 'traditional family values'. More recently, BSA data suggest a slight increase in intolerance, with the percentage of respondents reporting that homosexual relations were always wrong rising from 20% in 2010 to 22% in 2012. This was the first increase in this measure recorded since 1990, although the percentage responding that homosexual relations were not wrong at all continued to rise up to 2012. The increase in less tolerant attitudes could be due to a general shift in social attitudes, to more specific factors such as increased immigration, or to some combination of the two. The latter point is relevant to this study because the BSA survey indicates that recent migrants to the UK tend to have less tolerant attitudes to homosexuality, in part due to higher levels of religious conservatism among migrants. NatCen highlight the strong linkage between religious belief and tolerance. Though declared tolerance is growing irrespective of religion, it is greatest for non-religious individuals and lowest for those with non-Christian religious beliefs. There are, however, important nuances in the spatial differences in tolerance that have potential bearing on the impetus to move to areas perceived to be more 'gay-friendly'. Looking first at the mean attitudes to homosexual relations in the regions, analysis of the BSA microdata undertaken in this study indicates that attitudes have indeed become more tolerant in all regions over time, with large changes, for example, in Wales, Yorkshire and Humberside and the North East of England, all of which initially had the least tolerant attitudes in the BSA data series. i The smallest changes are in London, the region with initially the most tolerant attitudes.
However, for London the immigration story may be playing a more significant role in shaping the regional metric of tolerance. This will be explored further below. Delving deeper into the BSA data in order to highlight the extremes of the attitudinal spectrum on homosexual relations, we can see from Table 1 that although the data show a consistently declining trend in the percentage of respondents reporting "homosexuality is always wrong", this figure was actually highest in London in 2010-12, the region where it had been second lowest in 1985-7. In contrast, there have been large falls in the percentage in this category in other regions. The most startling decline can be seen in Yorkshire and Humberside, where just 13% considered homosexual relations to be always wrong in 2010-12, compared to 62% in 1985-7. Focusing on the percentage of the population reporting that homosexual relations are not wrong at all, we find that all regions show a consistently increasing trend in the percentage holding the most tolerant views. Consistent with the previous discussion, the most recent data indicate that the pace of change has been far greater in Yorkshire and Humberside and the North East of England than in London, South East England and the West Midlands. London in particular has witnessed a relative decline in the percentage of the population holding the most tolerant views: in the most recent period it ranked second from bottom, after the West Midlands, amongst the regions according to the proportion of respondents considering homosexual relations to be not wrong at all, whereas it was the highest ranked region in this category in 1985-7. Accordingly, this may serve to lessen, for some individuals, London's allure as the principal beacon of homosexual tolerance, and thus weaken the magnetic draw of London, and to some extent Birmingham, as a destination for lesbian and gay in-migration.
Turning to urban/rural differences, the BSA survey asks respondents to describe the place where they live, and this response has also been used to examine attitudes to homosexuality. Again, there are important nuances in the spatial differences revealed. From Table 2, it is clear that mean attitudes to homosexual relations were most intolerant in rural areas, small towns and city suburbs in the second half of the 1990s, but that the spatial pattern of attitudes has changed since then. In particular, the "big city" and suburban environs of London have seemingly become relatively less tolerant of homosexuality over recent years in comparison to other parts of Britain, with the sharpest increases in tolerance reported in areas outside London. Focusing on the least tolerant segment of respondents, those who think homosexual relations are "always wrong", there is a consistent pattern of decline over the period, apart from in the two London areas. For example, the percentage in this category fell by less than 8 points in the "Big City" parts of London between 1995-2000 and 2006-2012, compared to a fall of more than 18 points in the rural parts of England and Wales. Moreover, there was an increase in the percentage of respondents in suburban parts of London who thought that homosexual relations were always wrong between 2001-5 and 2006-12. There has also been an increase in the percentage with the most tolerant attitudes in urban and rural areas outside London across the three time periods. Over 40% of respondents in each of these areas considered homosexual relations not to be wrong at all in the most recent time period. This compares to a figure of less than 35% in the London suburbs, which also represented a decline on the figure recorded in the preceding period. In the "Big City" parts of London, meanwhile, there was no change in the percentage of respondents who thought that homosexual relations were not wrong at all across the last two periods.
Therefore, although attitudes towards homosexual relations have become far more tolerant right across the UK over the past couple of decades, there is evidence of a recent reversal of this trend in London. As mentioned previously, a possible explanation may be political change and/or the fact that parts of London now have heavy concentrations of immigrants, who are more likely to display conservative attitudes, especially if they have strong religious beliefs. Unfortunately, the BSA survey does not collect any information on country of birth; however, questions on ethnic group and religion are asked. Therefore, the combination of information on broad ethnicity and religion may provide some indication of the impact of demographic, as well as wider social, change. It is also well known that university graduates have more liberal attitudes and that London has long been a magnet for young graduates . Table 3 therefore presents information from the BSA surveys on the changing characteristics of respondents in the urban-rural areas within Britain over the same three periods examined in Table 2. Table 3 reveals that the percentage of ethnic minorities in London has risen far more rapidly than across the rest of Britain, increasing by more than 12 percentage points in both parts of the capital between 1995-2000 and 2006-12, compared to less than 3 percentage points across Britain as a whole. Mean attitudes towards homosexual relations among ethnic minorities were more or less unchanged over the three periods but fell quite steadily for Whites. A similar pattern is apparent if information on other religions is examined. ii
--- Rise of the Machines: Online and App-Based Partner Search
There is a substantial body of literature emerging on the role of new media in gay urban spaces, including Usher and Morrison, as well as a vast array of research on LGBT life online and the creation of community, among other things .
Cassidy, for example, shows that new media and online life are interwoven with offline life in bars and other cruising spots in quite complicated ways, and in ways that are reworking how material spaces are experienced. In terms of the narrower objectives bound up with our principally economistic perspective, digital-world engagement clearly presents scope and opportunities for lowering supply-side costs and lowering consumer search costs compared to 'physical world' transactions. In 2012, 80% of households in Great Britain had Internet access, a figure rising to over 90% in households with children, with the trend moving upwards. In terms of smartphone usage, 72% of people between the ages of 16 and 64 own a smartphone, with percentage ownership much higher among younger age bands . We may thus also infer that app users have a lower mean age. With population change, the usage trend is thus also upward. Accordingly, many independent and chain retail outlets have observed increasingly greater volumes of purchases from their own or competitors' online store operations. The lower overhead costs of principally web-platformed business and the low search costs of online ordering have, as Anderson observed, facilitated the emergence of a large number of niche enterprises catering to niche markets, alongside a much smaller number of larger-volume producers/suppliers catering to the 'mass market'. This phenomenon he describes as the emergence of the 'long tail', implying a Pareto-type distribution in the production and supply of goods and services. Commercial partner search seems to align readily with this long-tail thinking, though in this case supply has shifted almost entirely to web- or app-platformed operation, and there are sites and apps catering to broadly defined categories or search pools of partners as well as many more targeted to specific partner types .
All these sites typically have free basic membership and many offer 'premium' or enhanced functionality on a free basis for a limited time period. As such they offer users extensive and potentially immediate general or niche partner search at low cost across the entire country. Specifically in terms of apps, scrutiny of the Google Play Store on 21st February 2015 showed that for lesbian and gay dating there were 89 apps available to download with options for searching for lesbian or gay partners. Of these, 42 were specifically aimed at gay or bisexual men. There were 5 apps specifically aimed at women who identify as bisexual or lesbian, with much lower reported downloads. For Batiste, use of apps such as Grindr presents a "re-mapping of social space" which regularly reminds users of clear numerical evidence indicating that the public sphere is less heteronormative than might be assumed. Further, he argues that because these apps facilitate face-to-face contact through their geo-locational features, they nurture social networking and friendship bonds among gay men beyond territorially explicit gay spaces. For some gay and bisexual men, however, locationally aware mobile technology means that the virtual online world becomes intertwined with their offline world, such that it may complicate interactions by co-situating diverse groups of acquaintances, friends and family members. For some 'out' gay or bisexual men this poses no problems. For others, a need has emerged to manage potentially multiple online and offline identities which may, before these online worlds emerged, have been more clearly geographically distinct. Accordingly, there have now emerged apps which enable sight of users in a local area while 'blurring' the user's actual location.
It is also worth noting that there will be many gay men in long-term monogamous relationships who do not use these apps or websites at all, and thus usage cannot be grossed up to provide population estimates of the gay male population in a particular location. To provide a snapshot of usage among gay/bisexual men in a selection of English cities outside London, and also some medium-sized towns and villages across the regions, a simultaneous count of the online presence in these locations was undertaken by three individuals within a one-hour period on a Saturday . The search process was previously piloted and rehearsed. The stability of the count was checked for a sub-sample of the chosen locations near the start of the hour and towards the end. The count used two apps and one website, all of which permit online geographical search functionality across England by place name. Both Hornet and Gaydar are general gay dating and partner search channels, with Hornet claimed to be second only to Grindr in terms of number of downloads from the Google Play Store. The Growlr app is targeted specifically at one niche market: members and admirers of the gay 'bear' sub-culture. For an explanation and discussion of the nature, features and extent of this phenomenon see Hennen and Manley et al . The count survey did not include the market-leading app for gay/bisexual men, Grindr, principally because it does not offer the required cross-country search functionality. Even though gay men with mobile technology typically use a variety of such apps and thus have more than one profile, exclusion of the market leader means our data provide only an indicative lower-bound estimate of online activity during this hour. Yet this would also be the case even if Grindr could be used, since this app, alongside others, does permit users to apply 'filtering' options to exclude profiles with particular characteristics .
Profiles on Gaydar may also be set to be 'hidden' from search lists. Table 4 sets out these lower-bound estimates of the immediate online search pool at a variety of different-sized locations across England using the place name search function. Three of the city locations, namely Birmingham, Brighton and Manchester, which are major gay population centres, are defined search 'regions' within which people search in Gaydar. This means that the number of profiles specifically using these city names in their profile is actually far lower than the number in the surrounding area who also consider themselves part of the search pool of these places. Furthermore, in the free version of Growlr used, the maximum number of profiles that may be viewed in any given place name search is 124. So where this value is reported in the Growlr results, it is only a lower-bound figure of online presence. The scope to determine in Gaydar the number of profiles within a given location also provides some indication of the numerical extent of the search pool using that channel, and thus also the ratio of profiles visible online. For the medium-sized towns/cities in the count sample this value is fairly consistent, ranging from 16% to 23% of users online in those locations. This suggests approximately a fifth of Gaydar users were online within these medium-sized towns/cities during the count, and it may be that this figure translates across to other similar apps, including those catering to a more niche market. What is perhaps unsurprising is the small number choosing to be visible online at that time in the villages and small towns surveyed. However, for several of the medium-sized towns/cities the immediate search pool at that time seems fairly substantial in both the mainstream channels, Gaydar and Hornet, and also in the niche market app, Growlr. See, for example, the online numbers in Burnley, Chichester, Durham, Scunthorpe and Shrewsbury.
Thus in these locations, individuals with average tastes, but also many with more specific tastes, will likely have a reasonably sized 'backstop' search pool immediately available to initiate potential social or sexual contact. The use of this technology can therefore potentially impact on the trading vitality of local gay pubs, particularly among more income-constrained younger age cohorts.
--- Pubs
Historically, public houses in the UK have occupied a key role in supporting and building lesbian and gay communities and gay districts/villages, but now even the basic question of whether gay pubs are needed any longer has entered the realm of popular discourse . A lower or negligible frequency of visits to support partner search and social networking has been linked to more liberal social attitudes and the use of websites and apps for partner search. Yet irrespective of these phenomena, there has been a serial decline in customer demand for pubs and thus a steady decline in the number of pubs remaining in business . The total number of pubs in the UK has fallen steadily each year, from 67,800 in 1983 to 48,006 in 2013, even though beer sales have recently started to grow again, albeit not just from pub sales . Clearly, pubs catering for LGBT customers cannot have been immune to the social and economic forces contributing to this broader sectoral decline . These include changing tastes among the young for socialising in other types of venues and outlets and for other activities . It is difficult to obtain accurate and consistent data on the numbers of gay pubs, clubs and other licensed premises over time in Britain. Nevertheless, we provide some information on this pattern over time from the listings within the UK publication Gay Times, held at the British Library, at five-year intervals covering the period 1985 to 2005. The data for each March issue are presented in Table 5. The March issue was chosen to control for any post-festive period effects .
Between 2005 and 2010 the listing information moved completely online. In overview, Table 5 shows that the number of licensed venues grew from 1985 to 2000 and that growth was quite rapid in areas outside London. The growth appears to have continued in some medium/large cities up to 2005, but at a slower rate, whereas there was a noticeable decline in the number of venues in London as well as some falls in other cities. That said, there has been considerable flux or churn in the venues appearing in the London listings and in their location within London. iii
--- Stylized facts
Distilling the statistical and other evidence examined, it is possible to develop some stylized facts to help inform the subsequent reconsideration and extension of developmental models of urban gay districts/villages in England as typified by Collins . These are set out below. i) There is a trend indicating increasingly tolerant attitudes to homosexuality over time and space, spatially extending beyond a few core metropolitan regions typified by liberal social attitudes and durably permeating into regions formerly typified by more conservative social attitudes. ii) There is a spatial trend featuring increasingly tolerant attitudes to homosexuality moving across the urban-rural spectrum, which has been accentuated in some larger urban areas by population changes associated with immigration. iii) The diffusion of Internet access and mobile-based apps contributes to the erosion of the demand for commercially sustained physical gay spaces, venues and enterprises. iv) The diffusion of Internet access and mobile-based apps reduces the search and transaction costs for gay male 'partner' search across both broad and more niche 'bundles' of desired characteristics.
--- Integration, Decline and Diffusion of Urban Gay Spaces
Many large towns and cities in England have had sizable gay villages or districts with a relatively wide supporting services sector base.
That they have evolved in some locations, often from only one institution and often in off-central locations, has been explored in Collins . That evolutionary trajectory is characterized and depicted as Stages 1 to 3 in the developmental model contained therein. Taking a prospective gaze, it is possible to take account of the stylized facts established in the previous section to re-fashion the 'Integration' phase of that model and formally extend it. A gay critical mass population was deemed the key requirement for the evolution and sustainability of the network of gay villages and districts that came to be established in England in the twentieth century. Looking forward in the light of our stylized facts, this critical mass now seems destined to dissipate and diminish rapidly over time. As a consequence, physical decline is prompted in the network of gay villages and districts in England, alongside decline in the geographical extent, volume and variety of gay village and district services in any given location. The model extension is set out in Table 6. The model suggests that the retrenchment of such gay villages to a smaller commercial core can be conceived of simply as the outcome of revenue considerations. Those remaining seem likely to be those that have more diversified and hence resilient revenue streams, including significant spending from LGBT tourists and also a substantial social and leisure commuting segment of the LGBT population . This residual core is thus more likely to host the remaining physical 'anchor institutions' and largely serve to support the 'vicarious community' and 'vicarious citizenship' needs of a much more geographically extensive LGBT hinterland than was previously the case in the late twentieth century. Increasingly, the centre of gravity of the gay socialscape is envisaged to change.
It is suggested that the trend is for this to become highly spatially diffuse, becoming largely a feature of suburban areas and small and medium-sized towns across England. It is contended that the mainstay channel of LGBT social and sexual community will, in effect, primarily reside in the online world, while physically occupying mainstream social spaces whenever required. Rural settlements, over a longer time period, are also argued to become subject to these same socio-technological changes, as suggested by the stylized facts. Displacing much long-distance leisure/social commuting to urban gay villages, rural LGBT residents are thus anticipated to align more routinely with suburban or small/medium-sized town LGBT online social worlds and relationship search pools. As noted above in the discussion of evidence from the BSA survey, the spatial picture with regard to attitudes towards homosexual relations has changed significantly over the past two decades. The largest reductions in negative attitudes towards homosexuality have been observed outside of London, especially in rural parts of Britain. Allied to this convergence in attitudes, a trend towards more intolerant views of homosexual relations appears to have emerged in London in recent years, thereby reversing a pattern of ever more liberal attitudes towards homosexuality. Demographic change, particularly with respect to immigration, now seems to be playing an important role here and may be making London a less attractive location for gay men and lesbians, especially given the other influences identified in the paper. Ultimately the model characterizes physical and socio-technological change that triggers and traces a transition path from the integration of gay villages and districts into the fashionable mainstream through to a post-gay era where the clear momentum of gay social and sexual activity lies, for most individuals, well beyond city locations.
That said, in some of these cities that retain LGBT anchor institutions, there will still remain strong connectivity, denoted by the bonds and ties of vicarious citizenship.
--- Summary and Concluding Remarks
Developmental models and case studies of urban gay villages and districts since 2000 have been reviewed and reassessed in this paper in the light of substantial socio-technological developments and changing social attitudes. Decline and/or significant re-configuration has been widely identified. We find that there has even been some significant contraction in the number of entertainment and partner-meeting venues in the few larger city-based urban gay villages outside London. In the case of London there has been significant churn and re-location. For the larger city-based urban gay villages this pattern of decline and reconfiguration has taken place despite their benefitting from a larger, more diversified revenue base by virtue of social and leisure commuting from a wider regional hinterland and their status as domestic tourist amenities and attractions. Significant socio-technological changes are argued to have been an important contributing factor, irrespective of any perceptions or observations suggesting the successful outcome of revanchist appropriation of valorised lesbian and gay spaces for capital accumulation. That said, with contraction and decline, even if only due to structural changes in demand, retail and entertainment venue properties in gay villages are inevitably sold for similar or changed use targeting custom beyond lesbians and gay men. Furthermore, routine residential property sales by lesbian and gay households to heterosexual households must take place. Inevitably this can contribute to a change of character for some gay districts and villages.
Additionally, in the specific context of England, where a cornerstone feature of gay villages and districts, the gay pub, was considered instrumental in the evolution of gay spaces as they moved towards a critical gay population size, the situation has dramatically changed. There has been broad sectoral decline in the wider UK pub sector. Coupled with lower pub usage by smartphone-using gay men and lesbians , this has prompted many gay pub closures and thus a discernible contraction in the level of provision. Analysis of the BSA microdata undertaken for this study has revealed increasing tolerance of homosexual relations across the country and across the urban-rural spectrum. Allied to the technological developments afforded by the Internet and mobile geo-locational social and partner search apps, the lifestyle pressure for gay men and lesbians to migrate to big cities, or to engage in high-frequency long-distance social and leisure commuting, has been eroded. Hence, while the critical gay population mass argument may have had considerable explanatory legitimacy in an historical perspective, looking forward this argument seems to have frayed and been undermined. Online social networks and partner search apps make both broad and niche desired characteristics more readily available, even in outer suburbs and smaller towns and cities across England. This has been demonstrated empirically through the results of a snapshot count survey of gay male 'online presence' across the English regions, in settlements of various sizes. More recent data from the BSA survey suggest that London has also seen some growth in the segments of its population holding the least tolerant attitudes to homosexuality. For some lesbian and gay households this can be a discernible tension that may serve to dampen London's specific appeal as an attractive long-term residential option.
Possible explanations for this finding have been explored and may be linked to high levels of immigration into London, especially in connection with individuals holding more conservative religious attitudes. This is also likely to be compounded by high overall house prices in the capital . Various strands of supporting empirical evidence have been distilled into a set of stylized facts. These have, in turn, been used to inform, revise and extend the developmental model of urban gay villages and districts in England set out in Collins . In so doing, the model moves further into line with recent work suggesting that metronormative analyses of gay spaces should give way to a greater analytical focus on the challenges, lives and experiences of lesbian and gay households in suburban areas and small/medium-sized town settings. The changes identified in this paper also suggest that the spatial distribution of the homosexual population, and supporting industries, is likely to display further fluidity in the future. Notes to Table 1: 1. There was a change in regional boundaries in the BSA in 2006. However, the regional definitions should largely be consistent over time. 2. The percentage of respondents in each category and mean ratings are weighted. 3. Respondents who did not provide an answer or did not know have been removed from the mean ratings. 4. The number of observations is based on all responses to the questions and is unweighted. Notes to Table 2: 1. The type of area where the respondent resides started to be asked in the BSA survey in 1995. 2. The percentage of respondents in each category and mean ratings are weighted. 3. Respondents who did not provide an answer or did not know have been removed from the mean ratings. 4. The number of observations is based on all responses to the questions and is unweighted. Notes to Table 3: 1. The percentage of respondents in each category is unweighted. 2. Mean attitudes to the homosexual relations question are weighted.
Respondents who did not provide an answer or did not know have been removed from the mean ratings.
3. The comparatively high percentage of Christians in the Big City area of London in 1995-2000 is the result of a relatively large number of respondents identifying themselves as Christians in 1999 and 2000.
i. There has been a change in the regional variable used in the BSA from 2006; see notes to Table 1. Responses to the homosexual relations question have been grouped into periods because of the relatively small number of responses in some regions in any one year. The grouping of years into periods is not very sensitive to different options, given the fairly constant change in attitudes towards homosexual relations.
ii. The growth in the percentage of respondents stating that they had non-Christian religious beliefs was highest in both London areas, and the mean attitudes of this group towards homosexual relations were fairly constant over the three periods, whereas they fell quite noticeably for Christians and non-religious people. These trends have been counter-balanced by London's continued ability to attract university graduates, although the share of graduates has risen in all of the areas and the tendency towards more liberal attitudes to homosexual relations has been more marked for non-graduates, narrowing the mean difference over the three periods from 1.1 to 0.6.
iii. It was not possible to add information for March 2010 since the listings of venues in Gay Times went online sometime before 2010. A list of current venues appears on the Gay Times website, and from viewing these details at the time of writing, it would appear that there has been a decline in the number of listed venues in many towns and cities, but that the decline in London may have been arrested to some extent, although direct comparisons are difficult.
The development of urban gay villages in England has previously been explored via the conceptual toolkit of the New Economic Geography. While arguably retaining explanatory legitimacy in historical perspective, looking forward, its validity is contended to be terminally undermined by changes in broader macro-social trends. The intention of this work is to address a relative lack of attention devoted to broader macro-level processes contributing to the decline or significant re-configuration of urban gay areas. A revised developmental model is presented and considered as part of a transition stage towards a post-gay era.
Increasing the length of working life is a major policy goal in many countries. In the United States, the Social Security retirement age has been increased from age 65 to age 66 for individuals born from 1943 to 1954, and it will increase further for the cohorts born in 1955 and in later years. Yet how the length of working life has developed in general, and at older ages in particular, is not currently known. Specific transitions during working life have been studied extensively, such as the transition from retirement to work. In contrast, the question of how much accumulated time is spent in work at older ages has received less attention, despite its importance for policy-makers. Like life expectancy, the duration of working life can be studied from a cohort perspective or from a period perspective. In the cohort perspective, the average length of the working trajectories of individuals born during a given time is considered; for example, the duration of the working lives of individuals born in 1940. In the period perspective, the conditions of a single year are assumed to prevail during the lifetime of a synthetic cohort, resulting in artificial working trajectories. While the period perspective is a useful summary measure for the conditions of a single year, it does not reflect the experiences of any real cohort. For the United States, studies using the cohort perspective have been rare. The exceptions are studies conducted by Hayward and Grady and by Hayward, Friedman, and Chen based on data from the National Longitudinal Survey of Older Men. These authors studied males aged 45 and older from the 1907-1921 cohorts, covering the years 1966 to 1983. Since that period, the likelihood of working at older ages has changed considerably. How the length of working life has developed for more recent cohorts is not known, though.
Moreover, no results are available for females or for the foreign-born population, whose working trajectories differ from those of native-born males. In this paper, we use longitudinal U.S. administrative data from the Continuous Working History Sample (CWHS) to study, for the first time, the length of working life at age 50 at the population level from a cohort perspective. The CWHS is a 1% sample of Social Security numbers and the associated earnings trajectories. Using information on nearly 1.7 million individuals, we assess the employment trajectories at ages 50+ of the cohorts born from 1920 to 1965, and present the results by gender and nativity. We focus on two main measures of working trajectories: cohort age profiles of employment; and cohort working life expectancy (WLE), defined as the expected total lifetime spent in employment from age 50 to age 74. We focus on ages 50+, as employment at these ages has changed considerably over time. Trends in employment at older ages are driven by many factors, including social policy reforms and increases in the retirement age; changing dynamics of exit from and entry into the labor force through, for example, phased retirement or unretirement; and economic conditions. Calculating cohort WLE at age 50 allows us to summarize the net effect of all these factors on working trajectories. Comparisons of cohort WLE across groups show to what extent differences in the factors driving working trajectories accumulate or cancel each other out. This paper contributes to the literature on aging and employment in several ways. We provide the first comprehensive study of U.S. cohort data for birth cohorts from 1920 onward using a large, high-quality sample of Social Security Administration data that has so far been largely untapped for aging research. Moreover, to the best of our knowledge, we are the first to report extrapolated working trajectories for the United States, which offer a glimpse into the future.
Finally, we provide results for the foreign-born population, an understudied but growing part of the U.S. population. --- Background --- Working at Older Ages in the United States The labor force participation of older individuals since World War II can be broken down into two phases. In the first phase, labor force participation rates at older ages decreased for males, and were flat for females. In the second phase, from the 1990s onward, participation rates increased for both older males and older females. According to the Bureau of Labor Statistics, 46.2% of males aged 55 and older were part of the labor force in 2016, up from 38.3% in 1996. For females aged 55 and older, the corresponding figures were 34.7% and 23.9%. Several explanations have been offered for these trends, including changes in norms and preferences among cohorts; improvements in health, longevity, and educational attainment; and social policy reforms. These reforms include major legislation in 1983 that increased the full retirement age from 65 to 66 and from 66 to 67, and measures that reduced benefits for early retirement. In addition, lawmakers removed the earnings test beyond the retirement age. Moreover, enhanced benefits for those who delay claiming their benefits beyond the full retirement age have been introduced. Nevertheless, Social Security reforms and changes in the composition of the population explain only a modest share of the changes in labor force participation rates at older ages. Economic conditions might also play a major role in shaping the working trajectories of older adults. While there have been several recessions since the 1980s, the effects of these downturns on the labor market and on older adults were mostly moderate or short-lived. However, the effects of the 2007-2009 recession might have been more severe.
Farber found that unemployment increased sharply, and Coile and Levine reported that unemployed workers had a higher probability of retiring than employed workers during this recession. Men were more affected than women by the crisis. Whether the recession has had a significant impact on the duration of working life among the affected cohorts is unclear. Research has shown that working after reaching the full retirement age has become more common in recent years, especially among men. This observation is in line with the consistent finding that for many older people in the United States, the transition from work to retirement is not a one-time, permanent transition, but is instead made up of more complex sequences of part-time work or bridge jobs, or returning to the labor market after retiring. Retirement patterns differ by gender, ethnicity, education, and other variables, with white, highly educated males most closely following the conventional pattern from full-time work to retirement. An increasing share of the older population is foreign-born, and the proportion of immigrants in the workforce aged 55+ is rising. In 2017, this proportion was 15%, according to the Bureau of Labor Statistics. The working trajectories of foreign-born males differ from those of native-born males in several ways. First, the employment rates of native-born males tend to decline more sharply with age. Second, there is a cross-over age, before which the native-born have a higher likelihood of being in employment, and after which the foreign-born are more likely to be working. This pattern is partly attributable to the restrictions on eligibility for Social Security benefits faced by immigrants. Other potential reasons why immigrants may retire later than their native-born counterparts are that they tend to have lower incomes, and that they are often in better health.
--- Working Life Expectancy The length of working life has received far less attention in the literature than general and age-specific trends in labor force participation and employment. The small number of existing studies on the duration of working life used WLE as their main measure. In this paper, WLE is defined as the average lifetime spent in employment from age 50 to age 74, but in the literature, other age ranges are also used. WLE has mostly been analyzed from the period perspective; that is, using artificial cohorts. Studies that have examined WLE from the cohort perspective are rare. The data demands for cohort studies are higher than those for period studies, as observing the working trajectories of a single cohort requires data covering many years. Moreover, as many cohorts have not yet reached the end of their working lives, their WLE is incomplete. The few existing cohort studies on WLE include Leinonen and colleagues for Finland, Denton, Feaver, and Spencer using Canadian data, and Liefbroer and Henkens for the Netherlands. Leinonen and colleagues and Denton and colleagues also compared results from the period perspective with results from the cohort perspective, and found that they differed. It is unclear whether inequalities in WLE found using the period perspective are similar to those found using the cohort perspective. The only studies for the United States that adopted a cohort perspective were conducted by Hayward and Grady and Hayward and colleagues using data from the National Longitudinal Survey of Older Men. Hayward and Grady studied men of the cohorts born from 1911 to 1921, and found that their WLE at age 55 was around 8.7 years. Hayward and colleagues reported, among other things, the partial WLE from age 55 to age 75. According to their estimates, WLE at these ages was 8.0 years for white men and 6.1 years for black men. The gap in WLE between whites and blacks can be explained in part by differences in health and mortality.
In light of these findings on labor market trends and WLE, we can expect to find that the length of working life has been increasing among the more recent cohorts, and especially among females. This implies that the gap between males and females has likely narrowed. Moreover, we can expect to find that the recent reforms and the 2007/2008 recession have had clear effects on cohort WLE. It is, for example, likely that the recession slowed the increasing trend in cohort WLE. In addition, we can expect to observe that the increases in WLE are at least partly attributable to increasing employment at older ages. Making predictions for the foreign-born population is more difficult. However, given the differences in the age patterns of employment between the foreign-born and native-born populations, we can expect to find that a larger part of the WLE of the foreign-born is at older ages, at least for males. --- Data and Methods --- Study Population and Measurement Our longitudinal data comes from the Continuous Working History Sample (CWHS). The level of observation is the Social Security number (SSN). For each SSN, we have information on the earnings trajectory and any old-age pension and disability pension benefits that were received. The data comes from a 1% sample of all SSNs that covers the years 1970 to 2015 for the cohorts born between 1920 and 1965. The data includes information on individuals with SSNs from all U.S. states, as well as from Puerto Rico and other U.S. territories. For each SSN holder, we have information on their gender, birth year, and year of death, if applicable; and on whether the person was born in the United States or in another country. We calculate the age as the year of observation minus the birth cohort. This means that the age variable is defined as the age reached during a given year. Throughout our analysis, we will assume that each SSN relates to a distinct individual.
This is likely to be the case for most SSNs, but not all, as in rare instances one individual has several SSNs. For some of the individuals with several numbers, the CWHS includes an indicator that reports this information. Since in most such cases the individual's earnings are associated with only one of his/her SSNs, while the other SSNs are associated with little or no earnings, we dropped all known multiple SSNs that are linked to little or no earnings. As the Social Security Administration issues multiple SSNs only in very specific circumstances, the number of these cases is small; for example, for the 1940 cohort, these individuals accounted for less than 0.5% of the sample. Thus, excluding these cases left the results virtually unchanged. Employment is captured through annual earnings. We define an individual as employed for a given year if their earnings are above the threshold for a "quarter of coverage" (QC) for that year. A "quarter of coverage" is used to determine whether an individual is insured under the Social Security program. The threshold that must be reached to earn one QC has changed over time. Before 1978, for instance, a wage of 50 dollars or more for one quarter of the year was sufficient to earn one QC. But since 1978, when the reporting of earnings changed, QCs have been awarded based on annual earnings. The wage required to earn one QC was 250 dollars in 1978 and was 1,220 dollars in 2015. Alternative thresholds are discussed in the Supplementary Materials. Before 1978, somewhat different rules for earning QCs applied to the self-employed than to workers in dependent employment. To ensure consistency in our main analysis, we adopted the QC rules, and thus applied different rules for self-employed and dependent earnings before 1978. Details and alternative analyses are presented in the Supplementary Materials. These analyses also include adjustments for the changes in the earnings coverage of the CWHS.
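The earnings-based employment indicator described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the function name and the two-entry threshold table are assumptions for this example, and a full implementation would use the complete SSA table of quarter-of-coverage amounts and the separate pre-1978 quarterly rules.

```python
# Illustrative sketch of the employment indicator: a person-year counts as
# "employed" if annual earnings reach the threshold for one quarter of
# coverage (QC). Only the two thresholds quoted in the text are included;
# this table and the function name are assumptions for illustration.

QC_THRESHOLD = {1978: 250, 2015: 1220}  # dollars of annual earnings per QC

def is_employed(year: int, annual_earnings: float) -> bool:
    """Return True if earnings reach one quarter of coverage for `year`.

    Assumes post-1978 annual-earnings rules; pre-1978 quarterly reporting
    (e.g., 50 dollars per quarter) would need separate handling.
    """
    threshold = QC_THRESHOLD.get(year)
    if threshold is None:
        raise ValueError(f"No QC threshold on record for {year}")
    return annual_earnings >= threshold

print(is_employed(2015, 1500))  # above the 2015 threshold -> True
print(is_employed(2015, 800))   # below the threshold -> False
```

In practice the threshold lookup would be populated for every year 1970-2015, and the same indicator would then feed the person-year counts described in the statistical methods.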
The results of these analyses are very similar to those presented here. --- Statistical Methods For each cohort, we calculate the average number of person-years spent in employment by age. The person-years spent in employment are calculated by assuming that individuals with earnings above the threshold of one QC spent one full year in employment. Individuals who had earnings above the threshold, but who also either received retirement benefits or disability benefits or died during that year, are assumed to have spent a half year in employment. All of the other individuals are counted as having zero years in employment. Given the total number of person-years spent in employment at each age, the average number is calculated by dividing it by the cohort size at age 50. WLE at age 50 is calculated as the sum of person-years spent in employment between ages 50 and 74, divided by the cohort size at age 50. We have chosen 74 as our upper limit because it is close to the limit used by Hayward and colleagues, and it enables us to calculate the cohort WLE for as many cohorts as possible. Our results will be rather close to the actual cohort WLE because the number of individuals over age 74 who are in employment is extremely small, and contributes little to the overall WLE. More formal descriptions of the calculations are given in the Supplementary Materials. To complete the WLE for the cohorts for whom the last observed age is below 74, we borrow information from the older cohorts, following Leinonen and colleagues. If the time spent in employment at age x is not observed for one cohort, we take this information from the youngest cohort for which it is available. This method shows how the length of working life would develop if the conditions of the last period observed stayed constant. For the cohort born in 1942, only employment at age 74 was borrowed. Thus, the resulting WLE is likely very close to reality.
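The person-year rules and the WLE calculation above can be illustrated with a short sketch. The record layout (tuples of age, employed, benefits, died) and function names are hypothetical simplifications introduced for this example, not the CWHS data structure.

```python
def person_years(employed: bool, received_benefits: bool, died: bool) -> float:
    """Person-years in employment for one individual-year, per the rules in
    the text: 1.0 if employed; 0.5 if employed but also receiving retirement
    or disability benefits, or dying, that year; 0.0 otherwise."""
    if not employed:
        return 0.0
    if received_benefits or died:
        return 0.5
    return 1.0

def wle_at_50(records, cohort_size_at_50):
    """Cohort WLE at age 50: total person-years in employment at ages 50-74,
    divided by the cohort size at age 50.

    `records` is a hypothetical list of (age, employed, received_benefits,
    died) tuples pooled over all cohort members; the layout is an assumption
    for illustration only.
    """
    total = sum(person_years(e, b, d)
                for age, e, b, d in records if 50 <= age <= 74)
    return total / cohort_size_at_50

# Toy cohort of two people: one works fully at ages 50 and 51; the other
# works at 50 but also draws benefits that year (counted as half a year).
records = [(50, True, False, False), (51, True, False, False),
           (50, True, True, False)]
print(wle_at_50(records, 2))  # (1 + 1 + 0.5) / 2 = 1.25
```

The same division by cohort size at age 50 (rather than by survivors at each age) is what makes this a cohort measure that already incorporates mortality and attrition.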
For the 1965 cohort, on the other hand, only employment at age 50 is observed, and the resulting WLE strongly depends on the assumptions of this extrapolation approach. The WLE of the cohorts born from 1943 to 1964 is in between these two extremes. In addition to the approach by Leinonen and colleagues, we implemented several alternative techniques to extrapolate employment at older ages. These led to qualitatively similar results. Working trajectories and WLE are adjusted to take into account the possibility of an inflated cohort size due to unobserved outmigration. Outmigration is not captured in the data. We can also assume that deaths that occurred abroad are not captured, especially for the foreign-born population. For instance, if an individual migrates to the United States, works there for a certain period of time, and then returns to their home country, neither the individual's departure from the United States nor their death may be recorded. Instead, the SSA record would simply show years with no contributions and no benefits received, potentially up to high ages. To deal with this challenge, we exploit the fact that outmigration leads to the appearance in the data of "immortal" individuals who never seem to die. These cases can be detected by comparing the CWHS data with life table data from the Human Mortality Database. Our comparison of these datasets showed that there are indeed too many surviving individuals in the data. We have removed these individuals from the sample. For details, see the Supplementary Materials. Alternative approaches, also discussed in the Supplementary Materials, led to similar findings. --- Results --- Sample Size In Figure 1, the sample sizes for all birth cohorts by gender and place of birth are shown. In total, our analysis covers 1,675,011 individuals: 686,212 native-born males and 685,409 native-born females; and 156,652 foreign-born males and 146,738 foreign-born females.
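The cohort-borrowing extrapolation described in the methods section (fill each unobserved age of a cohort's schedule with the value from the youngest cohort for which that age is observed) can be sketched as follows. The data structure, a nested dict keyed by cohort and age, and the function name are hypothetical simplifications for this example.

```python
def complete_schedule(observed, cohort, ages=range(50, 75)):
    """Complete `cohort`'s employment schedule over `ages` by borrowing each
    missing age from the youngest cohort that has it observed (a sketch of
    the approach attributed in the text to Leinonen and colleagues).

    `observed` maps cohort -> {age: average person-years in employment};
    this layout is an assumption for illustration, not the study data.
    """
    completed = {}
    for age in ages:
        if age in observed[cohort]:
            completed[age] = observed[cohort][age]
        else:
            donors = [c for c in observed if age in observed[c]]
            if not donors:
                raise ValueError(f"Age {age} unobserved in all cohorts")
            # youngest donor = largest birth year among cohorts observed at this age
            completed[age] = observed[max(donors)][age]
    return completed

# Toy data: the 1940 cohort is fully observed (ages 50-74), the 1942 cohort
# only up to age 72, so ages 73-74 are borrowed from 1940.
observed = {1940: {a: 0.5 for a in range(50, 75)},
            1942: {a: 0.7 for a in range(50, 73)}}
completed = complete_schedule(observed, 1942)
print(round(sum(completed.values()), 2))  # extrapolated total person-years: 17.1
```

Summing the completed schedule and dividing by the cohort size at age 50 would then yield the extrapolated cohort WLE; the approach implicitly assumes the conditions of the last observed period stay constant, as the text notes.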
Overall, 18% of the individuals in our sample are foreign-born. The sample sizes vary by cohort. For instance, for native-born men, the smallest sample size is 10,253 for the 1933 cohort, and the largest sample size is 20,783 for the 1960 cohort. The sample sizes are smaller for foreign-born individuals, with the smallest sample being 1,459 foreign-born females of the 1920 cohort. The sample sizes increase by cohort, and reach more than 5,000 for the foreign-born cohorts born in later years. --- Working Trajectories by Cohort and Age Figures 2 and 3 show results on employment trajectories by cohort and age. Specifically, Figure 2 shows age trajectories of average person-years spent in employment by gender and nativity. Each line represents selected cohorts from 1920 to 1965. The oldest cohort is shown in a light color, and the youngest cohort is shown in a dark shade. Looking at the figures, we can see that males generally had higher employment rates than females, and that native-born individuals generally spent more person-years in employment than the foreign-born. Employment varied strongly with age. It is also clear that employment after the retirement age (age 65 or 66 for the cohorts for whom employment around the retirement age is observed) increased for all groups, but was low in absolute terms and compared to employment levels at younger ages. Employment was very low after age 70. For older native-born cohorts, employment during their 50s shows a discontinuity; for age-specific employment, this discontinuity is observed in the 1970s, with a jump occurring in 1978. This is due to the change in the rules for earning quarters of coverage, as described in the data section. The effect is less clear for the foreign-born. Results adjusted for this effect are discussed in the Supplementary Materials.
For foreign-born individuals and native-born females, employment was higher overall in the younger cohorts, especially between age 50 and age 60. For females, increases in employment at these younger ages had stalled in recent years; while for the foreign-born, increases can be seen up to the last year observed. For native-born males, by contrast, the cohort profiles seem to have tilted to the right over time; that is, the older cohorts had higher levels of employment than the younger cohorts between ages 50 and 55, while this pattern reversed starting at age 62 or 63. This means that the older cohorts worked more at younger ages, while the younger cohorts caught up at older ages. These cohort profiles and age-specific employment levels clearly show the effects of the nominal retirement age. For instance, for the cohort born in 1920, we see two breaks in the schedule: from age 61 to 62, and from age 64 to 65. Between these breaks in the schedule, there was a rapid decline in employment. From age 65 onward, employment was low, and declined slowly but steadily. For the younger cohorts, we also see a break between ages 61 and 62, but the decline that followed was becoming less steep. Moreover, the break between ages 64 and 65 was shifting to age 66. --- Working Life Expectancy at Age 50 The results for WLE at age 50 by cohort are shown in Figure 4 and Table 1. Table 1 shows WLE and confidence intervals for the total population by gender for selected cohorts, and by gender and nativity. The results for the total population closely follow those for the native-born, as the native-born make up at least 75% of the sample for all cohorts. Figure 4 displays the results for all cohorts. The confidence intervals are not shown, as they are rather close to the point estimates, which is not surprising given the large sample size. The solid line in Figure 4 shows WLE at age 50 for the completely observed cohorts.
The dashed line shows results that are based on the extrapolation approach described in the methods section. For the native-born male cohort of 1920, we find a WLE of 9.8 years. Up to the 1941 cohort, WLE at age 50 increased by 1.5 years, to reach a total of 11.3 years. The forecasted results are only slightly above or below this number. They show some ups and downs, which reflect the differences in the partially observed working trajectories. The highest predicted WLE for native-born males is for the 1947 cohort, with a value of 11.6 years. For the cohorts born later than 1947, WLE declines. For the 1965 cohort, WLE is 11.1 years, which is roughly the same level as the WLE for the 1933 cohort. The WLE at age 50 of native-born females increased considerably, and is predicted to level off after a small additional increase, as among recent cohorts, employment has stagnated at ages 60 and under. Specifically, the native-born females of the 1920 cohort had a WLE of 6.5 years, or 3.3 years lower than the WLE of their male counterparts. For the native-born females of the 1941 cohort, the WLE was 9.7 years. Thus, these women were catching up to the men, with the gap narrowing to 1.6 years. The WLE of native-born females is predicted to further increase to 10.7 years for the 1951 cohort, and then to slowly decline to 10.4 years for the 1965 cohort. For all of the cohorts, foreign-born males and females had considerably lower WLE at age 50 than native-born males and females. Among the cohorts born from 1920 to 1941, WLE changed little for foreign-born males, and increased to more than 5.5 years only for the foreign-born females of the 1938-1941 cohorts. This finding implies that the gap between the foreign-born and the native-born individuals increased slightly for males, and increased substantially for females. Specifically, the foreign-born females of the 1920 cohort had a WLE of 5.1 years, which was 1.4 years lower than the WLE of the native-born females.
For the 1941 cohort, the WLE was 5.9 years for the foreign-born females and 9.7 years for the native-born females, which translates to a gap of 3.8 years between the native-born and the foreign-born. The WLE of foreign-born males and females is predicted to increase. For instance, for the 1965 cohort, the WLE is predicted to amount to 9.6 years for foreign-born males and 8.7 years for foreign-born females, compared with 11.1 years and 10.4 years for their native-born counterparts, respectively. While the gap between native-born and foreign-born individuals appears to be narrowing, it is unlikely to close. --- Discussion --- Main Findings and Insights Based on a large sample of administrative data, we present two major sets of substantive findings. The first set of results relates to working trajectories by age, and the second set is about the length of working life. Studying working trajectories and comparing older and more recent cohorts, we see that for native-born men, working trajectories have shifted toward employment at older ages, with employment at older ages increasing and employment at the younger part of our age range decreasing; that for native-born women, employment has increased at all ages; and that for foreign-born males and females, employment has increased overall. With respect to the length of working life, we observe that working life expectancy at age 50 has been increasing among the native-born, but might have reached a peak, and could level off in the future; while for the foreign-born, the duration of working life has remained mostly stable at a comparatively low level, but might increase if recent employment conditions prevail throughout the life course. Two key insights emerge from our findings. First, differentials in age-specific employment by gender and nativity accumulate over the life course and lead to substantial differentials in the length of working life.
Second, recent increases in employment at older ages do not necessarily translate to increasing WLE. Overall, our findings show that the future development of the length of working life should be a concern for policy-makers. --- Findings on Working Trajectories The decrease in employment we found for native-born males below age 55 is in line with trends in labor force participation for that age group, and is attributable to the effects of the 2007/2008 economic crisis. The increase in employment at ages 55+ might partly reflect changes in the retirement age, as the retirement age has increased to 66 for the cohorts born from 1943 to 1954. For native-born females, we found a constant increase in employment over the whole age range, in line with the general trend of increasing female labor force participation. Unlike for their male counterparts, there has been no decline in employment at ages below 55 for native-born females, possibly because women were less affected by the 2007/2008 recession. One potential explanation for the shifting age patterns in employment of the foreign-born is that the composition of this population has changed with respect to the country of origin. Whereas in 1960, the majority of the immigrant population in the United States were of European origin, by 2010, the largest immigrant group came from Latin America. Foreign-born Hispanics, who make up the largest share of immigrants from Latin America, have lower levels of educational attainment and English proficiency than their native-born counterparts, which might hinder their labor market outcomes. A finding that is consistent across groups, irrespective of gender and nativity, is that employment at ages older than the full retirement age has increased, but has remained low in absolute terms. After age 67, employment quickly drops to low levels, and is negligible after age 75.
For instance, for the native-born men in our study, ages 65+ contributed 9% of WLE for the 1920 cohort, and 12% for the 1941 cohort, consistent with earlier findings on postretirement employment. While postretirement employment is not uncommon, it is often part-time or for a limited period only. As expected, we found that the contribution of ages 65+ to WLE was larger for the foreign-born than for the native-born: for foreign-born males, employment beyond the retirement age contributed between 13% and 18% of WLE. --- Findings on the Length of Working Life For the cohorts born from 1920 to 1941, for whom we fully observed WLE up to age 74, we found that WLE had changed very little, except among native-born females. Given the age-specific trajectories, these trends in WLE are not surprising, as age-specific employment increased considerably for native-born females. Additional analyses not reported here show that the effects of changes in mortality on these trends are rather small. Compared to other countries, the United States has a high WLE at age 50. For instance, Leinonen and colleagues reported that the WLE at age 50 of the 1938 cohort in Finland was 7.0 years for males and 7.3 years for females. The corresponding values for native-born U.S. males and females born in 1938 were 10.6 years and 8.7 years. Our forecasts of WLE for the cohorts born in 1942 and later indicate that for the native-born, a peak might have been reached or will be reached soon, despite policy efforts to increase the length of working life, and despite increasing employment rates at older ages. For native-born men, this finding is mainly driven by the decrease in employment below age 55 discussed above. This suggests that a future increase in WLE, or even constant WLE levels, should not be taken for granted. For the foreign-born, on the other hand, the outlook is more optimistic, as the gap in WLE between this population and the native-born population might narrow.
It is, however, unlikely to close. --- Methodological Considerations The finding that WLE has been increasing or has remained constant for completely observed cohorts is in contrast to recent findings based on the period perspective, which showed no increase in WLE at age 50 and strong year-to-year fluctuations. This is not surprising, as previous studies have shown that results from the cohort perspective differ from results from the period perspective, and we found similar discrepancies between the cohort and the period perspective in some of our additional extrapolation scenarios described in the Supplementary Materials. While results from the period perspective might be more timely, they tend to exaggerate the conditions that prevail over a period of one year or a few years. The findings for completely observed cohorts presented here describe patterns that people have actually experienced. The extrapolation approach we use to complete working trajectories leads to rather robust findings. Specifically, for the cohorts born later than 1941, our estimates of WLE are partly based on the extrapolation approach of Leinonen and colleagues, and the underlying assumption that incomplete working trajectories can be forecast by borrowing from the experiences of older cohorts. To assess the sensitivity of our results, we applied several alternative extrapolation methods. For instance, we forecasted WLE from the period perspective based on recent changes in employment. The alternative procedures all led to findings very similar to those reported in the results section, and to similar conclusions. Details are presented in the Supplementary Materials. When interpreting our results, it is important to keep in mind that they are based on trajectories associated with Social Security numbers (SSNs). This creates three potential challenges. First, individuals who never applied for an SSN are not included in our data.
Second, some individuals might have applied for and been issued several SSNs, across which the working trajectories of these individuals could be split. Third, outmigration is not captured in our data, which might be especially problematic when analyzing the foreign-born population. The first point, that individuals who never applied for an SSN are not included in the data, is likely to be a negligible issue. Since only a small minority of the population does not apply for an SSN, their inclusion in the data would likely change the results only a little. Regarding the second point, individuals with multiple SSNs are flagged in the data, and we adjust our analysis accordingly. Third, outmigration is not captured. While living outside of the country, individuals might not have earnings in the United States, but could have earnings abroad. Since these earnings do not appear in the data, it may appear as if these individuals were not employed, when in fact they were. Dudel, López Gómez, Benavides, and Myrskylä found that the effect of a similar issue in Spanish social security data was modest. To deal with this potential issue, we used data from the Human Mortality Database and removed excess survivors from the CWHS data. We also conducted several alternative adjustments and robustness checks. These analyses led to findings that are qualitatively similar to our main results, and to similar conclusions. Thus, it appears that our results are rather robust, especially with respect to differences between the native-born and the foreign-born, and with respect to trends over time. These analyses are also discussed in the Supplementary Materials. --- Supplementary Material Supplementary data are available at The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences online. --- Conflict of Interest None reported.
Little is known about the length of working life, even though it is a key indicator for policy-makers. In this paper, we study how the length of working life at age 50 has developed in the United States from a cohort perspective. Methods: We use a large longitudinal sample of U.S. Social Security register data that covers close to 1.7 million individuals of the cohorts born from 1920 to 1965. For all of these cohorts, we study the employment trajectories and working life expectancy (WLE) at age 50 by gender and nativity (native-born/foreign-born). For the cohorts with employment trajectories that are only incompletely observed, we borrow information from older cohorts to predict their WLE. Results: The length of working life has been increasing for native-born males and females, and the younger cohorts worked longer than the older cohorts. However, WLE might soon peak, and then stall. The gap in WLE between the native-born and the foreign-born has increased over time, although the latter group might be able to catch up in the coming years. Discussion: Our findings show that studying employment from a cohort perspective reveals crucial information about patterns of working life. The future development of the length of working life should be a major concern for policy-makers.
Introduction The proportion of elderly in the population is rising globally, with 22% expected to be in the ≥60 years age group by 2050 [1]. Although the elderly are conventionally defined as people aged 65 years and above [2], in many low- and middle-income countries (LMICs), including Sri Lanka, people aged 60 years and over are considered to belong to the elderly age group [3] due to differences in pre-defined retirement age, life expectancy, lifestyle, and overall health status. In Sri Lanka, between 2001 and 2019, the proportion of citizens 60 years of age and above rose from 9.9% to 12.3% [4]. By the year 2041, it is estimated that over a quarter of the country's population will be above 60 years [5]. With aging comes a multitude of physical and physiological changes, which not only bring new ailments but also aggravate existing medical and psychological problems [6]. A review of the literature revealed that depression and cognitive impairment are among the commonest psychological problems faced by the elderly [1] and that functional decline and disability increase not only with advancing age but also with the presence of co-morbid medical illnesses [7]. In addition, various social factors shape the unique challenges the elderly encounter, including living conditions, loneliness, financial difficulties, their perceived status in the community, lack of psychosocial support, and limited availability of specialized services. Together, these challenges can be detrimental to elders' health and quality of life [8], the effects of which could be augmented in the presence of a multiplicity of health problems [9]. Little is known about the mental well-being of the elderly in northern Sri Lanka. A multi-center study conducted in the Northern Province [10] found the prevalence of depression among elders attending primary healthcare centers to be 11.6%.
A more recent study in the north [11] among users of primary healthcare centers yielded a much higher prevalence of depression of 41.6% and found old age to be a significant contributing factor. We identified only one published study [12] on cognitive impairment, in a clinical setting, among inward patients in the geriatric age group at a tertiary care center, where the prevalence was found to be 67.4%. Despite a community-based study conducted in southern Sri Lanka [13] over two decades ago yielding a prevalence of disability of 20% among the elderly, and wide recognition that medical illness brings about a decline in functional ability [14], recent studies on disability among the elderly with medical illness are lacking. Elders in northern Sri Lanka have been through three decades of armed conflict. Home to a predominantly Tamil-speaking population, the region has undergone a noticeable demographic and social transformation during the post-war years due to internal and external displacement, migration, disappearances, and death [15]. The civil war has left many socially and economically unstable due to the loss of property and livelihoods, changed family and community dynamics [15], and eroded community support systems [16]. Migration and death of younger cohorts have resulted in a scarcity of formal and informal caregivers [17]. This protracted conflict has left a great deal of trauma as its legacy, with multiple studies throwing light on its adverse mental health implications [18,19]. In Jaffna, the most populated district in northern Sri Lanka, elders comprise about 15% of the local population [20]. A recent survey found that 55% of the elderly in Jaffna have a monthly income below the national poverty line [17]. At present, a substantial proportion of elders live alone, with or without their elderly spouses, in a delicate equilibrium. 
This equilibrium is constantly under threat, with various factors, including financial constraints, lack of mutual caregiving, and loss of loved ones, converging on their physical, mental, and social well-being. In addition, the social isolation and sedentary lifestyle of the elderly in this region may also contribute to a decline in cognitive status [21]. Amid an epidemiological transition, the elderly population in Jaffna is increasingly affected by non-communicable diseases (NCDs). Despite an abundance of literature suggesting that elders with medical conditions are more vulnerable to depression, disability, and cognitive impairment [9,14], potential impacts of medical illnesses on the elderly in the north and elsewhere tend to go unnoticed, even as physical aspects of NCDs are identified and treated. Studies among elders with multiple morbidities in community and clinical settings in other countries reveal that depressive symptoms are frequently missed and untreated [22]. Without a well-established system of family practice [23], elders in Sri Lanka mostly seek medical care in government hospitals, where free health services are available, with a smaller fraction approaching the private sector for healthcare. Given the high levels of poverty in the north, most elders have access to care for medical illnesses only at government hospitals. With a bed strength of 1,310, Teaching Hospital Jaffna (THJ) is the largest and only tertiary health care facility with multiple specialties in the Northern Province, catering to many elders with medical problems in Jaffna and other northern districts. The prevalence of depression, disability, and cognitive impairment among elders with medical illnesses in Jaffna is not known. Addressing this gap, this study aims to describe the prevalence and correlates of depression, disability, and cognitive impairment among elderly patients attending the medical clinics at THJ.
--- Materials And Methods This institution-based descriptive cross-sectional study was conducted in the medical follow-up clinics at THJ, Sri Lanka, among patients of all genders, aged 60 and above, attending clinics either for the first time or for a follow-up visit. The exclusion criteria included severe communication problems as a result of conditions such as hearing or speech disability, acute or degenerative neurological diseases, active hallucinations, and formal thought disorders. Data were collected from all patients who met the study criteria during a four-month period in the midst of the COVID-19 pandemic. A total of 166 patients were approached, of whom 122 responded. An interviewer-administered questionnaire designed by the research team was used to collect basic details regarding the participants, including age, gender, religion, ethnicity, educational level, marital status, living arrangement, distance traveled to come to the clinic, illnesses for which they were under follow-up, and whether they had sought professional help for depression, cognitive impairment, or any functional disabilities. The questionnaire also contained the 15-item Geriatric Depression Scale [24], 12-item World Health Organization Disability Assessment Schedule 2.0 [25], and the Montreal Cognitive Assessment [26]. The 15-item GDS is a tool that has been tested and extensively used among the elderly and is easy to administer to those with impaired cognition. The 12-item WHO DAS 2.0 [25] is internationally validated and has been used widely. The Montreal Cognitive Assessment possesses high sensitivity and specificity for detecting mild cognitive impairment and has been translated into Tamil, the local language, and validated. The GDS and WHO DAS 2.0 were translated from English to Tamil, and judgmental validation was done by an experienced psychiatrist. Cut-offs to categorize depression, disability, and cognitive impairment were drawn from the literature. 
The scores of the Geriatric Depression Scale were categorized as normal, mild depression, moderate depression, and severe depression [24]. In the WHO DAS 2.0 scale, each of the 12 items was scored from 0 to 4, where 0, 1, 2, 3, and 4 represented no, mild, moderate, severe, and extreme/complete difficulty, respectively, in the relevant activity; overall scores were categorized as no disability, mild disability, moderate disability, and severe disability [25]. Scores of the MoCA scale were categorized as normal and abnormal [27]. Data were analyzed using Statistical Package for Social Sciences, version 25 (SPSS-v25; IBM, New York, United States). Prevalence and levels of depression, disability, and cognitive impairment were determined by the scores obtained from the scales, based on the cut-offs. Mean differences in depression, disability, and cognitive impairment scores by sociodemographic factors were measured using the independent t-test and ANOVA, as appropriate. Correlations between depression, disability, and cognitive impairment were measured using Pearson's correlation. Ethics approval was obtained from the Ethics Review Committee of the Faculty of Medicine, University of Jaffna. --- Results Among the 166 elders approached, 122 responded, making the response rate 73.5%. The sociodemographic characteristics of the participants are tabulated in Table 1. Most of them were from Jaffna. The mean age of the participants was 68.3 years (SD=5.70), with a median of 68 years (IQR=63-72). Among the participants, 58 (47.5%) were males and 64 (52.5%) were females. The prevalence of depression, disability, and cognitive impairment in the sample is given in Table 2. The overall number of participants with depression was 54 (44.3%), while those with disability were 117 (95.9%) and those with cognitive impairment were 98 (80.3%). The coexistence of depression, disability, and cognitive impairment in the sample is shown in Figure 1, indicating that all participants had at least one of the three conditions.
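For illustration, the scale scoring described in the Methods can be sketched in code. The exact cut-off values the study drew from the literature are not preserved in this text, so the GDS-15 bands below are the commonly cited ones from the wider GDS literature and should be read as an assumption; the WHO DAS 2.0 helper implements only the simple item sum (12 items, each scored 0-4) stated above.

```python
# Illustrative scoring sketch. The GDS-15 cut-offs below are commonly cited
# bands from the GDS literature, assumed here because the study's exact
# cut-offs were not preserved in this text.

def gds_category(score):
    """Map a GDS-15 total (0-15) to a depression category."""
    if not 0 <= score <= 15:
        raise ValueError("GDS-15 totals range from 0 to 15")
    if score <= 4:
        return "normal"
    if score <= 8:
        return "mild depression"
    if score <= 11:
        return "moderate depression"
    return "severe depression"

def whodas_simple_sum(items):
    """Simple sum of the 12 WHO DAS 2.0 items, each scored 0-4 (total 0-48)."""
    if len(items) != 12 or any(not 0 <= i <= 4 for i in items):
        raise ValueError("expected 12 items, each scored 0-4")
    return sum(items)

print(gds_category(10))  # a moderate-range example score
print(whodas_simple_sum([2, 1, 0, 3, 2, 1, 0, 0, 4, 2, 1, 1]))
```

Categorizing the WHO DAS 2.0 total would follow the same pattern as `gds_category`, once the study's cut-offs are in hand.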
In the sample, 66 participants were followed up at the medical clinic for one illness, while 40 were followed up for two illnesses and 16 for three or more illnesses. The most common illnesses the participants had were diabetes mellitus, followed by hypertension, dyslipidemia, ischemic heart disease, and bronchial asthma. Only 12 (9.8%) participants had previously sought professional help for depression, disability, or cognitive impairment. An analysis of correlations among depression, disability, and cognitive impairment, as well as correlation between the number of entities and the number of medical illnesses the participants had, elicited interesting results. Depression and cognitive impairment and depression and disability were significantly correlated, but not cognitive impairment and disability. We also found a significant correlation between the number of medical illnesses for which the participant was followed up at the medical clinic and the number of assessed entities present. The distribution of the level of depression, disability, and cognitive impairment by sociodemographic factors is available in the Appendix, Tables 6-11. Compared to the proportion of old-elderly, the proportion of young-elderly with moderate-severe depression was greater. The prevalence of moderate-severe depression among females was almost double the prevalence among males. More than half of those living alone had moderate-severe depression. Over three-fifths of those living with their spouse had no depression. When considering the level of disability, more women than men had severe disability. Cognitive impairment was more prevalent among those with primary education or less compared with elders who had at least secondary education.
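A minimal sketch of the kind of pairwise Pearson correlation analysis reported above, computed from first principles on invented scores (not the study's data; the scale totals below are hypothetical):

```python
# Minimal sketch of a Pearson correlation between two scale totals,
# run on invented scores for illustration only (not the study's data).
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

gds = [2, 5, 9, 12, 4, 7]          # invented GDS-15 totals
whodas = [6, 14, 25, 33, 10, 20]   # invented WHO DAS 2.0 totals

print(round(pearson_r(gds, whodas), 3))
```

In practice this would be run pairwise across the three scales, with an accompanying significance test, as in the study's SPSS analysis.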
Inferential analysis showed statistically significant differences in mean GDS scores based on gender, marital status, and living arrangement. Over 80% of participants reported being satisfied with life, in good spirits, and happy most of the time during the past week. However, almost two-thirds responded that they prefer to stay at home, and 50.8% felt that they have more problems with memory than most. With respect to WHO DAS 2.0, while over 90% of participants reported not facing any difficulties with maintaining personal hygiene, a substantial proportion agreed to some level that they had difficulty in standing for long periods, walking 1 km or more, learning a new task, and taking care of household responsibilities. Three-quarters of the participants admitted to having been emotionally affected by their health problems. On analyzing the MoCA scores, we observed that the fewest participants had problems in the Naming and Orientation domains, with only two scoring less than the full score in Naming and 12 scoring less than the full score in Orientation. The domain affected most was the Delayed recall domain, in which only four participants scored the full score. In fact, one-third of the sample scored zero in this domain. Other domains in which participants did not perform well included the Abstraction and Visuospatial/Executive domains, in which 19.7% and 18.05%, respectively, scored zero. --- Discussion The study sample reflected the age distribution and ethnic makeup of the Jaffna district [20]. Other sociodemographic characteristics, in particular, the lower proportion of old-elderly in the sample and over three-quarters having at least secondary education and not living alone, are critical to note, as they may have impacted the study findings.
--- Prevalence of depression, disability, and cognitive impairment This study revealed that 44.3% of the study participants had depression, with 26.2% having mild depression, 12.3% having moderate depression, and 5.8% having severe depression. This is much higher than the 11.6% recorded among elders in a community-based study conducted in northern Sri Lanka [10], but on par with the 41.6% reported in a more recent study among adults seeking care at primary healthcare institutions in the Northern Province [11]. Studies from South Asia report similar prevalence rates of old-age depression [28,29], while other international studies report lower rates [9,30,31]. In the present study, the higher prevalence of depression may be explained by the fact that our sample, recruited from medical clinics, had multiple comorbid medical conditions [1], as compared with community-based studies [9,30]. Using WHO DAS 2.0, we found very high rates of disability among elders, among whom 57.3% had a severe disability, as compared with 20%, the rate of disability reported in an earlier study in Kandy [13]. WHO DAS 2.0 defines disability in terms of cognition, mobility, self-care, getting along, life activities, and participation [25]. The Kandy study measured disability in relation to impairment of activities of daily living (ADLs) and instrumental activities of daily living (IADLs). Several other community-based studies [7,32] also describe a lower prevalence of disability in the elderly. Many of these studies assessed ADLs and IADLs using different scales. However, a study conducted in India [33], which used the WHO DAS 2.0 36-item version, found the prevalence of disability to be similar to our study, with 92%-100% of women and 82%-99% of men having disability. When the MoCA scores were analyzed, we found that 80.3% of study participants had cognitive impairment.
This is higher than the prevalence rate of 67.4% in a prior study among hospitalized elderly patients in Jaffna [12] that used the Concise Cognitive test. It is also higher than the value obtained from a hospital-based study in Colombo [34] that used the Mini-Mental State Examination (MMSE). While differing scales may explain the lower prevalence identified in these two hospital-based studies, it is interesting to note that the value obtained in the present study was much higher than the prevalence of mild cognitive impairment in a community-based study in Jaffna [35] using MoCA. This strongly suggests that comorbid medical illnesses may negatively impact cognition in the elderly. Other studies which did not explore comorbidities report much lower rates of cognitive impairment [36][37][38]. Taken together, the prevalences of depression, disability, and cognitive impairment were all higher than the figures obtained from prior studies in the region and country. Indeed, we found that all participants had at least one of the three conditions assessed, depression, disability, and cognitive impairment, and less than 10% of the sample had sought professional help for them. While the panic created in the wake of the COVID-19 pandemic, along with the travel restrictions and resulting social isolation, could have had an impact on the mental well-being of the elderly [39], the question remains whether the pandemic alone could explain the high prevalence rates gleaned from this study or whether they are linked with specific medical or lifestyle-related factors among elders with chronic medical illnesses, a neglected area of research in Sri Lanka and other LMICs. --- Correlation between depression, disability, and cognitive impairment The results of this study indicate that functional disability and cognitive impairment are significantly correlated with depression.
Similar correlations were reported in a 2014 study in the Northern Province [10] that explored various parameters among adults visiting primary care facilities. A nationwide survey in South Korea [40] on risk factors for late-life depression showed that cognitive impairment, as revealed by low MMSE scores, was associated with a higher risk of depression. The Aging, Demographics, and Memory study [41] found that the prevalence of depression was high in those with cognitive impairment when compared to those with normal cognitive status. Similar associations have been reported in other studies around the world [36,42], suggesting that these conditions are very much interlinked and must be approached in tandem. --- Differences in sociodemographic factors In the present study, depression, disability, and cognitive impairment did not differ by age group. Several studies from around the world [38,43,44] suggest that cognitive impairment worsens with age. Although the mean MoCA score of the old-elderly was lower than that of the young-elderly, signaling a higher level of cognitive impairment in the older age group, this difference was not statistically significant. Further, despite studies showing that depression [30] and disability [7] increase with age, mean GDS and WHO DAS 2.0 scores were not statistically different in the young- and old-elderly. These results may be influenced by the makeup of the sample, which comprised very few old-elderly. Mean GDS score was higher among females than among males. This preponderance of depression among females has also been noted in studies carried out locally and internationally [30,45]. While female preponderance in the prevalence of cognitive impairment has been noted in local and international studies alike [12,44], the present study elicited a slightly lower mean MoCA score among females than among males, a difference that is not statistically significant.
Similarly, although numerous studies show that females are affected more by disability, this was not reflected in this study. These contradictions need further exploration. Higher levels of educational attainment and cognitive activity are shown to be protective against Alzheimer's disease, a form of dementia [46]. As mild cognitive impairment is known to carry a risk of progressing to dementia [46], factors that have a protective effect against dementia are also protective against cognitive impairment. This was reflected in our study, as those with at least secondary education were found to have a higher mean MoCA score, and thus less cognitive impairment, than those with primary education or less. Similar findings are reported in several other settings locally [35,47] and internationally [36,43]. In addition, this study reveals that those with primary education or less had significantly higher WHO DAS 2.0 scores than those with at least secondary education, a result corroborated by a study in a community setting [32]. These differences may be explained by the fact that education level is also a marker of socioeconomic status (SES); disability is known to be associated with SES in the elderly [48]. Although several studies [40,45] have elicited a significant association between educational level and depression, this was not reflected here. In this study, a significant difference in mean GDS scores was found according to the participants' marital status, with those in marital life having a lower GDS score than others. This has been corroborated by other studies [49,50] which reveal a significant association between widowhood and depression. Interestingly, this study also reveals that those in marital life had a significantly lower WHO DAS 2.0 score than others, a finding that is not corroborated in the literature. This might, in part, be due to the fact that most studies used a tool other than WHO DAS 2.0 to assess disability.
However, WHO DAS 2.0 measures disability not only in terms of mobility and activity level but also in terms of cognition and psychological aspects such as getting along and participation. As marriage is known to improve psychological well-being [51], that elders in married life had lower levels of disability based on WHO DAS 2.0 is not surprising. Our study did not reveal a statistically significant relationship between cognitive impairment and marital status. Apart from one's spouse, the degree of structural support or integration in a social network is said to have a direct positive effect on well-being [52]. This study reveals that those living alone had a significantly higher level of depression when compared to those living with their spouses and/or children, corroborated by a study conducted in South Korea [40]. Similar differences by living arrangement were not identified in relation to disability or cognitive impairment. --- Cultural dimensions Despite over 80% of the sample claiming to be happy and satisfied with their life in the GDS, over four in 10 participants were found to have depression, and around three-fourths of the sample admitted that they had been emotionally affected by their health in the WHO DAS 2.0. Studies show that alexithymia is highly prevalent in South Asian cultures, in which positive emotions are expressed readily, while negative emotions are suppressed [53], perhaps explaining these conflicting results. On the other hand, such contradictions may be explained by differing understandings and expectations of aging that may prevail in the community. With over four-fifths of study participants having cognitive impairment, and with almost all the participants having issues with delayed recall in MoCA, the study findings support the claim that South Asians tend to view memory loss as part of normal aging [54], which may lead to delayed help-seeking.
This may impede early intervention to halt the progress of dementia, ultimately constituting a burden for the patients, their caregivers, and even the healthcare system [54]. As with any study, this research comes with limitations. Due to the sudden escalation of COVID-19 spread and the associated health guidelines, movement restrictions, and changes in healthcare practices, the number of patients attending medical clinics fell during the data collection period, compromising the sample size. This study would have benefited from a community-based design, but this was not possible during the pandemic. --- Conclusions Depression, disability, and cognitive impairment are common among the elderly attending medical clinics in Jaffna, northern Sri Lanka. While these conditions often coexist, they are mostly untreated. The study highlights the need for guidelines and protocols to actively screen for these conditions at medical and, more importantly, primary care facilities, to initiate early intervention to improve quality of life. While research in this area is much needed, it should pave the way for a comprehensive policy giving due importance to mental health and disability among the elderly in Sri Lanka.
The rising proportion of the elderly is increasingly affected by non-communicable diseases. Despite an abundance of literature suggesting that elders with medical conditions are more vulnerable to depression, disability, and cognitive impairment, these tend to go unnoticed and unaddressed. This study describes the prevalence and correlates of depression, disability, and cognitive impairment among elders with medical illnesses attending follow-up clinics in a tertiary care hospital in northern Sri Lanka. Methods: This descriptive cross-sectional study was carried out among 122 elders (≥60 years) attending medical clinics at Teaching Hospital Jaffna. Depression, disability, and cognitive impairment were assessed by the 15-item Geriatric Depression Scale, 12-item World Health Organization Disability Assessment Schedule 2.0, and Montreal Cognitive Assessment, respectively. Student's t-test, ANOVA, and Pearson's correlation coefficient were used in analyzing data using Statistical Package for Social Sciences 25 (SPSS-v25) (IBM, New York, United States). Results: The mean age of the participants was 68.3 years (SD=5.70); 58 (47.5%) were males and 64 (52.5%) were females. Prevalence of depression was 44.3% (95% CI=35.5-53.1), while disability was 95.9% (95% CI=92.4-99.4) and cognitive impairment was 80.3% (95% CI=73.2-87.4). Depression was significantly associated with gender (p=0.013), marital status (p=0.019), and living arrangement (p<0.001). Cognitive impairment was significantly associated with education level (p=0.045), and disability was associated with education level (p=0.008) and marital status (p=0.027). Among the study participants, only 12 (9.8%) had previously sought professional help for depression, disability, or cognitive impairment. Conclusion: Depression, disability, and cognitive impairment are common among the elderly attending medical clinics in Teaching Hospital Jaffna, and are, in most cases, unaddressed.
Introduction Gender-based violence (GBV) is violence directed towards others on the basis of gender and is subject to various conceptual interpretations and contextual applications. 1 Gender-based violence is founded on gender inequality, with most victims being women. 2 It is a serious public health and human rights issue, and its manifestations are classified according to emotional, physical, social, psychological, sexual, economic and domestic forms. 3 It encompasses intimate partner violence (IPV), which is sexual or physical violence committed by a current or previous partner after the age of 15 years. 3 The World Health Organization states that males are more likely to perpetrate GBV while women and girls of all ages are victims. 3 Globally, by the end of 2021, IPV resulted in at least 137 femicides daily. 4 In 2019, at least '243 million women and girls aged 15-49 across the world' were victims of GBV and suffered from the mental, physical, spiritual, sexual and/or reproductive health aftermaths. 5 In the South African context, there was a total of 224 912 general crimes against children and women during the 2018 and 2019 periods alone. These statistics were the highest in the world, characterising South Africa as 'the rape capital of the world'. 6 In Limpopo province, studies by the Thohoyandou Victim Empowerment Programme have established that the Vhembe District Municipality (VDM) has the highest reported cases of domestic violence in Limpopo province. 7,8 Statistics on GBV in South Africa illustrate its high prevalence, with the general public calling incessantly for interventions to prevent and mitigate GBV. As such, this study addresses the problem of GBV and its effects in the VDM. Several factors have been linked to the perpetration of GBV.
In a study conducted in 12 African countries, the authors concluded that the absence of laws on GBV, alcohol consumption, male dominance, women's attitudes after the perpetration of GBV and their empowerment predict GBV. 9 In a systematic review, Van Daalen et al. concluded that GBV was related to food insecurity, economic hardship and disruption of infrastructure because of extreme weather conditions. 10 The review by Van Daalen et al. further observed that GBV is related to harmful cultural or traditional practices against women such as early marriages. 10 The authors note the cyclical effects of GBV by highlighting that GBV results in women transferring the violence and anger towards children. 10 Several studies explored the mental health issues and behavioural disturbances among victims of GBV. A survey with 273 respondents, conducted in Australia, concluded that GBV results in a complexity of mental health challenges that include social isolation, which worsens the effects of GBV, as victims are unable to seek help and reduce the occurrence of GBV. 11 In a narrative review of literature in the United States, GBV was also associated with increased childhood exposure to trauma. 12 In Africa, a survey with 209 women in Kenya concluded that GBV resulted in anxiety, depression and post-traumatic stress disorder in women and girls. 13 In another survey with 283 respondents conducted in Kenya, the authors found that GBV was associated with poor mental health, high-risk sexual behaviour and sexually transmitted infections. 14 In addition, GBV resulted in disordered alcohol usage among women. 14 A Nigerian study revealed that 31% of the participants agreed that women suffered GBV because they were viewed as 'inferior to males, incompetent and worthless'. 15 This Nigerian study further concluded that women were not allowed to associate with male relatives or male friends.
15 On the issue of social isolation, South African statistics reveal that most women did not report GBV, with only 40% reporting to law enforcement. 16 Also in South Africa, a longitudinal study with a sample size of 415 participants found that GBV results in depression and suicidal ideation. 17 These issues highlight the seriousness of GBV and should prompt society to respond to victims and increase awareness of the need to prevent GBV. Therefore, the purpose of the study is to explore the psychosocial effects of GBV among women in Vhembe district. --- Research methods and design This study used a qualitative approach to explore the psychosocial effects of GBV among marginalised women in Vhembe district, Limpopo province. --- Study design This study opted for a phenomenological research design to understand the phenomenon of GBV as constructed by the participants themselves in their own familiar ecological surroundings. 18 Phenomenological designs enable the search for and establishment of knowledge, truth or the reality of a phenomenon as socially constructed products of the affected individuals' experiences and perspectives concerning that phenomenon. 19 The study utilised an interpretivist paradigm, which acknowledges that culture and context differ among research participants, and seeks to interpret the subjective experiences of women experiencing GBV in Vhembe district. 18 --- Setting The researchers conveniently selected the N'wamatatani and Hlanganani rural informal settlements, located in the VDM, one of the five districts of Limpopo province, as the study setting. The VDM has a population of about 1.385 million residents. 20 The district is largely populated by the Venda, Tsonga, Bapedi and Afrikaners. 20 The dominant languages of Vhembe district are Tshivenda and Xitsonga, followed by Sepedi, Afrikaans and the minority languages of migrants from Zimbabwe and Mozambique.
20 --- Study population and sampling strategy The study population consisted of women victims of GBV in the study setting. Purposive sampling was conducted, whereby participants known to the Department of Social Development Area Social Worker, who worked with female GBV victims, were invited to participate. The inclusion criteria were: females aged 19-35 years who had directly experienced GBV; women experiencing GBV who were residents of the N'wamatatani and/or Hlanganani informal settlements in the VDM; and women who were willing to participate voluntarily and be audio-recorded during telephonic interviews. Women were included until data saturation, the point at which no additional new information could be solicited by interviewing further participants. --- Data collection The principal investigator conducted 15 semi-structured, in-depth interviews telephonically, each lasting 30 to 45 minutes. Open-ended questions were used during data collection, which enabled the researcher to ask probing questions and to elicit participants' spontaneous, unhindered responses on GBV. The interviews were conducted telephonically in adherence to the affiliated institutions' coronavirus disease 2019 (COVID-19) guidelines at the time of the study; the principal researcher nonetheless asked the participants questions and probed for clarity in cases of misunderstanding and for more insight. For all these telephonic interviews, the participants were requested to ensure that they were in a quiet place free of distractions. They were informed that arrangements had been made with the Area Social Worker in case they needed further interventions or assistance during the interviews because of questions that could disturb them emotionally and/or psychologically. All 15 participants were interviewed within a space of 5 days, between 15 March 2021 and 19 March 2021, with three participants interviewed each day.
The researcher sought clarity from the participants concerning responses that were not understood. The interviews were conducted in Xitsonga and English, as this allowed the participants to express themselves freely. --- Data analysis Thematic analysis was applied in this study, following the procedure proposed by Braun and Clarke in 2006. 19 All English interviews were transcribed, while the Xitsonga interviews were first translated and then transcribed into English. To ensure the rigour of the translated data, the researcher checked the correctness of the transcriptions by asking a participant who could speak both Xitsonga and English to verify them. The transcripts were examined for themes using Braun and Clarke's six steps for thematic analysis. 19 These steps included generating codes from the initial data collected, searching for themes in the transcribed data, reviewing the themes which had been found, and finally defining and naming the themes. 19 The themes generated from the analysis were corroborated by the participants to ensure trustworthiness. Following this corroboration, the themes were presented in a narrative format. The researcher ensured the study's trustworthiness by accurately recording all procedures taken during the study to enable auditing, and by verifying the results of the study with the study participants. 19 The researcher reflected on their role as a social worker who works with women victims of GBV and maintained objectivity as participants shared their experiences; an interview guide was used to ensure that the questions asked were related to the research, thus minimising the imposition of the researcher's beliefs. --- Ethical considerations Ethical clearance for this study was approved by the College of Human Sciences Research Ethics Review Committee at the University of South Africa.
Prior to the commencement of data collection, all the participants signed consent forms with the assistance of the Area Social Worker. Participants were notified of their right to participate or decline before any involvement in the study. The researcher further explained that their involvement would take the form of answering the researcher's questions orally. Prior to the commencement of the interviews, the researcher fully disclosed what the study entailed and the rights of the participants. --- Results --- Participants' biographical profiles Fifteen female participants formed part of the study, all of whom were black, with 14 South African nationals and 1 Mozambican expatriate. They were all aged from 19 to 35 years, Xitsonga speaking, and had experienced GBV in the form of IPV. Eight of the participants were married, five were single, one was divorced and one was widowed. Eight of the participants still lived with their partners, while seven no longer did. All participants resided in the informal settlements of Hlanganani and N'wamatatani in the VDM, where the study was undertaken. --- Key findings From the phenomenological analysis, four main themes emerged that described the psychosocial effects of GBV on women in the Vhembe district: 'worthlessness', 'social isolation', 'depression' and 'anger towards children'. The first theme was the experience of worthlessness associated with GBV, which was expressed by five participants. This response was elicited from the discussion on the topic of GBV and its emotional effects. The participants described their experiences of feeling worthless without their husbands.
From these shared experiences, participants highlighted that they felt worthless because a bride price had been paid for them or because they came from a poor background. Excerpts from participants C and I are as follows: 'First it was emotional and then it escalated to physical violence. I was told that I am nothing without him and there are a lot of things I cannot achieve without him because I am from a poor family. Today I know how to wear a night dress because of him and that I came with nothing to the marriage … I most of the time feel worthless as a woman.' 'I think it is the reason that I stayed in the relationship for too long to a point where my husband realized that I will not leave him. Again, I think he ended up viewing me as his property because he paid lobola or dowry for me … The violence I suffered killed my self-esteem because of being told that I am useless.' The second theme to emerge was social isolation as a consequence of GBV. This theme was elicited in response to the topic of family and friends being aware of the GBV. Participants A, G, K, L and M expressed that the social isolation was related to the insecurities of their partners, who suspected them of infidelity. Participant N similarly revealed that she feared her spouse, who was insecure and obsessive to the extent of following her to her place of work for the purpose of taking her home afterwards. The following excerpts from participants A and G illustrate the experiences of social isolation: The third theme to emerge was depression associated with GBV. This theme was also elicited when discussing the topic of the emotional consequences of GBV. The participants described how the effects of GBV became the source of their depression. Participant B reported becoming depressed after the GBV and after their partner took away their child.
Participant K, who was subjected to GBV, similarly observed that they had become depressed after finding out that their daughter had been sexually abused. The theme is evidenced by the following excerpts from participants: 'It destroyed me. I lost myself. There was a time where I felt that I was done with him. However, he spent more time with my family members drinking alcohol even after I left him. My family looked at me as the source of the problems we encountered and view him as an angel. I ended up being depressed after he took custody of my last-born baby. At work it affected me a lot because I would even see case dockets of women who were killed by their partners. I would cry before I go to work.' 'That abuse affected me so much because I always went back to work with bruises and pain but what shattered me the most was that after the death of my husband my daughter told me that he used to rape her in my absence. I don't know whether it was because she was his step-daughter or what. I was depressed for more than three years because of what he did given the fact that after his death I found out that I was HIV positive.' The fourth theme was that GBV is experienced through anger towards children, as evidenced by responses from participants A, D and E. The issue of anger and irritability towards children emerged when discussing whether participants had children and whether the GBV occurred in the presence of the children; if the response was yes, a follow-up question on how the GBV affected the children was asked. Participants described how the GBV resulted in them becoming angry with their children and easily shouting at them. Participants A and D elaborated that they would shout at the children for no apparent reason. The excerpts from their responses are as follows: --- Discussion Four themes were discussed, two of which formed part of the modus operandi of the abusive partner and two of which appeared to be effects of the violence.
One participant also expressed concern over anger and abusive behaviour evident in her child, behaviour that she thought was a consequence of witnessing violence against her. These findings are of particular interest as they present the effects of GBV in Vhembe district, a population underrepresented in the literature on GBV, and confirm findings from studies conducted in other parts of the world. The worthlessness described by the participants may have been part of the abuse, as the women were reminded by their abusers that they were nothing; in some instances, the worthlessness was a sequela of the GBV. These feelings of worthlessness are also supported by a review conducted in 12 African countries, which found that women's attitudes on GBV perpetuate GBV through feelings of worthlessness. 9 Such feelings of worthlessness associated with GBV could also be a symptom of post-traumatic stress disorder, as described in a narrative literature review in the United States. 15 The participants experienced social isolation that was imposed on them as part of the controlling or abusive behaviour of their partners, or as a symptom of depression, as they felt cut off from social contact. This social isolation is described by Fernández-Fillol et al., 21 who observed that women who experience GBV exhibit signs of post-traumatic stress disorder, which manifests through social isolation. These findings are confirmed by a study conducted in Australia, where the authors found that GBV results in social isolation. 11 Similar findings were observed in a study conducted in Nigeria, where GBV experiences included prohibition from socialising with family and friends, as women were viewed as worthless. 15 The authors of the Australian study further found that the social isolation made it difficult to provide help to GBV victims.
11 This implication follows from the different characterisations of GBV, such as financial, emotional, psychological and sexual, which may not be easily visible. 1 Furthermore, the social isolation and consequent silencing of GBV victims, evident in South Africa where only 40% of cases are reported, warrant further studies on interventions that empower women and children to speak up and access psychosocial assistance. 16 The participants shared their experiences of anger that they would direct towards their children. The issue of anger was also confirmed in a systematic mixed-methods review, which found that GBV has negative ramifications for children. 10 The anger towards children found in this study necessitates a holistic approach to the management of GBV in low- to middle-income countries, one which provides for the children of women who are victims of GBV to be cared for. The negative consequences of GBV affecting children could be a result of difficulty in parenting or poor emotion regulation, which could be asked about in the International Trauma Questionnaire. With regard to social isolation, the finding implies that service providers may experience difficulty in recognising and reaching women in need of healthcare or psychosocial support because of the GBV. --- Limitations Despite the research objective being achieved, the study was limited by its methodological approach. The researchers used a qualitative approach with a small sample size; as such, the results cannot be generalised to all victims of GBV. --- Conclusion The study sought to explore the psychosocial effects of GBV among women in the VDM in Limpopo province. The study recognises that GBV is a global psychosocial problem that infringes on the rights of women and relegates women to an inferior status in society. The determinants of GBV are varied and, as a consequence, the experiences of women vary across different cultures and communities.
The study concluded that in the VDM, women experienced the psychosocial effects of depression, anger towards children, social isolation and worthlessness because of GBV. From these findings, a holistic approach to preventing and managing GBV is recommended. This approach should empower women to seek assistance, mitigating the experiences of social isolation that result in depression and anger towards children, who also need assistance to manage the effects of GBV. The study further corroborated several studies showing that empowerment through women's employment, together with a change in norms, attitudes and roles, constitutes a critical intervention among marginalised women. Further studies on treatment approaches for GBV in rural communities are recommended. --- Data availability Data for the study are available from the first author, R.R. --- Competing interests The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article. --- Disclaimer The views expressed in this article are those of the authors in their individual capacity and do not express the views or opinions of the affiliated institution.
The phenomenon of gender-based violence is a pertinent social problem in South Africa. The fear of reporting gender-based violence contributes to its continuation and to the marginalisation and silencing of victims. The study sought to explore the psychosocial effects of gender-based violence among women in Vhembe district. Methods: An exploratory phenomenological research design was used, and sampling was performed purposively from a population of women who had experienced gender-based violence in a low-resource, rural setting of Vhembe district. Semi-structured telephonic interviews were used as the main method of data collection after permissions and informed consent were sought for conducting the study. Thematic content analysis was applied to convert the participants' statements into a meaningful framework from which to derive the findings. Results: A total of 15 participants aged from 19 to 35 years participated in the study. Their psychosocial experiences of gender-based violence were depression, worthlessness, social isolation and anger directed towards children. This research confirms that gender-based violence remains one of the most challenging problems associated with mental health in Vhembe district. It affirms the need to focus on awareness in rural areas afflicted by patriarchal attitudes, norms and stereotypes. Gender-based violence should be viewed as a human rights violation for victims' protection. The study contributes to the body of knowledge on the experiences of gender-based violence among marginalised women from rural areas.
Abstract Prosopography, networks, life-course sequences, etc. Quantifying with or beyond Bourdieu? This article focuses on the importance of quantifying Bourdieu's 'research programme', linked to the concepts of field, habitus and capital. It presents possible ways of doing statistics within this conceptual framework and argues that continuous methodological development should be undertaken in this direction. To support this argument, the paper highlights the methodology and empirical results of a doctoral dissertation on the field of economic sciences in Switzerland. It insists on the relevance of a prosopographical strategy and advocates new developments in multiple correspondence analysis, as well as the use of sequence analysis and network analysis. The main contribution of these methods is to investigate the profiles of subgroups within fields, and to work on trajectories of accumulation and conversion of capitals and on the structure of social capital. When asking whether one should think with or beyond Bourdieu while undertaking new methodological developments within his programme, this article argues that one should think beyond his own writings, but still within his conceptual framework, which proves particularly relevant for studying power relations between individuals. --- Introduction Pierre Bourdieu's early work on Kabylia was rather ethnographical and qualitative. However, with his growing ambition to turn his theorization of social relations into a "research programme", Bourdieu increasingly relied on statistics and quantification 2 as a methodology in his studies. One of his most famous statistical ambitions can be found in Chapter 2 of Distinction, where he proposes to quantify the volume and the share of both cultural and economic capital within the entire French social space.
He is also well known for being one of the pioneers of the use of multiple correspondence analysis (MCA) in sociology, a method consistent with his relational and spatial way of thinking. However, some scholars working in a Bourdieusian framework, despite calls for the systematization and further development of quantitative analyses 3 , tend to use only the basic features of MCA, and generally distrust other relational methods such as social network analysis. Nonetheless, these methods, if used properly, constitute a real input to further explore and understand old and new features concerning fields, habitus and capitals. The objective of this article is twofold. First, I argue that in the framework of Bourdieu's research programme, quantitative analyses are more than useful to study the structure of fields and forms of capital, even if Bourdieu wrote several books devoid of any statistics. Indeed, while qualitative methods serve the purpose of conducting ex ante exploratory research and exemplifying particular configurations of a social structure, quantitative analyses permit researchers to describe structural tendencies regarding fields, capitals and habitus 4 . I present possible ways of doing statistics when using Bourdieu's framework in sociological research. Second, since Bourdieu's programme was developed in a particular national context and a particular period of time, it necessarily ignores some recent developments of sociology as well as some topics and methods developed elsewhere. Instead of opposing an alleged Bourdieusian "orthodoxy" of "close readers" of the "master's" work, I argue that Bourdieu's 2 Early uses of statistics can be found in: Bourdieu et al., 1963; Bourdieu and Passeron, 1979 [1964]; Darras, 1966; Bourdieu and Darbel, 1979 [1966].
3 To my knowledge, at least six books focus on the interrelations between Bourdieu's programme and quantitative methods, and try to go beyond Bourdieu's use of quantitative analyses: Robson and Sanders, 2009; Grenfell and Lebaron, 2014; Lebaron and Le Roux, 2015; Coulangeon and Duval, 2013, 2015; Blasius et al., 2019. 4 Quantitative and qualitative methods are to be thought of as complementary. Exploratory ethnography and participant observation, interviews, or historical source reading lead to an understanding of the specific capital of a field, which in turn leads to the systematic collection of indicators to describe tendencies of capital detention, which finally leads to the selection of individuals representing particular fractions of capital detention for further interviews or source reading. programme should not be strictly limited to Bourdieu's own work, but be part of a larger and more dynamic community, with continuous development. When asking the question of whether we should think with or beyond Bourdieu when suggesting new methodological developments to his programme, I argue that we should think beyond his written work, but still within his theoretical framework, which proves to be among the most relevant in the study of power relations between individuals. To do so, I share my own research experience, acquired during my doctorate at the University of Lausanne and continued until the writing of this paper. It focused on the historical rise and transformations of economics and business studies in Swiss universities. To study the structure of the Swiss scientific field of economic sciences, I relied mainly on a biographical database of all the professors in the twelve Swiss universities at five benchmark dates, but supplementary data were collected for the professors before and between the initial five dates, and up to the recent period 5 . The paper is organised as follows.
In a first theoretical part, I present Bourdieu's research programme, in particular the concepts of fields, capital and habitus. In the core of this article, I insist on four particular developments regarding Bourdieu and statistics: at the level of data collection strategy, I focus on prosopography; at the level of data analysis, I focus on three methods (multiple correspondence analysis, sequence analysis, and social network analysis). I explain the relevance of each method with regard to Bourdieu's theoretical framework and give examples of how I used them during my doctoral work. Their main inputs are to identify and investigate subgroup profiles in fields, and to analyse trajectories of accumulation and conversion of capitals and the structure of social capital. After that, I summarize the research questions and outline the main findings of my doctoral dissertation, and show how the combination of these methods helped me obtain these results. In conclusion, I come back to my main arguments and propose further uses of new methods. --- Bourdieu's Research Programme: Fields, Capitals, and Habitus According to Swartz, Bourdieu's "political economy of symbolic power is perhaps the most ambitious and consequential project for the symbolic realm since that of Talcott Parsons". Bourdieu developed a "research programme" with the ambition of being applicable to all types of societies, in order to unveil the reproduction of power relations among individuals and groups. It focuses on how stratified social systems of domination and competitive hierarchies persist and reproduce without explicit resistance and without the conscious recognition of individuals. Power stands at the heart of social life, and its exercise requires legitimation through symbolic forms, which constitute and maintain power structures. The struggle for social distinction is a fundamental dimension of social life, where forms of capital, with specific laws of accumulation, exchange and exercise, play a critical role.
Bourdieu addresses the relationship between individual agency and social structure by proposing a theory of practice, which connects action to structures and which undergirds his concept of habitus. Practices occur in structured arenas of conflict called fields, which connect the action of habitus to the stratifying structures of power. Sociology's aim is to perform "socioanalysis", where the task of the researcher is to unveil the "social unconscious" of society, i.e. the hidden dimension of power relations, in order to undermine their legitimacy. And because the social sciences are themselves not exempt from processes of social differentiation, they have to be undertaken under the guide of "reflexivity", understood as a rigorous self-critical practice. While developing his theory, Bourdieu conceived several conceptual tools, which are inter-related and function as a system. Three concepts in particular 6 are critical to understanding Bourdieu's research programme: fields, capitals, and habitus 7 . A field is a relatively autonomous social space, which is defined by its object of dispute and the specific stakes related to this object. Agents struggle for the detention of the "specific" capital of the field and/or its redefinition. This capital is unequally distributed; ergo, there are dominant and dominated individuals, occupying positions according to the volume and composition of the resources they detain. This unequal distribution determines the structure of the field. Fields can refer to a variety of social entities, such as professions, academic disciplines, arts, politics, or the private and public economic sectors. The dominant individuals of all the other fields are themselves involved in the "field of power", a field where the stake is to detain power over all the other sources of power. Capitals are defined as forms of assets or resources, involved in systemic processes allowing their garnering by those who possess them.
Capitals have the potential to accumulate, store and retain advantages. This accumulatory potential sometimes permits unlocking advantages in other fields, through a process of conversion from one capital to another. Capitals can take various forms. Economic capital corresponds to material advantages within mercantile relationships. Cultural capital is related to cultural and educational resources. Social capital refers to the aggregate of the actual or potential resources linked to the possession of a durable network of relationships of mutual acquaintance and recognition. Symbolic capital refers to each form of capital that is unrecognized as a capital and recognized as a legitimate competence. Finally, a specific capital relates to every field's particular capital. Habitus is a system of long-lasting and transposable embodied dispositions which, as a structure, have been structured by the social environment of the individuals, but which also work as structuring devices, in the sense that they generate and organize practices and representations. Habitus organizes the ways that individuals act, think, feel, and perceive. In fields, habitus is a mediating concept between the space of positions of the individuals, defined by the overall structure of the capitals they detain, and the space of position-takings, which corresponds for example to the production of a particular piece of art in the artistic field, or a particular scientific work in the scientific field, as well as to critical judgement on the production of other individuals actively involved in the field. Habitus implies a structural homology between these two spaces. In a more analytical dimension, Bourdieu sums up his methodological approach in terms of three levels. First, one must study the relation of each field vis-à-vis the field of power and see whether individuals from this field are situated in a dominant or a dominated position within this space.
Second, the researcher must map the objective structures of relations between the positions occupied by agents involved in the field, through configurations of capital detention. Third, one must analyse the habitus of agents, the different systems of dispositions they have acquired by internalizing a determinate type of social and economic conditions, which, through a particular trajectory within the field, have found a more or less favourable opportunity to be actualised. This three-layered methodology of field analysis implies considering that the concepts of field, capital and habitus are inter-related, that they function as a system, and that they must not be analysed separately. In the following sections, I focus on the methodological strategy and the use of quantitative methods in my PhD dissertation that are linked to the development of Bourdieu's framework and conceptual tools. In the first section, dedicated to data collection strategy, I define and exemplify prosopography. The second section focuses on methods of data analysis, where I concentrate on three particular techniques: multiple correspondence analysis, sequence analysis, and social network analysis. Each time, I give examples of applications of these methods from my own work. --- Bourdieu and Prosopography: A Data Collection Strategy Prosopography consists in the investigation of the common background characteristics of a group of individuals by means of a collective study of their lives. Prosopography originated in the discipline of history. Its core idea is to delimit a group based on certain characteristics and, on the basis of an available corpus of sources, to collect systematic data on a given set of indicators concerning their social properties, in order to understand certain social mechanisms present in particular groups. A prosopographical approach in Bourdieusian studies is not new and can probably be traced back to the early 1970s.
It is linked to the ambition of gathering systematic data on individuals belonging to the same field. This collection of data should correspond to their resources associated with social origins, educational backgrounds, trajectories and position in the global social space and in the field, measured through detention of specific capital, and their position-takings in matters crucial to the field. The object of study is not the individuals per se, but rather the field's history and structure, which also leads to understanding individual dispositions. Bourdieusian prosopography equally allows quantitative analysis of common trajectories and qualitative focus on particular individual cases, as long as both are understood within the framework of the social structure of the field. Such a strategy implies a circumscription of the population and a thorough knowledge of what is at stake in the field before collecting the data, as well as a constant development of the collection of new indicators. Data collection works as a discovery process, in which new meanings of indicators are found, which provides a very inductive knowledge of the field. Prosopography was central to my doctoral work and allowed qualitative and quantitative insights into the Swiss field of economic sciences. Data collection took place as part of a collective effort, within a research project on academic elites 8 related to the "Swiss elite database" 9 platform, created more than ten years ago and currently containing biographical entries on more than 35,000 members of Swiss elites. Elites are defined, according to a positional definition, as the individuals who sit at the helm of the most important institutions of Swiss society. The field was delimited on the basis of the position of university professor in economic sciences. To identify this population, I relied on a source centralising all the academic personnel in Switzerland: the Swiss university directories, published between 1907 and 2008.
I completed the collection of professor names with the help of university annual reports. To collect systematic biographical data, I relied on several sources: the Swiss Historical Dictionary, the Who's Who in Switzerland, several university anniversary monographs, university annual reports, obituaries in newspaper archives, online curricula, information within PhD dissertations, university archives, and internal digital databases. The Swiss elite database is a relational database made up of multiple linked tables. Three tables were particularly relevant for this collection: a table centred on individual properties; a table centred on institutions; and a mandate table linking the two through professional functions, each with a start date and an end date10. I collected information on such properties as education, careers, and academic work of these professors. Two properties were particularly challenging to find systematically: the occupation and religious confession of the parents. These inherited properties are often omitted from public biographies, since they belong to "private" life; I eventually had to abandon the idea of treating them quantitatively. Besides these, I coded the sex of the professors. I also collected the effective resources composing the specific capital of the field. Indicators of institutional academic power resources were: university vice-chancellor; department dean; board member of disciplinary societies; board member of diverse academic societies. Indicators of external resources were: board member or executive director of a corporation; board member or executive director of one of the 110 largest Swiss firms; board member of the main economic interest associations; member of the federal parliament or government, or of a cantonal government; senior civil servant; member of an expert committee for the federal administration; place in the Neue Zürcher Zeitung ranking of economists.
Indicators of social capital were: the number of members of the Swiss political, administrative, economic, and academic elites whose PhD thesis the professor supervised; the number of elite members sharing the same PhD supervisor; the number of co-applicants in research projects funded by the Swiss National Science Foundation (SNSF). As indicators of international resources, I considered: citizenship at birth; country of the PhD; countries of stays outside Switzerland; PhD written in English in a non-English-speaking country. For scientific resources, I collected: amount of funding by the SNSF; discipline of the co-applicants of research projects, as a scale of interdisciplinarity; number of citations in the Web of Science; number of citations in Google Scholar; place in the ranking of authors in the IDEAS/Research Papers in Economics (RePEc) database. Finally, I coded a set of position-takings: use of mathematics and statistical techniques in the PhD dissertation; domain of specialty. Table 1 summarizes the number of economics and business professors for the dates considered, as well as their gender and citizenship distribution11. Three trends emerge from this table: 1) the proportion of business studies professors increases over the whole 1910-2000 period; 2) the field is overwhelmingly male, with no women professors before the 1980s and only a few in the recent period; 3) the share of non-Swiss economics professors forms a U-shaped curve, with a large proportion of foreigners at the beginning and the end of the period, whereas in business studies internationality remains more modest until the 1980s, with a large increase since then.
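The relational structure described earlier (an individuals table and an institutions table linked by a mandate table carrying start and end dates) can be sketched minimally in SQLite. All table names, column names and values below are hypothetical illustrations, not the schema of the actual Swiss elite database.

```python
# Minimal sketch of a relational prosopographical database: individuals
# and institutions linked by dated mandates (illustrative schema only).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE individual (id INTEGER PRIMARY KEY, name TEXT, sex TEXT);
CREATE TABLE institution (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE mandate (
    individual_id INTEGER REFERENCES individual(id),
    institution_id INTEGER REFERENCES institution(id),
    function TEXT, year_start INTEGER, year_end INTEGER
);
""")
con.execute("INSERT INTO individual VALUES (1, 'Example Professor', 'F')")
con.execute("INSERT INTO institution VALUES (1, 'University of Example')")
con.execute("INSERT INTO mandate VALUES (1, 1, 'full professor', 1995, 2008)")

# Reconstruct a dated career line by joining the three tables.
row = con.execute("""
    SELECT i.name, m.function, s.name, m.year_start, m.year_end
    FROM mandate m
    JOIN individual i ON i.id = m.individual_id
    JOIN institution s ON s.id = m.institution_id
""").fetchone()
print(row)
```

The mandate table is what makes the database "relational" in the sense used above: one row per function held, so careers, board memberships and other indicators can be counted or sequenced by querying it.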
Prosopography, as a data collection strategy investigating a set of common properties of a group of individuals, is compatible with a Bourdieusian framework, since it allows working on a delimited field, on the distribution of capitals, and on the homology between positions and position-takings, while focusing on particular individual habitus. When Bourdieu himself used prosopography in his works, it is not always clear whether he and his team collected their data as rigorously as historians usually do. Nonetheless, a large group of Bourdieusian scholars have since used prosopography in a thoroughly rigorous way, and it has become one of the "classical" collection strategies for studying fields. However, prosopography in itself only allows basic statistical operations, such as counting the absolute or relative shares of certain properties. In order to go a step further in the comprehension of a field, we need statistical techniques that can describe its structure, notably from a multivariate perspective. --- Bourdieu and Quantitative Methods In this section, I develop three quantitative methods: multiple correspondence analysis (MCA); sequence analysis (SA); and social network analysis (SNA). In each case, I trace parallels with Bourdieu's research programme and insist on new developments of quantitative analysis within Bourdieu's framework, using my dissertation as material. These three methods allow classifying/categorizing and identifying subgroups, as well as working on typologies and subgroup profiles; I mostly focus on MCA for this matter. SA permits us to work on the temporality of individual lives, by looking at trajectories of capital accumulation and conversion. Finally, with SNA, one can study the configuration of a particular capital, i.e. social capital: its composition, its size, and the more or less favourable position of individuals in a whole network, based on several measures of centrality.
I develop these different contributions in more detail in the following sections. Each of these methods has been used and developed in the social sciences in association with a particular body of theoretical literature. MCA was developed, amongst others, alongside the Bourdieusian literature, but new features of the method can build bridges with other sociological theories or other disciplines. Sequence analysis was first used in sociology by Andrew Abbott and is currently used by life course sociologists. Social network analysis is the method of sociologists working on inter-individual ties and connections. By using these methods, I am able to establish a dialogue between Bourdieu's research programme and other theories. --- New Developments of Multiple Correspondence Analysis: Study of Subgroups Multiple correspondence analysis has been the most widely used method of quantifying with Bourdieu. Bourdieu himself used this method on several occasions. MCA is a geometric representation of the structure of a multiple cross-table between a set of active variables, so called because they contribute to constructing the space. The complexity of the association between these variables is reduced to different dimensions of opposition among levels of variables. The first dimension, or axis, represents the most dominant opposition, the second axis the second most dominant, and so on. Each axis constitutes a dimension in a multi-dimensional space, and each level of a variable and each individual is located as a point in this space. The closer individuals are in this space, the more likely they are to share common properties. Conversely, the closer levels of variables are in the space, the larger the group of individuals who tend to share them. A set of illustrative or supplementary variables, which do not contribute to the construction of the space, can be projected into it.
The contribution of a given variable or level of variable to an axis indicates its importance in the construction of that axis. Levels of variables and variables with a contribution above the average contribution are emphasized in the interpretation of the axes. We normally interpret a certain number of axes given indicators of their explained variance, or inertia12. Bourdieu was among the first sociologists to use this method. He found it particularly attractive, since it makes it possible to spatialize individuals on the basis of their social properties and resources, projected as active variables. Individuals are thus situated in a particular position within the field. Thereafter, a set of position-takings can be projected as illustrative variables, and the homology between the space of positions and the space of position-takings can be uncovered. On this basis, the existence of particular individual habitus can be deduced, and the specific capital of the field can be visualised through the different resources of individuals. Bourdieusian scholars have continued to use this method until recently13. Nonetheless, even if MCA has been widely used for a long time, developments of the method have not always been exploited. I recommend it for the study of subgroups within fields, which has not been explored enough until now. As a first solution, ascending hierarchical clustering is usually performed. It consists in taking the coordinates of individuals on a given group of axes and using a cluster algorithm to identify groups of individuals who, based on their positions on all the given axes, are the most similar to each other, while the groups remain as dissimilar as possible from one another. Once the groups are identified and their number chosen, we characterize them according to their association with several levels of variables.
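This workflow (individual coordinates extracted from an indicator table, then ascending hierarchical clustering on those coordinates) can be sketched in Python with numpy and scipy. The data are invented, and the normalisation below is a simplified correspondence-analysis scheme, not a full MCA as implemented in dedicated packages.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy disjunctive (indicator) table: 6 individuals x levels of two
# categorical variables (e.g. "PhD country" in {CH, US} and "dean" yes/no).
Z = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# Correspondence-analysis style normalisation of the indicator matrix.
P = Z / Z.sum()
r = P.sum(axis=1)            # row masses (individuals)
c = P.sum(axis=0)            # column masses (levels of variables)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of individuals on the first two axes.
coords = (U / np.sqrt(r)[:, None]) * sv
axes12 = coords[:, :2]

# Ascending hierarchical clustering (Ward) on the axis coordinates,
# then cut the tree into two groups.
tree = linkage(axes12, method="ward")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)
```

Individuals with identical profiles (rows 1-2 and rows 4-5) necessarily land at the same point in the space and are merged first by the clustering, which is exactly the logic of qualifying groups by shared levels of variables.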
As a second solution, I recommend using class-specific MCA (CSA), which allows studying the specific traits and capital holdings of a group of individuals while conserving the distances between these individuals defined in the initial space. To do so, individuals are split into several categories based on a given illustrative variable. CSA is then performed on a given subgroup, which allows finding new axes of opposition among this particular set of individuals. The principal axes of this subgroup are then compared to the axes of the MCA for the whole group by looking for associations between the old and the new axes; thereafter, the contributions of the active variables to the new axes are compared to the old ones. CSA is particularly useful for working on subgroups, which, depending on data availability, can be coded according to gender, sexuality, race/ethnicity, religion, social class, age, etc. I give here an example of MCA and CSA, using the group of professors of economics in Swiss universities in 2017. It allows me to uncover the structure of this disciplinary field and the configurations of its specific capital. I perform an MCA with 13 active variables14 and 34 active levels of variables15. I retain the first three axes for interpretation16. These three axes are represented horizontally in Figures 4-6,17 displaying only the levels of variables which contribute to each axis above the average contribution. Axis 1 mostly displays an opposition linked to internationality (related to the US and the UK) vs. localism, as well as an opposition between scientific vs. "worldly" resources. Having obtained a doctorate in an Anglo-Saxon country and being ranked high in the RePEc ranking are opposed to having obtained a doctorate in Switzerland and to a high volume of institutional capital, such as occupying executive positions in Swiss institutions or being recognized in the Swiss media. Axis 2 is structured according to the volume of scientific and academic power resources and different international resources. Resources linked to a high number of citations in Google Scholar, as well as positions of faculty dean and member of the executive committee of academic organisations, are opposed to a lower number of citations and a lower rank in the top 10% of the RePEc ranking. Likewise, a PhD obtained in the US is opposed to a PhD obtained in Germany. Finally, Axis 3 mainly displays an opposition between the absence vs. a high volume of scientific resources: having no Google Scholar profile and no position in the top 10% of the RePEc ranking is opposed to high volumes of resources according to these same indicators.

13 A list of "best practices" can be found in Lebaron.
14 They are the following: number of citations in Google Scholar; rank in the RePEc ranking; country of the PhD; stay in the US apart from the PhD; dissertation in English in a non-English-speaking country; "local" career; executive board member of a corporation; non-executive board member of a corporation; position in the Neue Zürcher Zeitung ranking of economists; expert committee member for the Swiss federal administration; member of the board of the Swiss Society of Economics and Statistics; department dean; science policy "mandarin".
15 Analyses are done with the R package soc.ca.
16 Axis 1 accounts for 56.5% of the modified rates, Axis 2 for 21.4%, and Axis 3 for 12.1%.
17 We normally should not trust the visual representation of variables in MCA and should rather read tables of the contributions of variables and levels of variables to each axis in order to interpret the results. Nonetheless, in order to save space, these tables are not presented here; they are available upon request from the author.

In a second step, I worked on subgroup profiles to uncover inequalities of resources within the field according to the sex and citizenship of the individuals. I divide them according to two illustrative variables: gender and citizenship.
Table 2 shows the association between the three axes of the MCA and the axes of the CSA for those groups18.

18 According to the modified inertia rates for each level of variable, I retain two axes for the categories "woman" and "Swiss", and three for the categories "man" and "non-Swiss". The graphs of the levels of variables contributing above the average contribution to each of these CSA axes are displayed in the appendix and must again be read from left to right.

Notes: Cosines and correlation coefficients give the same information. Correlation coefficients are standardized between -1 and 1, where -1 is a perfect negative association, 1 a perfect positive association, and 0 no association at all. For cosines, expressed as angles, 0° shows a perfect positive association, 180° a perfect negative association, and 90° no association whatsoever. Strong associations are in bold characters. The percentage next to the number of axes is the percentage of cumulated modified rates they account for.

Only 10% of the professors in 2017 are women, and the logics of opposition of capitals are particularly male-centred. However, the structure of the CSA axes is quite different for women, in particular if we look at the contributing levels of variables. Axis 1 of the women's CSA builds along the volume of resources, based essentially on Swiss institutions of particularly important external power, without the scientific capital component identified earlier. Women do not distinguish themselves according to important scientific assets vs. important external resources as the primary source of distinction within the field, but rather oppose those who hold a massive amount of capital external to the field to those who do not. Axis 2 is organised around an opposition along the volume of scientific capital, as well as different national sources of international resources. It is very similar to the second axis of the MCA, except that the volume of scientific capital does not oppose a high vs.
a low volume, but rather a relatively low volume vs. its complete absence. This two-axis structure suggests that women do not strongly differentiate themselves according to scientific logics, mainly because they do not hold a large amount of this type of capital, and from this point of view occupy dominated positions when compared to men19. The CSA on the Swiss professors shows that the two retained axes relate to the volume of resources. Axis 1 mainly corresponds to an opposition in the volume of external resources, while Axis 2 opposes high volumes of resources related to the same institutions and high volumes of scientific resources to the absence of these resources. For the non-Swiss professors, Axis 1 and Axis 2 relate to a combination of diverse indicators of the volume of scientific resources, while Axis 3 mainly represents an opposition linked to the country of the PhD. Swiss professors are much more invested in worldly logics related to institutions of power, while non-Swiss professors in Switzerland follow much more scientific logics. MCA has been developed in the social sciences in accordance with Bourdieu's programme to study field structure by spatializing individuals' positions and position-takings on the basis of their capitals. Nonetheless, new features of MCA can add new theoretical and methodological developments to Bourdieu's analyses. The study of subgroups within fields can be carried out through cluster analysis and CSA. In particular, CSA allows looking at the structure of capitals within a subspace of the field and characterizing it thoroughly, and has not been widely used until now. It could help study differences in the uses of capitals, as well as their unequal distribution among different groups20. This could address new questions related to inequalities based on social properties, linking Bourdieusian methodology to other disciplines, such as gender21 or postcolonial studies.
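The comparison of old and new axes through correlation coefficients, as reported in Table 2, can be illustrated numerically. The data below are simulated, and re-extracting principal axes within the subgroup by plain PCA is a simplified stand-in for a proper class-specific MCA, used here only to show how the axis-to-axis correlations are read.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated coordinates of 40 individuals on two axes of a full-sample
# MCA (invented values standing in for real axis scores).
full_axes = rng.normal(size=(40, 2))

# A subgroup of interest, e.g. the women in the sample (here simply the
# first 15 rows).
sub = full_axes[:15]

# Simplified stand-in for CSA: re-extract principal axes within the
# subgroup (PCA on the centred subgroup coordinates).
centred = sub - sub.mean(axis=0)
U, sv, Vt = np.linalg.svd(centred, full_matrices=False)
sub_axes = centred @ Vt.T      # subgroup scores on its own new axes

# Compare old and new axes through correlation coefficients, as in
# Table 2: |r| close to 1 means the subgroup reproduces an existing
# opposition; |r| close to 0 means a new, subgroup-specific opposition.
for j in range(2):
    for k in range(2):
        r = np.corrcoef(sub[:, j], sub_axes[:, k])[0, 1]
        print(f"old axis {j + 1} vs new axis {k + 1}: r = {r:+.2f}")
```

Because the new axes span the same subgroup space, each old axis's squared correlations with the new axes sum to one; what Table 2 reports is how that association is distributed across the new oppositions.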
--- Sequence Analysis and Life Course Trajectories of Accumulation and Conversion of Capitals Another focus on Bourdieu's theoretical framework can be realised through the analysis of life course trajectories. Bourdieu did not focus on individual career histories at the aggregate level, since he relied more on a "snapshot" approach. Nonetheless, working on the dynamics of fields, habitus, and capitals constitutes a promising development of his theory. In this sense, sequence analysis is a relevant method, since it makes it possible to work on the timing, ordering, and duration of sequences in life course trajectories. Sequence analysis is the statistical study of states or events. It consists in the comparison of chronological sequences of states, which can be characterized according to their similarities and dissimilarities. Within a sequence, a time unit is attributed to each state. Through an optimal matching algorithm, SA measures the degree of similarity of sequences taken two by two, and a metric distance is created by attributing costs to the operations needed to transform one sequence into another. Three operations are possible for these transformations: insertion, deletion, and substitution. The higher the costs, the more dissimilar the sequences. Each substitution operation can be associated with a cost, which can be either constant, or theoretically or empirically defined. Once these costs are established, a form of automatic classification, generally hierarchical clustering, is used. It consists in grouping similar sequences into homogeneous groups, which differ from one another as much as possible. The number of clusters can be chosen empirically or by using a variety of indicators of fit. In sociology, SA is useful to compare trajectories regarding particular attributes along the life course of a given group of individuals. Combining SA with Bourdieu's framework is possible by conceptualising it through a more biographical approach. A particular habitus cannot be understood without thinking in terms of temporality: to become a "structured structure", individuals' socialisation takes time, and habitus is therefore a by-product of an individual and a collective "history". Capitals are involved in long-term processes of accumulation and conversion, and social trajectories are understood as the succession of positions in the successive states of the field within which individuals evolve. Capitals acquired or inherited in particular fields can be accumulated over time. This accumulation provides advantage within a given field. Moreover, capitals can be converted into other capitals with influence in other fields. Life courses can thus be conceptualised as movements through social space and as participations in several fields.

20 Bourdieu thematised the volume and composition of capitals, but also a "third" dimension, corresponding to the evolution across time of this volume and composition. CSA can in fact help study this evolution and, thus, field "history". Indeed, by performing MCA on individuals evolving at different periods in time, and by thereafter performing CSA on cohort variables, it is possible to compare the oppositions of capitals at different periods. I presented my first results on the evolution of the capital composition between 1980 and 2000 under the title "Forms of Social Capital in the Field of Economists. Between Scientific and Worldly Power" at the "Forms of Power in Economics: New Perspectives for the Social Studies of Economics between Networks, Discourses and Fields" conference at the University of Giessen. This communication will be part of a forthcoming collective book on the power of the economists, for which I will be a co-editor.
21 See Hjellbrekke and Korsnes using CSA to study women's capitals in the field of power in Norway.
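The optimal matching distance just described can be sketched as an edit distance with a constant indel cost and a substitution-cost matrix. The states and costs below are invented for illustration, not those used in the dissertation.

```python
# Minimal optimal-matching distance between two career sequences.
# Illustrative states: E=education, P=postdoc, A=associate, F=full prof.

INDEL = 1.0
SUB = {  # symmetric substitution costs between pairs of states
    ("E", "P"): 1.0, ("E", "A"): 2.0, ("E", "F"): 2.0,
    ("P", "A"): 1.0, ("P", "F"): 2.0, ("A", "F"): 1.0,
}

def sub_cost(a, b):
    if a == b:
        return 0.0
    return SUB.get((a, b), SUB.get((b, a)))

def om_distance(s, t):
    """Edit distance with insertion, deletion and substitution costs."""
    n, m = len(s), len(t)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * INDEL
    for j in range(1, m + 1):
        d[0][j] = j * INDEL
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + INDEL,                             # deletion
                d[i][j - 1] + INDEL,                             # insertion
                d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),  # substitution
            )
    return d[n][m]

# Two toy trajectories, one time unit per letter: a "linear" career vs.
# one with a longer postdoctoral phase before the first appointment.
a = list("EEEPPAAFFF")
b = list("EEEPPPPAFF")
print(om_distance(a, b))
```

The higher the resulting cost, the more dissimilar the trajectories; computing this distance for every pair of individuals yields the matrix that the subsequent clustering step works on.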
Positional changes within a specific field tend to correspond to slow and continuous accumulation processes, while changes from one field to another are often related to more or less rapid processes of conversion of one capital into another, which are likely to be more fundamental at the level of the subjective experience of individuals. To each position occupied within the social space, measured as a specific state in SA, is attached a certain type and volume of capital, which can be accumulated and, at a certain point and under certain conditions, converted. SA makes it possible to investigate the structure of acquisition, accumulation, and conversion of various capitals, and is in this sense much richer than displaying the volume and structure of capitals in an ahistorical manner, as the basic features of MCA do. SA allows comparing diverse trajectories of accumulation and conversion of capitals22. I give a quick example stemming from my PhD data. I focus on the academic paths of accession to a professor position between the ages of 21 and 5023 for professors of economics and business at the dates of 1957, 1980 and 2000.
To do so, I coded six mutually exclusive states: 1) a period of education, which runs from the age of 21 until the age of the doctorate; 2) extra-academic positions, associated mostly with professional occupations in the private sector and the state administration, which relate to forms of capital external to the field; 3) postdoctoral positions: pre-tenured positions between the end of the educational period and the appointment to a professor position, representing the "lowest" volume of resources within academia; 4) associate professor positions: the "lower" form within the symbolic hierarchy of professor positions; 5) full professor positions: the higher form of symbolic resources within this hierarchy; 6) institutional executive positions within universities and other academic societies, associated with academic power resources. After applying a theoretically determined substitution-cost matrix, I ran a hierarchical cluster analysis using a Ward criterion. I then empirically chose a partition into four clusters24. Each cluster corresponds to a particular path of accumulation and conversion of capitals within academia. The clusters are represented in Figure 7. These typologies allow me to answer the question of how the professional trajectories of economic sciences professors are structured in terms of accumulation and conversion of capitals25.

24 Analyses were performed with the TraMineR package in R.
25 These results were presented under the title "Pathways to Professorships in Swiss Economics and Business Studies Departments since the 1950s. The Importance of Scientific, International and Social Capitals" at the 5th annual conference of the Society for the History of Recent Social Science, University of Zurich, June 8-9, 2018. They are part of ongoing work on the different academic pathways to a professor position in the Swiss context of economics and business studies.
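The step from a pairwise distance matrix to a chosen partition can be sketched as follows. The distance matrix is invented, and average linkage is used as a metric-agnostic stand-in for the Ward criterion applied in the dissertation (where the analysis was done with TraMineR in R).

```python
# Clustering trajectories from a precomputed pairwise distance matrix,
# assuming the distances come from an optimal-matching step upstream.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy symmetric OM distance matrix for 6 trajectories (invented values:
# three pairs of near-identical careers, far from each other).
D = np.array([
    [0, 1, 5, 6, 5, 6],
    [1, 0, 6, 5, 6, 5],
    [5, 6, 0, 1, 5, 6],
    [6, 5, 1, 0, 6, 5],
    [5, 6, 5, 6, 0, 1],
    [6, 5, 6, 5, 1, 0],
], dtype=float)

# linkage expects a condensed distance vector; cut the tree into 3 groups.
tree = linkage(squareform(D), method="average")
clusters = fcluster(tree, t=3, criterion="maxclust")
print(clusters)
```

The number of clusters is chosen here by fiat (`t=3`); in practice, as noted above, it is chosen empirically or with indicators of fit, and each resulting cluster is then interpreted as a typical path of accumulation and conversion.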
Figure 7: clusters of trajectories, 1957, 1980 and 2000. Note: the X axis displays age between 21 and 50 years old; the Y axis displays frequencies.

Extra-academic conversion trajectories display an accumulation of extra-academic resources early in the professional trajectory, and only late conversions into associate professor positions. These are mostly the careers of "practitioners", who are able to convert these external resources into academic ones, but only in association with a medium-level volume of academic capital. Excellence cursus honorum careers correspond to a much more "linear" path to a professorship, with postdoctoral positions until the thirties, a relatively short period as associate professor and a long time as full professor. These trajectories are associated with a long-term but steady process of accumulation of academic resources, which are highly valued on a symbolic level and are deemed "excellent" academic careers. Science mandarin conversions begin similarly to the previous cluster, up to the full professor appointment. What changes is that these professors are able to convert the academic resources they have accumulated into academic institutional positions. They are without a doubt part of the group of professors with very high influence outside the economic sciences field, leading science policy in Switzerland. Finally, early excellence careers correspond to the quickest accumulation of academic resources and the fastest full professor appointment. To some extent, they are also able to convert these rapidly accumulated assets into academic executive resources and sit on important academic boards. SA is useful to study the timing, order, and duration of different sequences of states.
This method makes it possible to study individual trajectories of accumulation and conversion of capitals in relation to one another, which provides much richer information than studying temporally "flattened" configurations of capitals through the basic ahistorical features of MCA26. However, these two methods should be considered complementary. One of the main limitations of SA is that it does not take into account the ways in which careers are located in the spatial social structure. The most fruitful strategy would be to combine both methods, for example by projecting typologies of sequences into MCA to assess the relevance of adding the life course to a space of "snapshots" of positions, considering career structures as a particular resource within a social space. One must also note that, as with MCA, it is possible to work on subgroups through SA, either by identifying typologies of careers through clustering, as done here, or by working on subspaces defined on the basis of particular properties. --- Social Network Analysis and Social Capital Despite having dedicated almost his entire work to the relations and differences among social groups, Bourdieu's writings on the ties between these groups understood as power resources, or in other words on social capital, remain very scarce. Nonetheless, in recent years some scholars have tried to operationalize Bourdieu's work on social capital through network approaches. Here I present social network analysis, possible bridges with Bourdieu, and an example from my research. SNA is a relational method that studies ties between either individuals or institutions, or both at the same time. SNA can help us study the position of an individual in a network, which is based not upon geometrical distances but on links among the whole group of individuals. Graphically, networks are represented by "edges" (ties) between "nodes" (individuals or institutions). Edges can be undirected or directed.
A "structural" network is delimitated through more or less institutionalised social boundaries and represented in its entirety. An "ego-network" corresponds to individuals linked to a particular individual . Graphs in SNA allow a visual interpretation of the data, by observing the structure, density, and dispersion of the ties. To be read more easily, graphs are often dispersed according to spatialisation algorithms, which generally minimize the variance of the edges, while trying to avoid that edges cross each other, and that nodes overlap. To investigate in more detail the structure of the network, indicators of density are available and, at the individual level, indicators of node centrality. To study subgroups, several algorithms allow identifying subcomponents . SNA is useful for studying the configuration of a particular form of capital developed, among others, by Bourdieu: social capital. Bourdieu always encouraged the use of MCA as a relational method and, by doing so, rejected other relational methods such as SNA . Nonetheless, his concern was more with the kind of relationship studied than with the method. SNA, when studying Bourdieu's account of social capital 27 , should focus on objective relations, i.e. power relations within a space structured by the possession of capitals, rather than interpersonal or intersubjective relations, usually studied with this method . In Bourdieu's sense, social capital needs to be considered in relation with the volume of other individuals' resources , and can be considered as a form of symbolic capital as well, since having a large volume of network relations also leads to "prestige" . The volume of social capital of an individual depends on the size of his/her network and on the volume of other capitals detained by each agent he/she is connected to . A large volume of well configured social capital provides an individual with a favourable position within a given field. 
SNA is a complementary tool to MCA, since it focuses on existing objective links of mutual acquaintance and recognition rather than on relations between individuals within a given volume and structure of capitals. Some scholars encourage a joint use of both methods. I provide an example from my work: the network of co-applications for projects funded by the Swiss National Science Foundation28 among economic sciences professors in 200029. The link between individuals is defined by having jointly been funded for a research project, which leads to a one-to-four-year research collaboration on average. This link is an institutional link, which formalizes a more or less close relation of acquaintance and joint scientific practice.

27 Social capital understood as a resource has been widely studied. It has been understood as a source of social coordination and integration or, as inspired by Granovetter, in terms of network structures and assets, where the importance of this resource depends on the relative centrality of an individual's position. Lin develops and systematizes a theory of social resources based on Granovetter's distinction between strong and weak ties, while Burt stresses the exclusivity of a position in a network based on information access, control or diffusion. In this latter model, individuals can take advantage of "structural holes", understood as positions which control and mediate information between two otherwise unconnected network components. Different uses of centrality indicators stem from these diverse understandings of social capital as a resource. I focus on Bourdieu's understanding of social capital, but nevertheless acknowledge other acceptations of the concept as contributions to Bourdieu's relational sociology, which guide the use of different centrality measures understood as different forms of social capital.
28 The data stem from the SNSF "P3" database: http://p3.snf.ch/.
I investigate three particular forms of social capital within the field of economic sciences, to assess individuals' centrality, quality of resources, and influence within the field understood as a network of relations. First, interdisciplinarity can be associated with the specific capital of the field. A high level of interdisciplinarity must be contrasted with a lower level, and gives information on the composition of each professor's ego-network, in terms of the particular qualities of the other individuals he or she is connected to. Second, the absolute number of individuals connected to a professor gives information on the size of an individual network and constitutes a raw quantitative measure of an individual's social capital. Third, the more or less favourable position in the network gives information on the number of individuals more or less directly connected to a professor's personal network, e.g. in terms of being on the shortest paths between pairs of individuals, providing insights on the influence of each professor. Figure 8 represents the size of the network by degree30 centrality, and Figure 9 represents the position in the network by betweenness31 centrality. Disciplines are represented by the colours of the nodes. However, since visualization in networks cannot work as proof, I focus on a small group within the network, i.e. the ten most central professors according to degree and betweenness centrality. This allows me to work on the particular profile of individuals with a high volume of social capital, while considering some of their social properties32, focusing on some features of the structure of social capital within the field.

29 The network is composed of 1672 ties between 751 individual nodes. Among them, 156 professors of economics and business studies in 2000 have at least one tie. The 105 other professors are not represented in the graphs.
30 An indicator of centrality corresponding to the number of nodes to which a given node is connected. 31 Another indicator of centrality, based on how often a given node falls on the shortest path between two other nodes. The position within the network can be assessed by a range of other centrality indicators as well. 32 Sex, university of teaching, discipline, teaching topic, number of research projects obtained through the SNSF, money granted through these projects, and interdisciplinarity rate. To better assess degree centrality, I also give information on the weighted degree, here the total number of scientific collaborations, independently of the number of other individuals connected to a professor. Looking at some properties of the ten most central professors according to degree centrality, i.e. the ones endowed with the highest volume of social capital from the viewpoint of direct ties to a large number of other researchers, one sees that they largely consist of men, that they mostly teach in French-speaking universities and that they are mostly economists. In business studies, the most connected professors teach business informatics. These latter professors are particularly interdisciplinary, holding a particularly heteronomous form of social capital. By contrast, two of the three econometricians have the highest disciplinary ratio of the whole group. Some, like the business scientist C. Pellegrini, have obtained a very large number of projects compared to the others, involving a very large amount of money, and have collaborated more than once with the same group of people. This type of researcher shows a particularly tightly interconnected personal network and an intensive research practice, with several collaborations with the same people. Others, such as the Basel professor G.
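Both centrality indicators defined in footnotes 30 and 31 can be computed directly. A sketch on a toy undirected graph, using only the standard library: degree is a simple count of neighbours, and betweenness follows Brandes's algorithm for unweighted graphs (the graph and labels are invented for illustration).

```python
from collections import deque

# Toy undirected collaboration graph as an adjacency list.
adj = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d", "e"],
    "d": ["c"], "e": ["c"],
}

# Degree centrality (footnote 30): number of direct collaborators.
degree = {v: len(nbrs) for v, nbrs in adj.items()}

def betweenness(adj):
    """Brandes's algorithm for unweighted, undirected graphs (footnote 31)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1      # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                                   # BFS from s
            v = queue.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                      # accumulate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: x / 2 for v, x in bc.items()}           # undirected: halve

bc = betweenness(adj)
top = sorted(adj, key=lambda v: (bc[v], degree[v]), reverse=True)
print(top[0])  # "c": highest degree and the key broker in this toy graph
```

Ranking nodes by these two scores reproduces, in miniature, the "ten most central professors" selection used in the text.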
Sheldon, have obtained only a small number of projects, involving a lower amount of money, and, despite being tied to almost as many researchers as Holly, have collaborated with them (with one exception) only once. Degree centrality thus constitutes a relevant first indicator to map the structure and inequalities of social capital within a field. Contrary to the two previous methods, SNA permits the study of the configuration of one particular capital, namely social capital: it is possible to investigate particular features of individual networks, among others their composition, their size and their position. --- Using Prosopography and Relational Methods to study the Rise and Transformations of Economics and Business Studies in Switzerland In this penultimate section, I summarize more precisely the outline of my doctoral dissertation, as well as its main questions, arguments and findings. I stress how the combination of a prosopographical strategy and of the relational methods developed in this paper helped me to obtain these results. As stated above, I studied the structure of the scientific field of economic sciences, divided into economics and business studies in Switzerland, focusing on university professors. I asked two global research questions: 1) How did economic sciences professors acquire power and influence in Swiss universities, as well as in the state and the private sector? 2) How was the space of economic sciences professors structured according to autonomous and external logics, and how did this structure change during the 20th century? To answer these questions, my dissertation was divided into four main chapters. A first chapter focused on the rise of economic sciences in the Swiss academic field.
Having followed a thorough prosopographical strategy on all university professors in economics and business studies from 1819 onwards, as well as other archival research on universities, I detailed, through descriptive statistics, the numerical increase of students and academic personnel compared to other disciplines and the institutional and disciplinary affirmation of economics and business studies. I showed that professors of economic sciences have been able to accumulate a very large amount of capital of academic power, which relates to executive positions of power in academic institutions: in the recent period, for example, economic sciences were the discipline most represented among university vice chancellors. A second chapter centred on these professors' place in the field of power, as occupants of elite positions in the political, economic and administrative fields. Again, I collected biographical data and relied on already available data on Swiss elites, thanks to former research projects. I investigated the profile of the professors occupying such positions, the structure of their academic and extra-academic careers, and their indirect social capital through PhD supervision of future elite members. I showed that among the university professors who were members of Swiss elites during the recent period, economic sciences were the discipline most represented among economic elites, and the second among political and administrative elites, after law. Thanks to sequence analysis, I also showed a clear separation between "pure" academic careers and careers partially turned towards extra-academic positions, and the persistence of this separation over time. A third chapter focused on the international dimension of Swiss economic sciences. It studied the distribution of professors by Swiss or foreign citizenship, as well as professional stays abroad, as a doctoral or postdoctoral researcher, or as a professor.
The structure of careers at the national and international level was investigated, in particular by considering the international hierarchy of national spaces, where a stay in the most highly ranked North American or British universities matters particularly. Sequence analysis helped to identify the more national and the more international professorial careers. International careers were put in relation to scientific capital, which was measured through citations in "prestigious" journals, using data from the Web of Science Social Sciences Citation Index. I showed that Swiss economic sciences experienced a process of "nationalization" of professor profiles after World War I, and of re-internationalization since the 1970s. I also observed a significant association, from the Swiss viewpoint, between scientific capital and internationality, but also a definitional shift of this link during the 20th century, from stays in Germany and France to stays in the US. A fourth and final chapter focused on the structure of the space of economic sciences, and particularly on the space of positions of the professors related to the distribution of diverse capitals, as well as on their position-takings on several ways of doing science. To do so, I used multiple correspondence analysis and social network analysis, the latter being useful to obtain an indicator of interdisciplinarity, by assessing the disciplines of all the scientists to whom a professor was connected through a network of scientific collaboration projects. Thanks to this methodology, I observed a clear opposition between a scientific and international pole, and a pole holding above all national resources, as well as capital of academic power and economic and political capitals. The scientific pole increasingly used mathematics over time, and each of the two poles had its own research areas.
Dominance within the space, apart from resting on mathematical abstraction and the study of particular objects, was also reflected in a relatively sustained interdisciplinarity, particularly with the natural, experimental, technical, or medical sciences. In conclusion, I argued that it is by this "division of labour" between two poles of professors, those linked to scientific and international "excellence" and those related to the administration of universities, corporations and the state, and by the historical strengthening of this division, that economists and business scientists were able to be "everywhere" and to reinforce their power in Swiss society. --- Conclusions In this article, I presented Bourdieu's research programme and its possible developments with the help of quantitative data analyses, exemplifying each method through some of my PhD work. In a part dedicated to data collection, I underlined the relevance of using a prosopographical strategy, in order to circumscribe the individuals evolving in a field and gather systematic data on their properties, understood as different resources having influence in this given space, which allows studying its history and structure. In the section on data analysis, I presented three methods in particular. 1) Multiple correspondence analysis has been used by Bourdieu and other researchers to study empirically the distribution of capitals in a given field, as well as the structural homology between positions and position-takings. I proposed, as a further development of MCA rarely undertaken, the study of subgroups within a field through class-specific MCA. 2) Sequence analysis allows working on the accumulation and conversion of capitals during individual lives, with a specific focus on the timing, order and duration of each state of capital detention in the sequence.
To each state in a life-course sequence corresponds a certain volume and type of capital, which can be accumulated and, under certain conditions, converted into another. Typologies of careers can be identified, and it is possible to investigate subgroup profiles. 3) Social network analysis focuses on one particular capital, social capital, considered in relation to other individuals' capitals. In particular, at least three forms of social capital can be investigated: the composition of an individual's network; its size; and the more or less favourable position of an individual within the structure of the network. I have addressed the issue of working on new methodological developments, which were not underlined by Bourdieu before his death, and also on new research questions originating from other domains or disciplines, such as history, network sociology, life-course sociology, gender studies, postcolonial studies, etc. As stated in the introduction, I argue that when introducing new quantitative methodology into Bourdieu's research programme, we should think beyond Bourdieu's strict work, but still within his theoretical framework, which is relevant to the study of domination and power relations among individuals. I focused in this article on quantitative methods, but the exact same reflection might hold for qualitative analysis as well: ethnography, participant observation, interviews, analysis of historical documents, content analysis, and so on. Moreover, inter-operability between methods should also work between qualitative and quantitative analyses. This paper only constitutes a partial panorama of the quantitative methods to be used within Bourdieu's framework. Quantitative discourse analysis, to analyse discursive position-takings, often by combining a Bourdieusian with a Foucauldian or a Wittgensteinian perspective, has also proven relevant.
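The sequence analysis summarized in point 2 above rests on a dissimilarity measure between career sequences. A minimal sketch of optimal matching, the edit distance commonly used in life-course research, with an invented state alphabet and uniform costs (real analyses tune insertion/deletion and substitution costs):

```python
# Careers coded year by year; the alphabet is invented for illustration:
# "A" = assistant, "P" = professor, "E" = extra-academic position.
careers = {
    "prof1": "AAAPPPP",
    "prof2": "AAPPPPP",
    "prof3": "AAEEEPP",
}

def om_distance(s, t, indel=1, sub=2):
    """Optimal-matching (edit) distance via dynamic programming."""
    prev = [j * indel for j in range(len(t) + 1)]
    for i, a in enumerate(s, 1):
        cur = [i * indel]
        for j, b in enumerate(t, 1):
            cur.append(min(cur[j - 1] + indel,                     # insertion
                           prev[j] + indel,                        # deletion
                           prev[j - 1] + (0 if a == b else sub)))  # substitution
        prev = cur
    return prev[-1]

# Pairwise distances like this one feed a clustering step
# that produces the career typologies mentioned in the text.
print(om_distance(careers["prof1"], careers["prof2"]))  # 2: one substituted year
```

Timing, order and duration of states all affect the resulting distance, which is what makes the measure suitable for studying the accumulation and conversion of capitals over a career.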
Moreover, one technique that Bourdieu particularly distrusted is regression analysis. Regressions can nevertheless be useful to focus on associations and interactions between variables: e.g. to ascertain the interactions between social capital and a set of different capitals, or the interactions between diverse components of scientific and international resources. Further methodological developments remain to be made with, rather than beyond, Bourdieu. --- Declaration of Conflicting Interests The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. --- Funding The author received no financial support for the research, authorship, and/or publication of this article. --- Appendix: Class-specific analyses with women, men, Swiss and non-Swiss citizens as illustrative variables: graphs by axis
This article focuses on the importance of quantifying Bourdieu's "research programme", linked with the concepts of field, habitus, and capital. It presents possible ways of doing statistics within this framework and argues that continuous methodological development should be pursued. To support this argument, the paper highlights the methodology and empirical results of a doctoral dissertation on the Swiss field of economic sciences. It stresses the relevance of using a prosopographical strategy and advocates further development of multiple correspondence analysis, and the use of sequence analysis and social network analysis. The main contributions of these methods concern the investigation of subgroup profiles in fields, the trajectories of accumulation and conversion of capitals and the structure of social capital. When asking whether or not we should think with or beyond Bourdieu when suggesting new methodological developments to his programme, this article argues that we ought to think beyond his strict written work, but still within his theoretical framework, which proves particularly relevant to the study of power relations among individuals.
Introduction Women have entered academic medicine in significant numbers for almost 4 decades. The Association of American Medical Colleges formed the Group on Women in Medicine and Science as an official group in August 2009, providing further recognition of the importance of women's academic capital to medical academe. Nonetheless, women have not achieved senior leadership in rank or position compared with men, and there continues to be a gender disparity in pay (controlling for specialty, seniority, hours of work per week, publications, and grants) that has not improved since 1995. 1,2 Women also leave academic medicine at a higher rate than men do and bear a greater responsibility for child care and family responsibilities. 3,4 There is a need to understand the multiple factors associated with this lack of advancement of women and to investigate the environment in which they work. One aspect of the institutional environment, referred to as the academic climate, is defined as the formal and informal institutional attitudes and programs to promote gender equity in the workplace. Although a recent survey of US and Canadian medical school deans suggested that the culture for women had improved, 5 other studies have found that the climate in academic medicine fails to support women. 6,7,8 We sought to explore the opinions of individuals who have a leadership role to address the climate for women, including institutional members of the AAMC GWIMS. We conducted qualitative interviews to explore the gender climate for women in academic medicine as perceived by members of GWIMS and the Group on Diversity and Inclusion (GDI) of the AAMC, senior leaders whose longevity at their medical schools gives them a unique perspective over time. --- Materials and Methods Our qualitative study is part of a larger longitudinal follow-up survey of faculty at the 24 US medical schools that were part of the 1995 National Faculty Survey.
9 The parent study randomly selected 24 medical schools in the continental United States, balanced by AAMC geographic region and private/public status. For our study, we sought to gain an understanding of the current gender climate across these institutions, utilizing the random-selection process to collect data on a representative group of medical schools. We conducted qualitative key-informant interviews to explore the gender climate for women in academic medicine, identifying individuals with first-hand knowledge of the academic community. We chose GWIMS and GDI representatives as key informants because of their knowledge of issues affecting women and minorities, and used in-depth, semistructured individual interviews as our data-gathering method for a qualitative assessment of gender climate at these institutions. We sought GWIMS and GDI representatives or those who had served in a GWIMS or GDI role. If GWIMS or GDI representatives had fewer than 10 years of seniority within the institution of focus, we used referral sampling to request the names of other senior faculty with a significant institutional memory of the school and conducted an additional interview. We obtained informed consent prior to each key-informant interview, which was audiotaped and transcribed. Interviews averaged 50 minutes and were conducted by four trained interviewers not known to the interviewees. We awarded faculty a modest monetary incentive for participation. The semistructured interview, developed from a literature review and from our prior research, included a number of questions about the perceived gender climate: ''What is the climate of your institution for female faculty?'' Interviewers probed for the positioning of women in the institution and perceptions of gender-equitable satisfaction with position, compensation, and opportunities for advancement and promotion.
''Has there been any assessment of faculty climate in terms of gender equity?'' ''How has the climate changed in the span since 1995?'' We analyzed interview data in two phases. The research team collaboratively developed an a priori coding scheme based on content areas covered in the structured-interview guide. Four members of the research team read and coded the transcripts, using HyperRESEARCH version 3.0. We assigned two primary research team coders to each transcript and reached intercoder agreement, using a standard approach described by Carey. 10 We coded relevant content areas inductively to identify themes that emerged from the interviews. The final themes detailed in this article describe the gender climate. Quotes are identified by the respondent's rank, number of years at the institution, group, and gender. This study was approved by the institutional review boards of Boston University School of Medicine and Tufts Health Sciences Campus. The Tufts IRB reviewed on behalf of Massachusetts General Hospital through the Master Common Reciprocal Agreement. --- Results The final sample comprised 44 individuals representing 23 schools, as 1 institution declined participation. We interviewed 22 GWIMS representatives, 20 GDI representatives, and 2 senior faculty, who were identified and approached for participation by referral sampling. GWIMS representatives were all female, with 18 professors and 4 associate professors. The mean age of GWIMS participants was 58 years; on average, each had been at her institution for 19 years. Eighteen of the GWIMS informants identified as Caucasian, 2 as Asian, and 2 as African American. Half the GDI informants were men and half women, with 13 professors, 6 associate professors, and 1 assistant professor. The mean age of the GDI representatives was 55 years; on average, they had been at their institutions for 18 years. Four self-identified as Caucasian, 2 as Asian, 10 as African American, and 4 as Hispanic.
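Intercoder agreement of the kind reached in the Methods above is often quantified; a common chance-corrected statistic is Cohen's kappa. This is an illustrative sketch only (the study itself follows Carey's approach, not necessarily kappa), with invented code labels for eight transcript segments:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders on the same segments."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    f1, f2 = Counter(coder1), Counter(coder2)
    expected = sum(f1[c] * f2[c] for c in f1) / n ** 2   # agreement by chance
    return (observed - expected) / (1 - expected)

# Invented codes assigned by two coders to the same eight segments.
c1 = ["climate", "parity", "climate", "pay", "parity", "climate", "pay", "family"]
c2 = ["climate", "parity", "parity", "pay", "parity", "climate", "pay", "family"]
print(round(cohens_kappa(c1, c2), 2))  # 0.83: strong agreement
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate coding no better than chance, which would call for revising the coding scheme before theme extraction.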
These faculty members were in senior leadership positions. We identified five themes from the qualitative responses on gender climate: a wide spectrum in the perception of the current gender climate; continued lack of parity in rank and leadership by gender; continued lack of retention, the ''leaky pipeline''; continued lack of equity in compensation by gender; and the disproportionate burden of family responsibilities and work-life balance on career progression for women. Participant quotes for each theme are displayed in the tables. --- Perception of the current gender climate Climate descriptions across the 23 institutions covered a broad spectrum, with no consensus on the overall progress for women. There were several distinct descriptions of the institutional climate: four respondents described the climate as ''an old-boys' club''; others, as one with more subtle gender issues; several, as a neutral climate or as improving because of the higher numbers of women. Other respondents regarded gender-equity issues as resolved; a number described a lack of resources to accomplish change; two described truly significant progress; and a number reported inconsistency across departments and specialties. Respondents described the ''old-boys' club mentality'' climate as one of errors of omission and a lack of awareness of the mistakes that are made, reporting that there was little recognition of women's accomplishments and a persistence of unintentional gender bias. Other key informants described more nuanced gender biases but still saw a lack of programs to specifically address issues of women in academic medicine. Several of the respondents expressed a sense that the institutional climate was ''neutral'' for women faculty, that gender was not an issue and required no special attention. For some key informants, there was a sense that gender-equity issues were resolved and that the institution was moving on to other issues.
Some no longer saw the need to monitor the gender environment, with one example of not including gender questions on a faculty climate survey. In contrast, other key informants recognized the need for more to be accomplished and reported the lack of resources for any meaningful interventions to be made in the current economic climate of academic medicine. Two respondents described a positive climate for women, with an ever-increasing number of women in leadership positions and significant progress. Another concept that emerged from the transcripts was the belief that the greater numbers of women would inevitably bring gender equity. Only male informants reported this perspective, although some did describe the difficulties in upward mobility for women. Informants suggested several factors perceived to impact the gender climate at each institution. One was the department or division. Climate was perceived to vary greatly across specialties, as there was often no institutional oversight, and chairs were often not held accountable. The lack of programs for women, the stressful economic climate, the lack of resources in academic medicine with the decrease in funding from the National Institutes of Health and other federal programs, and the treatment of the number of women in senior positions as a proxy for a positive climate for all women were reported as factors that impeded progress for women in academic careers. --- Parity in rank and leadership by gender In describing the gender climate, some respondents noted that women were beginning to obtain more leadership roles. The benchmarks for success focused on achieving the rank of full professor and becoming department chairs and deans. Key informants almost uniformly recognized that more progress was needed and that this progress was increasingly challenging, as the issues to be dealt with were more subtle and difficult to resolve.
Several respondents described a climate that differed significantly for women depending on their academic rank: in the lower ranks it was welcoming; in the senior ranks, isolating. Respondents noted increasing difficulty in achieving senior positions and described a slow pace of improvement. Most of the informants reported continuing gender issues with rank inequity. This factor was also seen as impacting the ability to recruit women to the institution, as issues with promotion are a deterrent to attracting women faculty. Respondents noted that women were more frequently found in the nontenure clinical tracks rather than in the tenured research tracks with higher perceived prestige. Faculty in the educator and clinical tracks were not seen to be valued as highly and did not bring the same visibility to the institution. Respondents also reported having fewer women doctors of medicine in research tracks. Despite these concerns regarding promotion and leadership, key informants did note improvement in promotions for women. One institution described a way of educating women faculty about ways to achieve more rapid promotion and the success it had with this endeavor. Even with these advances, many of the informants felt strongly about the lack of women in leadership positions, that this was key to achieving parity for women in academic medicine, and that there are enormous systemic and cultural barriers to achieving these ends. --- Retention and the ''leaky pipeline'' Respondents described both difficulties and progress for women in retention. Some described a ''leaky pipeline,'' with women leaving at the level of assistant professor. Informants noted national trends, with women not being successful in going from NIH Career Development Awards to independent Investigator Resources grants.
Several informants acknowledged that their institutions did not have adequate data collection to assess gender issues in retention, noting that without tracking the careers of women, they could not develop appropriate interventions to improve promotion and retention. Two key informants were quite positive regarding the progress of their institutions, with women gaining gender equity in rank if they accomplished the necessary criteria for promotion. These respondents also described the difficulty in finding women at higher ranks, noting that ''middle management'', those at the associate professor rank, were a forgotten group at risk of not remaining in academic medicine or achieving senior leadership positions. Respondents described efforts to improve promotion and retention by having women from each pathway represented on the promotions committee. --- Gender equity in compensation The key informants described salary equity as an important aspect of the gender climate, as well as the secrecy and lack of transparency around salary issues. (Table excerpt: ''I was chairing a gender equity survey... There was an across-the-board increase for women, then significant increases for some specific women who were way out of whack.'' Professor, 20 years, F, GWIMS. Differences at hire: ''There is no posting of salaries but I can tell from personal experience that women... I would estimate that salaries are probably 30% lower than men being hired for comparable positions... There is no strong voice advocating [for] the salaries of women.'' Professor, 30 years, F, GWIMS.) The atmosphere this created was one of uncertainty: informants perceived that women did not believe that there was gender equity in compensation or in the distribution of other academic resources. One key informant described a traditional gender-negative view: a male being the support of a family as a factor in compensation decisions, implying that women do not have the same need or merit for salary as men.
Respondents also described differences by department in compensation, often with no institutional oversight. Exemplary quotes are provided in the tables. A few key informants described efforts to ameliorate the gender pay discrepancy at their institutions, with many women needing an upward adjustment of salary. Pay discrepancies were described even at hiring, contrary to the belief that the gender disparity exists only for more senior faculty in their positions for some years. The respondents perceived salary equity as an important aspect of the value placed on women in academic medicine. --- Family responsibilities and work-life balance Respondents indicated that women still contended with issues around bearing and rearing children, the timing in their careers to have families, and the impact on their careers, especially in early academic appointments. Key informants reported that women were seen as the main family caregiver in certain departments and by some chairs. Respondents perceived that face time in academic medicine was still the hallmark of dedication to an academic career, rather than the results of the work completed or the quality of the medical care provided. The respondents described bias against women who might want part-time work or work-life balance and the difficulty for women of attending to both family and work responsibilities. (Table excerpt, childbearing: ''There's still some lingering issues around women faculty and the reproductive issues... if they decide to have children, this... occurs right in the middle of... the first few years of their appointments. Then you have to make adjustments... taking care of children is always a bit of a negative, not a negative, but it works the hell out of you... But we try to be sensitive to those things.'') One respondent did not see any changes in her institution from the early 1990s in the process for negotiating maternity leave.
Several respondents described the need for greater flexibility in changing academic tracks at various points of family life. One key informant reported institutional initiatives to better meet the needs of faculty with families, such as providing more child care, including emergency child care. --- Discussion We describe the observations of 44 senior faculty members on the gender climate at a representative sample of nearly 20% of American medical schools, including both public and private institutions in all geographic regions. We purposively sampled key informants who had an institutional role in addressing the issues for women and underrepresented minority faculty and who had longevity at their institution, in order to explore the current climate and changes at the institution in the prior 15 years. Our key informants reported advances in the gender climate while noting a continued need for improvement. Five themes emerged: the broad spectrum of the overall gender climate, lack of parity in leadership, challenges in retention, lack of parity in compensation, and a disproportionate burden of child-care issues. Many key informants noted that women have entered leadership positions, such as department chairs and deans, which appear to be the benchmarks for success. Although some informants noted the gains that have been made, other informants described these gains as modest and not reaching gender parity in senior rank and position. They described examples of variations by department, with no institutional oversight and a relative lack of women in senior positions. The range of issues affecting women, which influenced their advancement and rank, was described as broad, including retention, equitable compensation, family responsibilities, and work-life balance. Recent literature mirrors many of our results.
One study found that work and family-life factors served as obstacles to satisfaction and retention of women faculty, reflecting subtle gender bias at the intersection of work and family life. 3 Other studies have found that medical schools fail to create and/or sustain an accepting environment for women. 7 Individual disciplines have also documented issues with the gender environment. 11,12 Some studies suggest that academic medicine fails to provide support for both men and women faculty and that the current structures and economic environment are resulting in retention and promotion issues for all faculty. 13 However, research has shown that the hierarchical structure of academic medicine affects women more negatively than men, 14 as women traditionally thrive in a more egalitarian environment. In addition, the person at the top of the hierarchy is more frequently male, which can also affect women more negatively than men. 13 These factors can contribute to a negative gender climate with a significant impact on women faculty's work experience and retention in academic careers. 15 Many of the key respondents perceived slow progress, suggesting that efforts to improve the gender climate have stalled. There was also a perception among informants of greater attrition of female than male faculty, especially at the assistant professor level, but several acknowledged a lack of data tracking retention. Data from the AAMC reveal that over a 10-year period, 44% of women left academic medicine compared to 38% of men. 16 A national cohort study of US graduates between 1998 and 2004 found that women were more likely than men to have held faculty appointments in academic medicine, but the numbers decreased between 1998 and 2004 for both genders. 17 Many of our key informants described more women faculty compared to men in clinical rather than research tracks.
Although many institutions have created specific promotion criteria for educational and research faculty, a perception persists that greater value is placed on research faculty. A number of respondents in our study perceived systematic gender inequity in academic medical salaries. The atmosphere of secrecy and lack of transparency they describe is concerning: women tend not to believe that there is gender equity in compensation or in other academic resources. Our previous work from 15 years ago documented an $11,691 difference after adjustment for rank, specialty, hours of work per week, productivity, and institution. 2 More recent work continues to find these gaps across such specialties as ophthalmology, 17 emergency medicine, 18 and life sciences. 19 Recent work indicates that even at the junior investigator level, male faculty make on average $13,399 more than female faculty, after adjustment for specialty, rank, leadership position, publications, and amount of time in research. 1 This finding is concerning because most respondents believed that gender inequity is a legacy confined to more senior faculty; it implies that salary discrepancies are ongoing. Women also tend not to investigate or ask for higher salaries. 20 Despite 20 years of data, equity in compensation has yet to be achieved. The intersection of work and family balance for many younger women faculty is a major contributing factor to a negative gender climate. It has been shown that women residents delay childbearing, and women believed more strongly than men that pregnancy could threaten their careers. 21 Buddeberg-Fischer found that any negative impact on career path and advancement was exacerbated by parenthood, especially for women, and that socially rooted gender stereotypes were concerning. 22 The comments of key informants point to continuing issues of work-life balance and often a lack of knowledge of institutional policies. 23 A study by Levine et al. 
looked at reasons why women leave academic medicine and found a disconnect between their own priorities and those of the dominant culture of academic medicine, which they perceived to be male-focused. They reported a lack of role models for combining career and family, frustrations with funding and work-life balance, and a noncollaborative institutional environment. 24 Mentoring women in multiple-role management and planning was suggested as a means to increase retention and advancement of women in academic medicine. Addressing this necessary skill set earlier, in training and in initial faculty roles, could be an important factor in the success of women faculty. 25 Our data are not new in content, but that persistence is all the more reason they are important. The descriptions of GWIMS and GDI representatives come from faculty who deal with these issues on a daily basis and have years of experience at their institutions. From our findings, we are concerned that there is complacency around the issues of women in academic medicine and a perception that gender issues have been addressed and no longer require attention. There is a continuing need to revisit the progress that has been made for women in academic medicine in order to retain and improve upon the current gains in gender equity. Our study has limitations. Although we have data from 23 medical schools, the data do not describe the climate for all women faculty. The opinions of the GWIMS and GDI institutional representatives and senior leadership may not reflect the breadth or consensus of the entire faculty, especially more junior faculty members' experience. We explored the content of the interviews, but we cannot estimate the prevalence of this content. However, our data reflect interviews at a representative sample of nearly 20% of all medical schools, representing all four AAMC geographic regions and balanced for private/public status, and we found the themes to be consistent and highly congruent. 
Our study also has significant strengths. The qualitative methods allow for a thoughtful description of the wide spectrum of the gender climate, and some of the descriptions reveal little progress. Our study is based on interviews from senior faculty with significant longevity at their institutions, providing them with a unique vantage point on the gender climate and its evolution over time. The recent literature supports many of the themes that are derived from our research: lack of equity in compensation 1 and continued issues at the juncture of family and work life. 6 --- Conclusions GWIMS and GDI representatives describe improvements for women in academic medicine as modest. Our study indicates that there has been some progress to improve the work climate for women in academic medicine, but it has been slow and has not yet resulted in equity. Neither data on gender inequities nor greater numbers of women in academic medicine have substantially changed the climate. Although there are examples from several academic medical centers of meaningful interventions to successfully address stereotype bias and organizational culture, 25,26 these have not been disseminated to influence change at other institutions. The needed change can occur only with strong leadership, making this a priority and putting sufficient resources in place to make it happen. Better mechanisms to track the careers of women in academic medicine and the reasons why they leave academic medicine would be extremely valuable. There needs to be greater institutional oversight of advancement, compensation, and the overall gender climate for women. Senior leaders at the AAMC and the Liaison Committee on Medical Education should emphasize the importance of these issues and enforce this as an integral part of medical school accreditation. Improving the climate in academic medicine for women improves medical academe for all faculty members. 
--- Author Disclosure Statement No competing financial interests exist.
Background: Women have entered academic medicine in significant numbers for 4 decades and now comprise 20% of full-time faculty. Despite this, women have not reached senior positions in parity with men. We sought to explore the gender climate in academic medicine as perceived by representatives to the Association of American Medical Colleges (AAMC) Group on Women in Medicine and Science (GWIMS) and Group on Diversity and Inclusion (GDI). Methods: We conducted a qualitative analysis of semistructured telephone interviews with GWIMS and GDI representatives and other senior leaders at 24 randomly selected medical schools of the 1995 National Faculty Study. All were in the continental United States, balanced for public/private status and AAMC geographic region. Interviews were audiotaped, transcribed, and organized into content areas before an inductive thematic analysis was conducted. Themes that were expressed by multiple informants were studied for patterns of association. Results: Five themes were identified: (1) a perceived wide spectrum in gender climate; (2) lack of parity in rank and leadership by gender; (3) lack of retention of women in academic medicine (the ''leaky pipeline''); (4) lack of gender equity in compensation; and (5) a disproportionate burden of family responsibilities and work-life balance on women's career progression. Conclusions: Key informants described improvements in the climate of academic medicine for women as modest. Medical schools were noted to vary by department in the gender experience of women, often with no institutional oversight. Our findings speak to the need for systematic review by medical schools and by accrediting organizations to achieve gender equity in academic medicine.
Introduction Cervical cancer is the fourth most common cancer among women worldwide and one of the most successfully treatable if detected early. Most cervical cancer cases are caused by the sexually transmitted infection Human Papilloma Virus (HPV) types 16 and/or 18. Currently, major health inequalities surround the utilisation of cervical cancer screening services globally [1]. It is widely recognised that low- and middle-income countries (LMICs) have poor cervical cancer screening rates [2]. Recent estimates suggest that 84% of women aged 30-49 years living in high-income countries have been screened for cervical cancer at some point in their lifetime, compared with 48% in upper-middle-income and 9% in lower-middle-income countries [3]. In recent years the World Health Organisation has adapted its screening recommendations to include HPV DNA testing in women aged 30 and over, with regular screening every 5 to 10 years [4]. However, a recent study showed that only 48 countries, the majority of them high- or upper-middle-income, have adopted or are planning to adopt HPV-based screening [3]. The primary cervical cancer screening test in LMICs is the opportunistic 'Pap smear test', a cytological examination of cells collected from the cervix to identify both precancerous changes caused by HPV and early-stage cancer [4,5]. Relying on opportunistic methods that are often unreliable, LMICs have poorer coverage due to weaker health infrastructure and lower screening participation. To make matters worse, only 49% of low- and lower-middle-income countries have official recommendations to screen for cervical malignancy. Without these early detection methods and recommendations, the World Health Organisation's 2030 cervical cancer elimination strategy, which requires 70% of women to be screened by age 35, will be impossible [6]. Poor screening is also apparent in the low- and middle-income country of Jordan, where females comprise 49.5% of the predominantly young 10.2 million population. 
Of the female population, 62% are aged 15-65 years [7,8]. Jordan's crude incidence rate of cervical cancer is 2.3 per 100,000 women [9], with an estimated 277 new cases in 2020 [10]; however, this incidence rate may be unreliable due to incomplete registration. Furthermore, the lack of awareness of and access amongst Jordanian women to the costly HPV vaccine, an intervention to avert the development of cervical cancer [11,12], may be associated with higher death rates. Unlike several high-income countries, Jordan has no structured national screening programme; instead, screening is provided free of charge in public practice to those aged 25-35, on patient demand or opportunistically at appointments. This service does not actively issue invitations and lacks the quality assurance that would allow monitoring and evaluation of impact [13]. Conflicting Jordanian-sourced evidence suggests screening from the age of 21 or from the initiation of sexual intercourse [14,15]; however, a cancer centre in the capital city, Amman, recommends screening from three years after first sexual intercourse [16]. The lack of recommendations and incoherent practices result in low cervical cancer screening rates. Other barriers to accessing screening in Jordan include a lack of encouragement from healthcare providers, a preference for female healthcare staff, and limited health education and promotion [7]. Furthermore, perceived barriers may include embarrassment, fear, or pain [17]. One emerging factor to consider is the association with intimate partner violence (IPV), defined by the WHO as 'any behaviour that causes physical, psychological or sexual harm to those in the relationship' [18]. IPV is a global health concern, notably in Jordan, where it has high prevalence and acceptance within a traditionally patriarchal society in which men, and women living with extended family or in rural areas, are more accepting of this behaviour [19]. 
Aggravation of stress and depressive symptoms brought on by this form of violence may drive lifestyle changes such as smoking [20], an identified risk factor for the development of cervical cancer cells. IPV is also linked to high-risk sexual behaviours, including non-condom use and exposure to sexually transmitted diseases, which can also play a role in the aetiology of cervical cancer [21][22][23]. Furthermore, studies show that victims of IPV are diagnosed with cervical cancer at a younger age than women who have not been subjected to IPV [24]. Alongside being linked to adverse health outcomes including cervical cancer, IPV may also be associated with poor use of health services, including cancer screening. Women may face the physical barrier of being unable to access care because their partners control many aspects of their life and wellbeing. IPV-related barriers to accessing care also include fear of flashbacks, pain, mistrust, or embarrassment associated with male healthcare providers [25,26]. Most IPV research focuses on the health impact of physical and/or sexual violence, although recent research increasingly addresses emotional or psychological abuse [27,28]. In Jordan, IPV has been shown to interfere with decisions surrounding modern contraceptive use and termination of pregnancy [29], but no study has examined the association of IPV with cervical cancer screening. Global literature suggests that IPV is associated with severe short- and long-term psychological and physical health consequences, significantly impacting a woman's health outcomes and healthcare access. Hence this project aims to address gaps in research to better understand whether IPV is a predictor of cervical cancer screening in Jordan. 
This is the first study of its kind to use this nationwide data in this context, with the ultimate aim of providing recent data that can assist policymakers in addressing historically low cervical cancer screening rates. --- Material and methods --- Data source and sampling This secondary data analysis used Jordan's nationally representative and anonymised cross-sectional 2017-2018 Demographic Health Survey (DHS), accessed from the official DHS website [14]. A two-stage stratified sample was selected from the 2015 census, with each governorate separated into urban and rural areas and 26 sampling strata constructed in total. At the first stage, 970 clusters were selected, with households listed in all selected clusters. At the second stage, 20 households per cluster were selected: all ever-married women aged 15-49 identified as residents of the selected households, or as visitors who had stayed there the night before the survey, were eligible. The questionnaire was translated into Arabic from the original English design, with only one eligible woman selected per household. Of the 14,870 eligible women, 91.7% participated. All answered cervical screening awareness and uptake questions; however, only 6852 completed the IPV module, as this required complete privacy. Of these, we selected only the 6679 women aged 20 and above to capture Jordan's suggested screening age. --- IPV measure IPV included physical, emotional, and sexual spousal violence. Categories were recoded, combining questions detailed in the DHS survey determining exposure to violence within 12 months of the interview [14]. Physical violence was constructed into a binary variable after recoding answers to separate questions of the form 'Did your husband ever do this to you?', with options such as 'push you' or 'shake you'. Answers were recoded so that 'no' included the original values 'never' and 'yes, but not in last 12m', and the recoded 'yes' included 'often', 'sometimes', and 'yes, in last 12m'. 
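The binary recoding described above can be sketched in a few lines. This is an illustrative reconstruction only: the response labels are paraphrased from the text, and the actual DHS variable codes differ.

```python
# Illustrative sketch of the binary "violence in the last 12 months"
# recoding described in the methods. Labels are paraphrased, not the
# actual DHS codes.
RECENT_YES = {"often", "sometimes", "yes, in last 12m"}
RECENT_NO = {"never", "yes, but not in last 12m"}

def recode_ipv(answer: str) -> int:
    """Map a DHS-style frequency response to 1 (recent violence) or 0."""
    answer = answer.strip().lower()
    if answer in RECENT_YES:
        return 1
    if answer in RECENT_NO:
        return 0
    raise ValueError(f"unexpected response: {answer!r}")

def any_violence(answers) -> int:
    """A respondent is coded 1 if ANY item in the domain is positive."""
    return int(any(recode_ipv(a) for a in answers))
```

A respondent answering 'sometimes' to even one item (e.g. 'push you') would thus be coded as exposed for that whole domain, mirroring the combining of questions described above.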
The same process applied to emotional violence, determined by answers to questions such as 'say or do something to humiliate you in front of others?'. Sexual violence was based on 'has your husband ever physically forced you into sexual intercourse?', with answers recoded in the same format, ensuring the violence was captured within 12 months of the interview. A further sensitivity analysis considered 'ever' exposure to intimate partner violence, to include IPV that may have occurred at any time outside the 12-month window, increasing the robustness of results. --- Ethical consideration Data underlying the results were requested and obtained directly from the DHS website repository via a 300-word application explaining the project aims. As the data were anonymised, ethical approval was unnecessary for their use; however, the original Jordan DHS survey protocol, including biomarker collection, was approved by the Department of Statistics and was reviewed and approved by the ICF Institutional Review Board (IRB). The ICF IRB has strict guidelines to ensure that the original survey obtained informed consent, with confidentiality and privacy strictly adhered to. Interviews and biomarker collection are carried out only when participants orally approve the informed consent statement narrated by the data collection team. --- Outcome Awareness of the Pap smear was determined by whether a woman had 'heard' or 'knew' about the Pap test. This outcome variable, 'Heard of Pap smear?', was binary. Only women answering yes were asked the further question: 'Ever had a Pap smear?' Variables included in previous research, such as perceived benefits of screening, were excluded from Jordan's DHS questionnaire [15]. 
--- Covariates Independent variables were chosen based on the existing literature on cervical screening and included age, residence, marital status, governorate, ethnicity, highest education level, wealth quintile, health insurance coverage, and primary healthcare decision-maker. Governorates were recoded into three regions: 1 = Northern, 2 = Central, and 3 = Southern. Ethnicity was recoded from Jordanian, Syrian, Egyptian, Iraqi, Arab, and non-Arab into three categories. --- Statistical analysis Statistical analysis was completed using Stata version 16, with significance set at p<0.05. Categorical independent variables were analysed using chi-squared tests of independence for association. Only women demonstrating awareness of the Pap smear were asked the follow-up screening question, on the assumption that those who did not know about Pap tests would not have undergone screening. This assumption could create bias due to systematic differences between women with and without awareness of the Pap smear test. Hence, Heckman's two-stage probit model was applied to adjust for this selection bias [30]. In Heckman's two-stage model, awareness of the Pap smear test was the outcome variable for the selection stage, and having ever had a Pap smear test was the outcome variable for the outcome stage. By selecting this two-stage model, we were able to address the potential sample-selection bias that might affect our analytical model. The Heckman two-stage probit selection model estimated the probability of Pap smear awareness, controlling for all three domains of IPV, age, place of residence, region, ethnicity, education, wealth, presence of health insurance, and women's autonomy in making decisions about their health care. The outcome model estimated the probability of undergoing a Pap smear test using the same variables, except for the health insurance variable, which was retained only in the selection model. 
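The two-step logic behind a Heckman selection correction can be sketched numerically. This is a schematic illustration with synthetic linear-predictor values, not the full maximum-likelihood probit-with-selection model the study estimated in Stata; the key ingredient is the inverse Mills ratio computed from the selection stage.

```python
# Schematic of Heckman's two-step correction.
# Step 1: a probit of selection (awareness) yields linear predictors z.
# Step 2: the inverse Mills ratio lambda(z) = phi(z) / Phi(z) enters the
# outcome equation (ever screened) as an extra regressor; a significant
# coefficient on it signals that ignoring selection would bias estimates.
import numpy as np
from scipy.stats import norm

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z), the selection-bias correction term."""
    return norm.pdf(z) / norm.cdf(z)

# Synthetic linear predictors from a hypothetical selection probit:
z = np.array([-1.0, 0.0, 1.0, 2.0])
lam = inverse_mills_ratio(z)

# The correction is positive and shrinks as selection becomes more likely:
assert np.all(lam > 0)
assert np.all(np.diff(lam) < 0)
```

In the study's setting, the exclusion restriction (health insurance appearing only in the selection equation) is what identifies the correction term separately from the outcome covariates.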
The decision to include the health insurance variable in the selection equation rather than the outcome equation was driven by the unadjusted logistic regression analysis and by the recommendation that the selection model should have at least one additional explanatory variable compared to the outcome model [31]. As both the awareness and screening variables are dichotomous, we employed probit models in both the selection and outcome stages. Multicollinearity between IPV variables, and between covariates, was assessed using the variance inflation factor (VIF); no problems were encountered, as no value exceeded 1.5. The final stage involved weighting the data for domestic violence, with proportions updated accordingly and original frequencies left unadjusted. --- Results Population characteristics are displayed in Table 1; 65% had awareness of the Pap smear test and 15.8% had ever been screened. Screening and awareness by region in Jordan are displayed in Figs 2 and 3. The population comprised 86.5% Jordanian, 8.9% Syrian, and 4.7% other nationalities. 58.5% were covered by health insurance, with 18.7% in the poorest wealth index compared with 17.5% in the richest. A small proportion had no education, while 53.7% had at least a secondary level of education. 23.6% of women made healthcare decisions independently, 68.6% involved a husband, and 7.7% of decisions were made entirely by someone else. Table 2 shows cervical screening awareness and uptake among respondents by their characteristics. All socio-economic variables were significantly associated with awareness of Pap smear tests, except residence and marital status, where p>0.05. Furthermore, all variables other than marital status and health insurance were significantly associated with screening utilisation. The largest proportions of women with awareness and with a history of a Pap smear test were those aged 40-44 and 45-49, respectively. 
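The VIF screen reported in the methods follows VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing covariate j on the remaining covariates. A minimal sketch, using synthetic data rather than the study's covariates:

```python
# Sketch of the variance inflation factor (VIF) check: values near 1
# indicate little multicollinearity (the study reports no VIF above 1.5).
import numpy as np

def vif(X):
    """Return the VIF for each column of design matrix X (no intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        # Regress column j on an intercept plus the other columns.
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # three nearly independent covariates
assert np.all(vif(X) < 1.5)     # mirrors the study's reported threshold
```

Highly collinear columns (e.g. one covariate that is almost a copy of another) would instead produce VIF values far above this threshold, flagging the need to drop or combine variables.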
Ethnicity played an important role: 16.7% of Jordanians were tested, compared with 9.1% of Syrians. The highest uptake of Pap smear tests was among women with a secondary level of education, compared with 5.7% of women with no education. The wealthiest index had higher smear test levels, compared to 10.3% in the poorest index. In the group making independent healthcare decisions, 70.0% had test awareness and 16.5% had test experience. Healthcare decisions made solely by someone else resulted in lower levels of awareness and test experience. Of the women with the privacy to answer the IPV module, 180 were found to be victims of sexual violence, 691 of physical violence, and 935 of emotional violence. A further question asked women to rate how afraid they were of their husband. This uncovered that 8.9% of women were afraid most of the time, a concerning 51.1% were afraid sometimes, and 40% were never afraid. Furthermore, a small section of women answered questions on help-seeking: only 19.6% reported seeking help from someone about their situation, with the remaining 80.4% not seeking any help at all. Table 3 demonstrates that a statistically significant association was found between sexual violence and awareness of cervical cancer screening; however, this did not influence whether women had ever been tested. Although emotional violence was not statistically significant in influencing awareness, a significant association was shown with the utilisation of screening (p = 0.016). The primary focus of this study was to determine whether IPV was a predictor of cervical cancer screening. Our Heckman probit selection model results show sexual violence to be a predictor of awareness of cervical screening: women subjected to it were significantly less likely to report awareness of the test compared with their non-abused counterparts. However, this had no impact on access to screening services in the outcome model. 
Emotional violence was not a statistically significant predictor of awareness of the test but had a highly significant, paradoxical association with women's screening status. Our outcome model results indicate that Jordanian women were more likely to have undergone cervical cancer screening if subjected to emotional violence than those who were not emotionally abused, raising various questions. Physical violence was not a predictor of screening in either model. It is important to note that Table 5 included violence within 12 months of the interview; however, the analysis was replicated to include 'ever' exposure to IPV, as this could occur at any time in a woman's life outside the interview timeframe. This analysis showed very similar results to our main model; hence they are not included in the table. Logistic regression analyses carried out independently for the awareness and screening outcomes showed similar results to the Heckman probit model. Finally, a sensitivity analysis was also carried out to determine any differences in awareness and screening results between the original full sample of women and those able to answer questions on IPV. This confirmed no statistically significant difference between the two groups, increasing the robustness of the results. Results of the selection model showed that women were less likely to have awareness of screening if their healthcare decisions were made by someone else rather than independently. Residence was not significant, implying that those living rurally were not disadvantaged compared to urban settings. Health insurance was shown to be a predictor of awareness. Syrian women were less likely to have heard of cervical cancer screening than Jordanian women, as was also the case for 'other nationalities'. 
However, ethnicity was not a predictor of screening, as shown in the outcome model. --- Discussion This paper is the first of its kind to use national-level data in this context to identify associations between cervical cancer screening and IPV in Jordan, a country where low levels of awareness and screening were detected. The results showed that women subjected to sexual violence were less likely to report awareness of the Pap smear test; however, this did not impact screening rates. Furthermore, victims of emotional violence were, paradoxically, more likely to be screened for cervical cancer than non-victims. No association between physical violence and cervical cancer screening was found. Low screening rates are a common finding in Arab countries, widely attributed to the limited resources directed towards the development of comprehensive cervical cancer screening programmes. No Arab country has a call-and-recall invitation system similar to Europe's [32], an approach proven to reduce cervical cancer mortality [33,34]. For example, a study from Iraq found that only 32.4% of women had adequate awareness and 12.6% had been screened [35]. Similarly, a Saudi Arabian study identified that only 33.4% of women had been screened for cervical cancer [36], and a study from Kuwait found only 52% of women to have adequate awareness, with 23.8% screened [37]. These findings reflect the common misconception that screening is culturally unacceptable for Muslim women: as Islam prohibits premarital sexual intercourse, HPV, associated with promiscuity, is not considered a risk factor in this conservative culture [7,[38][39][40][41]. The complex association between sexual violence and lower awareness of cervical screening may be explained by admission of screening knowledge being perceived as a precursor to engagement with health services. 
Despite sexual violence influencing awareness of the Pap smear test, our study did not show any association between exposure to sexual violence and actual cervical cancer screening rates. The rationale behind this relationship is hard to determine due to the limited questions in the DHS survey. This was an unexpected finding, given the evidence that sexual violence can act as a barrier to healthcare access and, subsequently, to adequate screening [42][43][44]. However, similar to our findings, one American study reported that victims of sexual violence under 40 years old did not have statistically different cervical cancer screening rates from the general population; victim status may not play a part, as screening can occur opportunistically during family planning services in this reproductive age group [45]. Furthermore, another study found no difference between sexual and physical violence victims and the general population in receipt of cervical cancer screening [41]. A recent meta-analysis of 36 studies concluded that all three forms of IPV were unrelated to cancer screening practices but, worryingly, were significantly associated with the incidence of abnormal Pap smear test results and, therefore, greater odds of cervical cancer [46]. Studies have also shown that for victims of sexual violence, the Pap smear test is an invasive procedure that can be a re-traumatising experience, uncovering evidence of assault a survivor is trying to hide, with an expectation of pain and associated fear or embarrassment [45,47,48]. Therefore, women may avoid the test despite having a higher risk of cervical cancer due to IPV exposure. In addition, women who undergo sexual violence in Jordan may experience it as teenagers due to their young marital age, further limiting their agency in accessing information on sexual health, including cervical cancer screening [49]. 
Child brides are less likely to access healthcare due to decreased agency and bargaining power [50]. The legal age of marriage in Jordan is 18; however, children aged 15-17 can be married under exceptional circumstances. UNICEF has expressed concern that child marriage is increasing, especially within Syrian refugee communities, with a reported 36% of all Syrian marriages in Jordan involving children [51,52]. In our study, the odds of having awareness of cervical cancer screening and of undergoing screening increased with age from 25 onwards. As well as having lower awareness of screening, younger brides show, across a decade of trends, an increased risk of IPV, which consequently impacts their autonomy [51,52]. Physical violence was not statistically significant in either model. This was inconsistent with a Brazilian study's findings demonstrating an association between physical IPV and inadequate cervical cancer screening [48]. Similarly, a second Brazilian study showed an association between physical and sexual IPV and lower rates of this screening [53]. Emotional violence frequently occurs in Jordan's patriarchal society, where male privilege leads to intimidation and dominance [54,55]. It is often reported that emotional violence can be insidious, resulting in chronic suffering and leaving a woman vulnerable, anxious, and with low self-esteem [51]. In our study, victims of emotional violence were paradoxically more likely to be screened for cervical cancer than participants who were not exposed to emotional abuse. We argue that these results, rather than suggesting that emotional violence is beneficial for screening, indicate limits in the reliability of the questions and of participants' responses. Such counterintuitive findings have also been reported by a few other studies. For instance, one study found that women experiencing emotional abuse had more frequent consultations at a hospital in the north of England. 
In particular, those who underwent emotional abuse had more worries about smear abnormality and cancer than their non-abused counterparts [56]. While the authors did not want to speculate on the reasons for this association, they suggested that the significant association of emotional abuse with higher levels of anxiety could result in physical symptoms. They cited a study [57] suggesting that the majority of women who experienced emotional abuse had physical symptoms including headaches, chronic pain, and vaginal bleeding. A study in Vienna found that women who experienced all three modes of violence reported higher odds of gynaecological symptoms and therefore made more visits to healthcare providers [58]. The authors argued that worry about health mediated the association between violence and gynaecological symptoms. It is likely that in our sample, women experiencing emotional violence were more likely to act on their worries and visit health care for screening. One study found that victims of sexual and physical abuse aged 40 and above were 87% less likely to have had Pap smears compared to those who had been emotionally abused [45]. Unlike that study, we did not compare cervical cancer screening rates of victims of emotional abuse directly with those experiencing sexual and/or physical abuse. However, both studies suggest that women experiencing emotional violence have higher levels of screening than either those who did not experience emotional abuse or those experiencing sexual and/or physical abuse. The authors were, however, unable to explain why victims of emotional abuse had the highest rates of cervical cancer screening. While we also find this association complex, we explored the reasons by carrying out further analysis on the justification of various domains of abuse. 
Our analysis found that 87% of emotional violence survivors said that beating is not justified if the wife argues with her husband. This was higher than among those who experienced physical violence, of whom 84% said it was not justified, and sexual violence survivors, at 82%. This may strengthen the hypothesis that victims of emotional abuse have stronger autonomy in this context, and are therefore able to speak up and present more frequently to healthcare providers. Another plausible explanation relates to how the various modes of IPV are measured and weighted. Researchers have often combined all domains of IPV rather than examining each domain individually [21,59,60], which limits the ability to compare our study with findings measured in a combined format. Psychological violence has often been ignored in LMIC research, which may reflect continuing challenges in measuring this component in countries where research on psychological or emotional abuse is still nascent [27]. Alternatively, women may be willing to disclose psychological abuse while hiding sexual and physical violence because of the stigma associated with them. Moreover, it is hard to understand the association and its pathways without measuring the severity of emotional abuse in relation to its frequency. There is emerging research on 'reproductive coercion' within violent partner relationships, in which men control a woman's reproductive health access and decisions [61,62]. Women may feel coerced into making important family planning decisions, which may lead to an unwanted termination of pregnancy or family size [63]. This is consistent with our finding that when a woman's healthcare decisions were made entirely by the husband/partner/someone else, she was less likely to be aware of cervical cancer screening. 
Ethnicity played a role in awareness of the Pap smear test: Syrian women were less likely to be aware of screening than their Jordanian counterparts. It is important to consider that Jordan currently hosts 670,000 Syrian refugees, 80% of whom live under the poverty line [64]. These findings may reflect the end of free universal refugee health coverage in 2014, alongside a decline in physicians per person resulting from the influx of Syrian refugees [65]. We also found that health insurance coverage and increasing wealth quintile were associated with higher odds of awareness of cervical cancer screening. Syrian refugees therefore disproportionately face barriers to accessing healthcare [66]. It was unclear how many of our sample had refugee status; however, 3.9% used refugee health insurance. It is universally recognised that women living with intimate partner violence are a vulnerable subgroup that must not be overlooked. It should be acknowledged that the situation may be far worse than this study reports, as women without autonomy were unlikely to answer questions that might endanger them; a large subgroup of women is therefore not accounted for. Societal restrictions imposed by the COVID-19 pandemic may have exacerbated this situation further. Previous research has suggested that Jordanian women may fail to disclose IPV in the absence of current injuries and that Jordanian medical facility staff require training in effective IPV screening methods [67]. The complexity of Jordan's health system, with insurance providers ranging from the United Nations Refugee Welfare Association to private insurers [14], makes creating a standard training framework challenging, yet a priority. --- Limitations The sensitive nature of the questions introduces uncertainty about how accurately IPV is represented in the sample. Women may worry about answering honestly, raising wider questions about capturing delicate information in surveys. 
These under-reporting challenges were outside of the study's control. The DHS survey questions were not designed for cervical cancer screening and therefore did not capture a full Pap smear history, recording only whether a woman had 'ever' been screened. However, they allowed a general nation-level estimate of the current situation. --- Implications for future research As previously mentioned, the WHO now recommends HPV testing over the conventional Pap smear test that is standard practice in Jordan and many other non-European countries [68]. Most recently, in 2021, this recommendation was adapted to include self-sampling, an approach that targets women who may not engage with clinician-based interventions and is particularly beneficial in low-resource settings with large populations of unscreened women [69,70]. Women who have faced barriers to screening, such as fear, pain, embarrassment, or avoidance due to previous sexual or physical violence, may benefit from this new method of self-sampling as it puts them in control of the process. For women who are subject to IPV and whose access to healthcare is controlled by their partner, this option may in theory act as a more discreet method of testing; however, it would still require access to health services to collect and return the sample. Our research has highlighted the need to improve public health promotion of cervical cancer screening among the population of Jordan, alongside targeting the women who are vulnerable to under-screening. A suggested approach may begin with appropriate education of healthcare workers in Jordan, as studies have identified discrepancies in awareness and understanding of the cervical cancer screening tests available. One study found that only half of healthcare professionals were considered aware of cervical cancer screening [39]. 
Another Jordanian study found that 20% of ObGyn clinicians did not think HPV was involved in cervical cancer aetiology, and more than half voiced opinions that the Pap smear was not the most cost-effective public health tool for cancer screening [40]. We therefore urge the Jordanian government to identify and address these gaps in awareness and understanding, within both the Jordanian population and the healthcare workforce, surrounding the most appropriate and cost-effective method of screening. --- Conclusion Our study examined the association between IPV and cervical cancer screening, to our knowledge the first to use nationally representative data for Jordan for this purpose. We conclude that while sexual violence is associated with cervical cancer screening awareness, emotional violence is associated with increased rates of screening in Jordan, an important and complex finding warranting further research. Based on this, we recommend developing qualitative methods to capture the full population of women at risk of IPV, with tailored cervical cancer questions to understand the situation's complexity. We also suggest that Jordanian healthcare professionals improve the integration of reproductive health services with IPV screening, ensuring vulnerable women are identified and safeguarded. --- This study used publicly available, secondary data. The datasets can be accessed on request at measuredhs.com. The data are available for download free of charge once registration is complete and the data request is approved. The data were accessed via the official DHS website, with the following reference: Department of Health and Statistics. The 2017-18 Jordan Population and Family Health Survey 2017/18. [internet] Jordan [cited: 18 May 22] Available from: https://dhsprogram.com/pubs/pdf/FR346/FR346.pdf. The paper further sheds light on the paradoxical association between emotional violence and screening. 
It is acknowledged that this situation may be far worse than reported, as women without autonomy were unlikely to answer IPV questions that might endanger them; targeted surveys on cervical cancer screening warrant further investigation. --- Data curation: Grace Urquhart. Formal analysis: Grace Urquhart, Aravinda Meera Guntupalli. Investigation: Grace Urquhart, Aravinda Meera Guntupalli. Methodology: Grace Urquhart, Aravinda Meera Guntupalli. Software: Grace Urquhart, Aravinda Meera Guntupalli. Supervision: Sara J. Maclennan, Aravinda Meera Guntupalli. Validation: Aravinda Meera Guntupalli. Visualization: Grace Urquhart. Writing - original draft: Grace Urquhart, Aravinda Meera Guntupalli. Writing - review & editing: Grace Urquhart, Sara J. Maclennan, Aravinda Meera Guntupalli.
Major health inequalities exist globally in the utilisation of cervical cancer screening services. Jordan, a low- and middle-income country, has poor screening rates (15.8%), with barriers to accessing services including lack of education. Emerging studies demonstrate that intimate partner violence (IPV) impacts reproductive health decisions. As a large proportion of Jordanian women report experiencing IPV, this study examines the association between IPV and cervical cancer screening in Jordan, the first of its kind using national-level data. Using Jordan's Demographic Health Survey 2017-18, cervical cancer screening awareness and self-reported screening were estimated among participants who answered questions on IPV (n = 6679). After applying sample weights, Heckman's two-stage probit model determined the association of awareness and utilisation of cervical cancer screening with experience of IPV, adjusting for socio-economic factors. Of the women with the privacy to answer the IPV module, 180 (3.4%) were found to be victims of sexual violence, 691 (12.6%) of physical violence and 935 (16.2%) of emotional violence. Women subjected to sexual violence were less likely to report awareness of the Pap smear test; however, this did not impact screening rates. Victims of emotional violence were more likely to be screened than non-victims. No association between physical violence and cervical cancer screening was found. A significant association between cervical screening awareness and IPV demonstrates that cancer screening policies must consider IPV among women to improve screening.
Introduction One of the most important and visible social problems today is the aging of the global population. As life expectancy increases, the number of older people with disabilities at risk of chronic illness or injury also inevitably increases, bringing significant societal challenges. Physical dysfunction and chronic illness reduce the quality of life of vulnerable older populations, leading to extensive demand for medical and care services [1]. Improving care conditions for disabled older people has therefore become an important topic in academia. In China, data from the Second China Sample Survey of Disabled Persons showed that 44.16 million people with disabilities were aged over 60, accounting for 53.24% of all disabled people [2]. As population aging intensifies, the scale and growth rate of the disabled older population in China will be significant. The prevalence of disabled people over 60 is predicted to increase by more than 7 million every 5 years, reaching 103 million by 2050, 2.3 times the number in 2006 [3]. More importantly, 70.42% of older persons with disabilities live in rural areas, totaling about 31.1 million in 2006 [2]. These older people carry multiple social identities, as countryfolk, as disabled, and as aged, and the most vulnerable among them face serious economic pressure as well as health care needs [4][5][6][7]. However, their special care needs and barriers to access have been somewhat neglected, as previous studies of the disabled elderly have usually focused on urban subjects, and relevant research on the accessibility barriers faced by the disabled elderly in rural areas is scarce. Most studies concentrate on describing the phenomenon of accessibility barriers: the high but unmet demand for social services among rural older people with disabilities. 
These demands include a customary and safe environment, adequate income for living, support from family members and friends, and access to health care services [8]. However, satisfying these demands is significantly more difficult for the disabled elderly in rural China than for their urban counterparts, particularly in health care services. Research has highlighted poor living conditions and self-care ability, the risk of disease, and the severe aging trend, as well as poor social interaction and poor care services for disabled older people in rural areas [9,10]. Dependence on family care is high, with low rates of institutional welfare support and weak informal support from society [11,12]. The Fourth Sample Survey of Living Conditions of the Elderly in Urban and Rural China found that 89.43% of the rural disabled elderly requiring care were receiving it, but the providers were mostly family members, among whom nearly half were spouses, 28.64% sons, 10.08% daughters-in-law, and 10.35% daughters. Less than 1% of people received professional care from nursing institutions [13]. Family care cannot meet diverse health needs due to limited professional skills, fewer services, and lower service frequency [14][15][16]. In such situations, "muddling along" generally describes the mentality of rural older people with disabilities [17]. In addition, their problems are compounded by the social issues of aging, family miniaturization, and rural hollowing-out, producing families with multiple disadvantaged characteristics, such as families with multiple disabled members, two-generation disabled elderly families, empty-nest families with disabled older people, disabled elderly families that have lost a child, and families with both elderly and disabled people, leading to increasingly complex barriers for older people with disabilities to obtain care services [6,18]. 
A small group of scholars has tried to analyze the causes of accessibility barriers, mainly from three interpretative perspectives. The geographical location approach sees cost and traffic as the major reasons, as health infrastructure and professional care services are concentrated in cities far away from rural areas [10,19]. The social capital approach points to the lack of social support networks in rural areas for sharing and exchanging information on public services [20]. Some scholars have taken a cultural perspective, seeing the conflict between Chinese traditional culture and the rapid expansion of the market as producing negative consequences for family care [21]. It is crucial to study the multi-dimensional accessibility barriers to care services for the rural elderly with disabilities, as these barriers are closely related to improving their quality of life. The existing literature has shown that, in rural China, the elderly with disabilities face severe obstacles to obtaining care services, yet their special needs are significantly neglected in both policy practice and academic research. Additionally, most of the relevant reviews describe the phenomenon in a fragmentary way, without further exploration of the reasons. Moreover, they simply summarize the institutionalized care services provided by the state and the non-institutionalized care services provided by the traditional family, an approach that ignores the multiple providers of services and leads to a narrow understanding of the care supply-demand imbalance. To better understand the multi-dimensional factors that create these barriers, this paper focuses on the service providers, analyzing the issue from the perspective of welfare pluralism. After the welfare crisis of the 1970s, some scholars realized that the sources of social welfare should be diversified. 
Rose first put forward the theory of a welfare mix, identifying that the supply of social welfare comes mainly from three sectors: the state, the market, and households [22]. Johnson added the voluntary sector to Rose's theory, identifying four sectors of welfare provision and thus forming an overall framework [23]. The concept of welfare pluralism has received widespread attention in the field of social policy and has been used as a guideline in the formulation and implementation of old-age service policy and long-term care insurance policy in China [24,25]. To explore the reasons behind accessibility barriers, this paper adopts the welfare pluralist perspective and provides a comprehensive framework to capture the institutional, social, cultural, and other macro factors behind these barriers, avoiding an individual-level analysis of rural older people with disabilities. Using a qualitative method, this research interviewed 13 rural elderly with disabilities in mainland China about their experiences and feelings. By analyzing these stories through the welfare pluralist perspective, we found diverse obstacles arising from the state, market, NGOs, volunteers, households, and the community. --- Materials and Methods --- Design Studies of aging and disability should provide a rich empirical examination of how older people with disabilities experience barriers and impairments. At present, much of the research on the rural elderly with disabilities has been based on quantitative methods; however, a fixed questionnaire can often miss the indelible individual experience of the interviewees, whereas a qualitative method can provide additional insight into this issue. 
To obtain more vivid and richer knowledge, we adopt an inductive approach in this paper, using a qualitative method to explore the subjective care service experiences of the elderly with disabilities in rural China through in-depth interviews. An in-depth interview is a type of unstructured, direct, deep, one-to-one interview, an appropriate way to collect data on the potential motives, experiences, attitudes, and emotions of respondents regarding a certain issue [26]. In this study, we highlighted the personal stories of rural elderly with disabilities, their personal experiences, and their reflections on interactions with diverse service providers. Analyzing behavioral characteristics and welfare experience in specific fields can answer the question of how service providers can cause severe accessibility barriers in care services. The study was conducted in Jinan, the capital of Shandong Province, an eastern coastal city in China. In 2020, there were 154,902 citizens with registered disabilities in Jinan, of whom 74,956 were elderly with disabilities, accounting for 48.4% of the total. Among the elderly with disabilities, 56,088 were registered as rural residents, accounting for 74.83% of the disabled elderly. Because of physical, intellectual, and disease-related obstacles, most faced severe need for care services. --- Data Collection The interviews took place in January and February 2019. The second author conducted all one-on-one in-depth interviews in Mandarin Chinese or in dialect, each lasting from 30 min to 1 h. Before the interviews, we contacted the participants and provided general information about our research, including the purpose, subjects, process, duration of the study, and the research schedule. After obtaining the consent of the interviewees, the interviewers recorded the dialogues through on-site audio-recording and note-taking to collect raw data. 
The notes included the core contents of the interview, the physical and mental state of the interviewee, and behaviors and attitudes observed during the interview. All interviews took place in the participants' homes. The first and second authors were researchers trained in qualitative methods and interview techniques. They designed the semi-structured interview schedule based on the purpose of the research. Interview questions explored participants' experiences of obtaining care services from the state, market, non-governmental organizations (NGOs), volunteers, households, and the community. Table 2 details the content of the interview schedule. --- Data Analysis After the interviews, we transcribed the audio-recordings verbatim in Chinese, together with the interview diary and notes, and selectively translated some material into English as required to present the findings. This research used qualitative thematic analysis to analyze the data [29]. Thematic analysis is a widely implemented method of qualitative research; by applying it, researchers can gain a better and deeper understanding of participants' attitudes, visions, feelings, and reflections on care services from a dataset collected from rural elderly with disabilities, and extract the core themes [30]. The software package NVivo 10.0 was used for analysis, and the raw data, composed of the participants' narratives, were coded over three stages. First, we read the raw data repeatedly to familiarize ourselves with all dimensions of the data, and then extracted meaningful statements to generate initial codes. Second, we moved to a broader level of analysis by collating all codes into themes. Third, we reviewed the themes to test whether they related to the initial codes. To ensure the reliability of the analysis, we returned the initial codes and theme codes to the respondents and invited suggestions. 
--- Ethical Considerations During the in-depth interview process, our research strictly followed procedures of informed consent, non-harm, and confidentiality. Before the interview, we explained the purpose and use of the interview as well as the recording requirements in detail, and all participants signed informed consent forms. Second, owing to the relatively fragile physical and mental condition of rural older people with disabilities, as well as a tendency towards self-deprecation, the interviews were carried out after full discussion with the workers of village committees to protect the interviewees during the interview process. Third, for confidentiality, all identifying information was anonymized during transcription and translation, and any sensitive material from the interviews was processed to remove identifying details. --- Results Using the perspective of welfare pluralism, we divided the factors underlying accessibility barriers to care services into four themes: the limited state, the absent market, absent NGOs and volunteers, and low-quality household and community care. --- Limited State The limited state refers to the severe restriction of the provision of health and care services compared with advanced welfare countries. The main barriers are insufficient resource investment, overly strict eligibility examinations, uneven distribution, and irregular implementation. Support resources from the state and government are seriously insufficient and unsustainable. The main form of support lies in short-term subsidies and donations. Of the social services related to medical health and care, only a serious disease pension is provided, with no more than half of the cost reimbursed, too little to solve the most severe problems of the rural disabled elderly and their families. The government says that work for the disabled includes medical rehabilitation, education, employment, long-term care, social security, and so on. 
But, in fact, children's rehabilitation, education, and employment are still the key work. Old people like me only have two low-level subsidies and a serious disease pension. But I am getting older and older, and we cannot recover. How am I going to live? When selecting welfare candidates eligible for a new care service policy, the method of "doing subtraction" is often adopted to reduce the number of service users. For example, for home-based care the procedures are as follows: in addition to the regulated poverty-line standard, an applicant is excluded if they have a son, a big house, a car, or if they use expensive household appliances of the same type. Such strict exclusions leave only two to four rural disabled old people in a village eligible, while many disabled older people with urgent care needs are ineligible. As participants described: Every year I apply for care assistance, but never succeed, because I have two sons. But what's the use of two sons? They have debts of more than a million, and everything in my house is taken away to pay off the debts. They can hardly live themselves; how can they care for me? [The village committee] says that I have sons, so I can't apply. But my eldest son is also disabled. It is me who takes care of him, not he who takes care of me. Furthermore, there is a huge gap between urban and rural disabled old people in the development of social service support. In urban areas, an elderly care service framework has been established with the "household as the core, community as the dependence, and professional institutions as the supplement" [29]. Old people with disabilities can enjoy day care centers, rehabilitation centers, dining halls for the elderly, door-to-door nursing, and other professional services, whereas in rural areas there is nothing. 
In practical implementation, constrained by the level of economic development and the limited reach of policy, day care centers and rehabilitation centers are often merely a door plate and are virtually useless. Dining halls for the elderly and door-to-door nursing are still under discussion and have not been implemented. The welfare differences between the two areas are widening. We don't have a day care center. The rehabilitation center is in the sub-district office, which is too far to go. I have never gone there. The rural areas are different from the urban. There are too many disabled old people in the countryside, and the government can't take care of all of them. We have to depend on ourselves and our family, not the government. This disparity is further increased by the irregular implementation of street-level bureaucrats. Our investigation found that some street-level officers treat social welfare for the disabled elderly as a way to obtain favors, pursue personal interests, and expand their relationship capital; that is, they give the limited social benefits to their family members, relatives, and friends. In turn, people in real need are unable to get the support they deserve, further squeezing their opportunities to obtain health welfare and care services. I don't believe in the government . . . I have never heard of this kind of care service. If there is such a service, it must have been secretly given to their own relatives. --- Absent Market The term absent market refers to the fact that the market does not provide care services for rural elderly persons with disabilities. Although, with the rapid development of the Chinese market economy, the market plays an increasingly prominent role in socialized care for the disabled older population, this is true in urban areas only. In rural areas, there is no professional provision of care for senior citizens. The reasons are as follows: First are economic limitations. 
Rural households tend to have low incomes and poor living standards, so there would likely be no potential customers for old-age care services charged at cost in rural areas. There is no charged care service; we could not pay for it even if there were. Eating expenses are a problem for us; who would be willing to spend money on care? Second are geographic restrictions. Rural areas are often remote with poor roads, resulting in higher costs for providing care services than in urban areas, which does not conform to the low-cost principles of the market. Hiring a caregiver? That's for the city; in the village, no one pays for that. No caregiver wants to come this far. Third are barriers of exclusion. On the one hand, exclusion may come from the elderly service market itself, which prefers customers with slight impairment over those with higher degrees of disability. On the other hand, it may result from the self-exclusion of disabled older people in rural areas. In Chinese society, calling someone disabled elderly often suggests the person is unable to do anything for themselves, and carries negative connotations. This notion further fuels negative attitudes toward care. In most cases, the root cause is self-exclusion, but it is often reflected through the market. In a care service, people would dislike me because I am dirty, messy, and poor. My own children dislike me when caring for me, not to speak of outsiders. Finally, there are obstacles rooted in traditional ideas. In rural areas, the traditional idea of "raising children for old age" still persists, with the belief that responsibility for the care of the elderly should be shouldered by their children. Consequently, paying for care services not only makes the rural disabled elderly lose face, but also gives their children a bad reputation. Those who have no sons or daughters are taken care of by others, and arrangements are made to go to a nursing home. Otherwise, people will laugh at you. 
--- Absent NGOs and Volunteers As with the absent market, NGOs and volunteers are not found in rural areas. Care services face the double barrier of difficult access for outside NGOs and volunteers, and difficulty in forming local NGOs and voluntary groups. It is difficult to encourage NGOs and voluntary groups to work in rural areas. NGOs tend to be located in cities, and volunteers are usually college students who mainly provide services in urban areas. The countryside is often neglected due to its remote geographical location and dispersed housing. During the interviews, it transpired that many elderly people with disabilities in rural areas had not even heard the words "volunteer", "non-governmental organization", or "free service". Most participants said that they had not been exposed to any welfare services offered by NGOs, nor had they received any help from volunteer teams. What is a volunteer? I don't look for them. I dare not trouble them. It is also hard to form local NGOs and volunteer groups. China has been promoting models of mutual support for the aged such as "time banking" and "Yizhuang" in recent years, but with little success. There are two main reasons for this: First, the number of people able to provide care services in rural areas has dropped sharply due to urbanization. In today's China, the young and middle-aged labor force tends to work in the city, and many old people are needed in the cities to bring up their grandchildren, known as "skip-generation raising". Consequently, those left behind in the rural areas are mainly old, weak, sick, or disabled, making it difficult to form voluntary groups. If you are in good health, you go to work for others. If not, you stay at home to do farm work and cook. Everyone is too busy. Who is willing to help an old woman like me? The other reason is that rural older people with disabilities are often excluded as people seen as unable to reciprocate. 
The model of rural mutual support for the aged relies on the informal relationships maintained by the primary group, mostly based on personal favors. In essence, the Chinese "favor" has an egoistic motivation: only when it is "good" for, or provides reciprocation to, oneself and one's family can the relationship survive. Hence, there is a contradiction in this type of informal support: the poorer the individual and the higher the degree of disability, the less support they receive from relatives, friends, and neighbors. I have no friends. I have no money to buy anything for others. I will not go to others' homes and they will not come to mine. --- Low-Quality Household and Community Care Family care is the main form of care for the disabled elderly in rural areas, mostly provided by the spouse and supplemented by the children. The model of community-based service is far from established. Such care is characterized by low professionalism, a heavy burden on the caregivers, and difficulty of sustaining it; thus, we call it "low-quality household and community care". Our findings revealed that family members are the major providers of care. The support of rural elderly people with disabilities mainly comes from the social network maintained by personal relationships with spouses, offspring, and relatives, and is characterized by informality and discontinuity. It can be divided into two main categories. The first type is care from the spouse. Rather than the intergenerationally reproduced care model of "raising children for old age", current daily care work is mostly done by the spouse. In a family with two disabled people, the less disabled one takes care of the more seriously disabled. 
Physical limitations mean that elderly caregivers can only provide simple care services and often have difficulty completing tasks such as rubbing, bathing, or massage, which negatively impacts the health of rural elderly people with disabilities. More importantly, elderly caregivers with low levels of education are often unable to obtain caregiving information or to buy medicine independently, ultimately affecting the quality of care. In fact, we are both disabled. I'm physically handicapped; she is slightly intellectually disabled. She usually takes care of me. She can cook and do simple massage but can't wipe my body. I have to wait for my daughter to do it, and she comes back only on Saturdays or Sundays. Ordinarily, my husband takes care of me. But when I was in the hospital for a cerebral infection, it was my daughter who helped me. My husband and I can't read and didn't know what to do in the hospital. The second type of care is given by sons and daughters, who usually provide regular care once or twice a week, or once a month. This kind of care is often of short duration and low frequency, and usually cannot sustain periodic rehabilitation and similar programs. Additionally, the interviews revealed that with the change in China's family planning policy, sons and daughters now have a greater task in raising their own children, and it is becoming more difficult to balance the responsibility of supporting both the elderly and children. The respondents' narratives reveal this: People of my age have gone to the cities to babysit their grandchildren for their sons. Now my son and my daughter-in-law take care of their child by themselves and have a lot of pressure, so I try not to trouble him too often. Unfortunately, community-based care is far from being established in rural China. A rural village is a community, with the village committee office at its core.
The model of community care service requires the community to be a platform to integrate various service resources and to provide services for the elderly, such as assisting with meals, cleaning and bathing, and medical treatment. However, village committees are highly dependent on state resources and have the primary task of responding to the requirements of those external institutions, with a work style based on routine and formalism. They regard community-based care as an administrative task assigned by the state and local government, with their main goal to complete the task "in form" without necessarily meeting the user's needs. As front-line service providers, they are often busy writing reports and submitting materials to prepare for inspection by superiors. As mentioned before, day care centers, dining halls for the elderly, and other services tend to nominally fulfill, rather than to truly serve, the needs of the disabled elderly in rural areas. No one in the village committee usually comes to see us, but they sometimes inform me to fill in a form. Last year, two old people in the village were burned to death. I am afraid. Before this, I heard the village committee say that a "bell for help" would be installed in the homes of the elderly, and then they said that it is not allowed by the leaders. If the bell had been installed, the two old people wouldn't have died. --- Discussion Through a survey of two rural areas in China, we found that elderly people with disabilities face multi-dimensional access barriers. This is not an exceptional case. With the rapid aging in China, most of the disabled elderly face these dilemmas. As we noted in the introduction, nearly 90% of disabled elderly people have care needs that are basically met by informal care services provided by family members. 
Therefore, it is necessary to establish a long-term nursing care system and other aging policies that meet the needs of the rural elderly and address these potential influencing factors [31]. Many other countries, such as India and Lebanon, face similar obstacles, though their specific manifestations may differ [32,33]. To remove the accessibility barriers in care services for rural elderly people with disabilities, we need to establish diversified care service provision. This will rely not only on the leading role of the state and the active intervention of the public sector, but also on market mechanisms, NGOs, volunteers, communities, and households, so as to achieve Pareto optimality in the allocation of specific care services. First, the state should actively undertake the main responsibility of developing the care service system. In addition to formulating relevant policies, the state should increase fiscal spending, expand the scope of supply, and provide more medical rehabilitation and health care services for disabled elderly people in rural areas to address the enormous gap between service supply and real demand. In evaluating the eligibility of service candidates, it is necessary to start from the demand side: targeting and responding to rural elderly people with disabilities according to their individual needs. In aiming for better efficiency in the implementation of rural elderly care service policy, realizing urban-rural equality remains key. In fact, rural areas have their own special environments, and the current pursuit of rural-urban uniformity in policy leads to substantial inequality. To resolve this, the state and government should innovate the content, form, and supply model of welfare services, and design a care service system suited to disabled elderly people and their families in rural areas.
Additionally, strengthening the supervision of policy implementation cannot be ignored, as only with effective supervision can the policy work well. It is essential to give full play to the function of the market in providing high-quality, personalized, and diversified products in the care service supply process. Firstly, the government should encourage disabled elderly people in rural areas to buy care products, for example through tax incentives and government subsidies, to expand the local consumption market. Secondly, care service companies could make full use of the rural left-behind female labor force, establishing an occupational and professional local work team through recruitment and vocational training to reduce service costs. Finally, in the process of accelerating market cultivation, the focus should be on changing the social attitudes of the rural disabled elderly and their family members. Through media publicity and policy mobilization, we should move away from the traditional idea of "raising children for old age", recognize the market as a natural force, and foster a correct understanding of care service products and market transactions. Beyond the market, it would be valuable to guide third-party forces such as NGOs and voluntary groups to participate in the support network for the supply of care services. The government should enact a variety of legal and policy incentives, such as tax preferences, private management of public institutions, private institutions with public assistance, and government funding and financial subsidies, to encourage urban NGOs and volunteers to get involved in rural services. Furthermore, in the wave of rural revitalization in present-day China, the state has created thousands of jobs in rural areas.
We need to take advantage of this opportunity to encourage more young people to stay in rural areas as the human capital needed to vigorously foster and cultivate local NGOs and voluntary groups. It is crucial to transform individual consciousness into public consciousness. Traditional notions centered on the small family should also be challenged, village members should be encouraged to help each other, and a supportive social environment should be created to assist the elderly and their family members, all of which are important prerequisites for the formation of localized voluntary groups. Lastly, it is necessary to consider changes in family structures, functions, and social environments in order to revitalize the family. Community support networks should be established to provide elderly spouse caregivers with the latest policy information and to improve their knowledge and skills. Regular respite care services can also relieve the heavy care pressure on elderly caregivers. In terms of care from offspring, the state should take work-family balance as an important guiding principle. Flexible work systems, family responsibility leave, or compressed work cycles can help employees reduce opportunity costs and strengthen their willingness to provide parental care. In terms of community-based care, it is recommended that an interactive platform be set up in the community, involving local government, social organizations, the community, volunteers, family caregivers, and rural disabled elderly people, for communication, resource integration, and service allocation. The state, NGOs, and the community should work together to build rural community centers offering health services, day care, rehabilitation, family services, intelligent care services, and dining areas for the elderly.
These centers would provide disabled elderly people and their families with health guidance, management of chronic disease, door-to-door rehabilitation, short-term care for the elderly, mental care, psychological counseling, and other comprehensive services, thus preventing accessibility barriers and unexpected deaths. In addition, we should strengthen the supervision and regulation of village committees. This research had three main limitations. As it was the first attempt to explore accessibility barriers in care services for rural elderly people with disabilities, it was difficult to draw on existing reviews. Secondly, as the research was based on a small-scale qualitative method, the applicability of the conclusions to other regions and situations requires further discussion and reflection. Lastly, considering cross-cultural challenges, the results and discussions we have put forward may not translate well to practices in Western countries. Therefore, additional quantitative and comparative research on care services for rural elderly people with disabilities should be conducted in the future. --- Conclusions This study used a welfare pluralism approach to provide a framework for outlining the accessibility barriers in care services for rural elderly people with disabilities in China and the reasons behind them. Based on the analysis of the interview results, the study's major finding is that rural elderly people with disabilities face severe multi-dimensional barriers. These arise from the unsuitable actions of multiple service providers, including the state, the market, NGOs, volunteers, households, and the community. Their actions and attitudes have a negative impact on the health of disabled elderly people in rural areas.
Specifically, because of the state's austere fiscal expenditure, the provision of care services for the rural elderly with disabilities suffers from insufficient resource investment, strict application requirements, uneven resource distribution, and irregular implementation. Under market principles, markets tend to search for more promising customers and to reduce service costs; however, in traditional Chinese culture, elderly people with disabilities in rural areas are regarded as "useless" and "fragile", and only their sons and daughters are expected to take care of them, resulting in an absence of formal services. Moreover, it is difficult for NGOs from outside the rural areas to get involved, and the lack of local NGOs and volunteers is the main dilemma. Households are the main providers of care services, but family endowments are uneven and unsustainable, making it difficult to provide high-quality care. Within community-based care, the operation of "formalism" is in evidence, hindering both the accurate implementation of the new model and measures to respond to the needs of the rural elderly with disabilities. --- Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions of privacy. ---
This research covers a multi-dimensional investigation into accessibility barriers in care services for older people with disabilities in rural China. In-depth interviews with 13 rural disabled older people in China were conducted using qualitative methods. Based on a welfare pluralism approach, the results showed that in comparison with urban areas, care services for disabled older populations in rural areas are more subject to social barriers. This can be seen in the limited state (lack of resources, rigorous eligibility qualifications, uneven distribution, and irregular implementation); the absent market (low levels of consumption, high cost pressures, self-exclusion, and traditional cultural constraints); absent NGOs and volunteers (difficulties in access for NGOs and volunteers outside the area and formation difficulties of local NGOs and volunteers); as well as low-quality care in households and communities (unprofessional care from the spouse, unsustainable care from children, and unavailable community-based care). A multi-subject support network should be established to remove accessibility barriers to care services for older people with disabilities in rural areas through active intervention and interaction. The results of the research provide insights that will aid in the formulation of future social care service plans and health policies for rural older people with disabilities.
Introduction Experiences of early adversity, including growing up in a harsh family environment or experiencing child maltreatment, have been linked to adverse health outcomes later in life [1,2], including cardiovascular disease [3,4]. Although such exposures are generally considered to increase the likelihood of later health problems, there is some evidence to suggest that men and women may be differentially impacted by such experiences [5][6][7]. For example, greater cardiovascular reactivity to and slower recovery from acute psychosocial stressors has been reported among young and middle-aged women compared to men [8]. Moreover, women may be more vulnerable than men when faced with stressors that are interpersonal in nature or when witnessing people around them experience stressors [9]. Thus, growing up in harsher family environments may be more detrimental for women as this would involve not only experiencing interpersonal stressors oneself, but further bearing witness to conflict and negative interactions among other family members, such as between parents or between parents and siblings. In the long run, such differences in acute stress reactivity and recovery may contribute to different rates of disease, including increased risk for CVD [10]. Two recent reports highlight sex differences following exposure to early adversity, specifically retrospectively self-reported abuse during childhood. Chen et al. [11] found that abuse only predicted later all-cause mortality among women, not men, even though rates of exposure to abuse either did not vary by sex or were higher among men. In addition, the associations between childhood abuse and later all-cause mortality among women were not explained by demographic variables, including socioeconomic status, and personality and affective traits, such as depression. Similarly, Suglia et al. 
[12] found that hypertension was more prevalent among young women, but not men, who retrospectively reported having experienced childhood sexual abuse. Despite such clear sex differences, the biological and psychological pathways connecting early adversity to different mortality and morbidity outcomes among men and women are not clearly understood; moreover, it is unclear whether similar sex differences are found in response to less severe childhood adversity. Although some previous research has highlighted longitudinal associations between psychosocial variables during childhood [13][14][15], these studies frequently combine a range of diverse indicators of participants' early life environment, e.g., parent occupation, parent health behaviors, and the occurrence of specific stressful events. This makes it difficult to draw conclusions about the relative contributions of these different aspects of participants' childhood environments to future health-relevant outcomes. The present study expands on this research by focusing in greater depth on the psychosocial family environment, i.e., the overall family climate. Compared to focusing on, e.g., specific, acute stressful life events that people may have experienced during childhood, this likely provides a clearer picture of the quality of everyday interactions in participants' childhood homes. Moreover, the effect of participant sex is typically adjusted for but not examined separately. Thus, the present study further expands on existing research by directly examining possible moderation by sex. This study aims to investigate the influence of exposure to moderate early adversity on physiological indicators, including resting blood pressure and heart rate and salivary cortisol, that may represent pathways through which exposure to early adversity can lead to morbidity and mortality later in life.
Elevated resting BP and HR are known risk factors for CVD [16,17], and dysregulated cortisol production has been implicated in the pathogenesis of CVD, in part through accelerating atherosclerosis [18,19]. To this end, we investigate the influence of a harsher family climate during childhood on the above outcomes, hypothesizing that people exposed to more childhood adversity will have higher BP, HR, and cortisol levels, on average. We further investigate the possible moderation of this association by sex. Relative to men, we expect that women may be more strongly impacted, experiencing higher BP, HR, and cortisol levels following exposure to childhood adversity. Finally, to assess whether possible effects of harsher family environments on adult physiological outcomes may be a function of affective states and/or stress exposure in adulthood, we include measures of participants' concurrent anxious and depressive affect, as well as perceived stress. --- Materials and methods --- Procedure Data were accessed via the Common Cold Project website. The larger project included the experimental examination of participants' susceptibility to viral challenge; the analyses reported here use only baseline data collected prior to experimental procedures. Participants provided written consent, completed demographic and psychosocial questionnaires, and reported on their daily mood over a two-week period. Participants' BP, HR, height, and weight were taken during a laboratory visit, and participants collected saliva samples at home. The study was approved by the institutional review boards at Carnegie Mellon University and the University of Pittsburgh. --- Measures Childhood adversity. Participants completed the 13-item Risky Families Questionnaire [20], assessing how frequently they were exposed to a variety of adverse physical and psychological circumstances in their childhood homes when they were 5-15 years old.
On a 5-point Likert scale, participants noted how often they experienced certain distressing events and the extent to which care and affection were provided. Positively-framed items were reverse scored, with higher total RFQ scores indicative of more adversity. Scores ranged from 13-63 and internal consistency was strong. RFQ scores were comparable to those reported in previous reports [21,22]. Participants also completed the Family Environment Scale [23], a well-validated 25-item scale about family relationships and family system maintenance when they were 5-15 years old. The three subscales making up the family relationships index were deemed relevant to the question at hand and included here. On a 5-point Likert scale, participants reported the extent to which they agreed with each statement. To align with the RFQ, positively-framed items were reverse scored so that higher scores reflect more conflict and less cohesive and expressive family environments. Internal consistency was acceptable to very good for the overall family relationships index and across the conflict, cohesion, and expressiveness dimensions, which is similar to previously reported values [23]. Physiological measures. During day 0 of quarantine, resting systolic and diastolic blood pressure and HR were assessed by study staff three times over the course of one day, i.e., at 8:00 AM, 1:00 PM, and 7:00 PM, using an automated, portable oscillometric blood pressure monitor, and averaged across the three sessions. Height and weight were measured at two laboratory visits using a standard balance scale, BMI was calculated, and values were averaged across the two visits. Participants collected saliva samples using Salivettes over 2 days, 7 samples per day across the day.
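The reverse-scoring step described above has a simple mechanical form: on a 1-5 scale, a response r on a positively framed item becomes 6 - r before summing, so that higher totals always indicate more adversity. A minimal sketch (item keys and responses here are hypothetical, not study data):

```python
# Reverse-score positively framed items on a 1-5 Likert scale:
# a response r on a reverse-keyed item becomes (scale_max + 1) - r.
def score_scale(responses, reverse_keyed, scale_max=5):
    """responses: dict of item key -> raw response (1..scale_max);
    reverse_keyed: set of positively framed item keys to flip."""
    return sum((scale_max + 1 - r) if item in reverse_keyed else r
               for item, r in responses.items())

# Hypothetical 3-item example; "q2" is positively framed.
raw = {"q1": 4, "q2": 2, "q3": 5}
total = score_scale(raw, reverse_keyed={"q2"})  # 4 + (6 - 2) + 5 = 13
```

The same helper applies unchanged to both the RFQ and the FES subscales, since both use 5-point response formats.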
Participants were advised not to eat or brush their teeth one hour before saliva collection and to abstain from smoking 30 minutes prior to collection, and were given a kitchen timer and either a preprogrammed wristwatch or handheld computer to increase compliance. --- Note * p < 0.05 when comparing men and women using independent sample T-tests and chi-squared tests; BMI = body mass index; RFQ = Risky Families Questionnaire; FES = Family Environment Scale; SBP = systolic blood pressure; DBP = diastolic blood pressure; HR = heart rate. The FES was reverse scored to be in line with the RFQ. Higher scores on the FES family relationship index and subscales are indicative of more conflict and less cohesion and expressiveness within the childhood home. One female participant who was taking corticosteroid medication was excluded from analyses for total cortisol. https://doi.org/10.1371/journal.pone.0225544.t001 Cortisol levels were determined by time-resolved fluorescence immunoassays with cortisol-biotin conjugates as tracers [24]. Intra- and inter-assay coefficients of variation were < 12%. Total cortisol was estimated by calculating the area under the curve using the trapezoidal rule [25] for each day and averaging across the two days for each person. Only data from saliva samples taken within 45-90 min following wakeup or within 60 min of the prescribed time were included. Adult distress measures. Participants completed the 10-item Perceived Stress Scale [26] at a pre-quarantine in-person visit. On a 5-point Likert scale, they indicated how frequently they had experienced certain thoughts and feelings over the past month. Items were summed; higher scores reflect greater perceived stress. During daily evening phone interviews over 2 weeks, participants reported how often they experienced each of 16 emotions that day on a 5-point Likert scale. Mean depressive affect and mean anxious affect were calculated.
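The total-output calculation (area under the curve via the trapezoidal rule, computed per day and averaged across the two collection days) can be sketched as follows; the sampling times and cortisol values below are illustrative placeholders, not study data:

```python
def auc_ground(times, values):
    """Area under the curve with respect to ground, trapezoidal rule.
    times: sample times (e.g., hours since waking); values: cortisol."""
    return sum((t1 - t0) * (v0 + v1) / 2.0
               for t0, t1, v0, v1 in zip(times, times[1:],
                                         values, values[1:]))

# Two illustrative collection days, same nominal sampling times (hours).
times = [0.0, 0.5, 1.0, 4.0, 9.0, 13.0]
day1 = auc_ground(times, [12.0, 18.0, 15.0, 9.0, 6.0, 3.0])
day2 = auc_ground(times, [10.0, 16.0, 14.0, 8.0, 5.5, 3.5])
total_cortisol = (day1 + day2) / 2.0  # averaged across the two days
```

Using the actual sampling times (rather than assuming equal spacing) matters here because the seven daily samples are unevenly spaced across the day.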
--- Statistical analyses One participant reported corticosteroid medication use and was excluded from analyses of total cortisol output. Total cortisol output was not normally distributed and was therefore log-transformed. All models adjusted for race/ethnicity, age, years of education, and birth control use. Models regressing BP and HR on early adversity exposure additionally adjusted for BMI. Multiple linear regression models were fit to assess the main effects of the childhood family environment and the interaction effects of childhood family environment × sex on physiological outcomes. All covariates and independent variables were first centered at zero, and interaction terms were created by multiplying each centered childhood family environment variable by sex. Covariates and independent variables were entered in Step 1 and the interaction term was added in Step 2. When considering measures of adult distress/dysphoria, we first examined the independent effect of each, then the effects of all 3 in saturated models. Statistical models were analyzed using SPSS version 24.0. --- Results --- Main effects of adversity in childhood family environments and sex There was a significant, negative effect of FES conflict on SBP, which remained significant when adding current depressive affect, but not when adding anxious affect and stress. Other aspects of the family environment were not associated with physiological outcomes. Moreover, women had lower SBP, DBP, and total cortisol output, but higher HR, than men. When including adult distress measures in the models, effects remained significant except when depressive affect was added to models for total cortisol output. --- Interaction effects between adversity in childhood family environments and sex Sex interacted with several measures of adversity in participants' childhood environments to predict adult physiological outcomes.
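The centering-and-interaction setup described above can be sketched with ordinary least squares on simulated data. Everything here is hypothetical (variable names, effect sizes, and the simulated outcome); the study's actual models were fit in SPSS and included additional covariates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
adversity = rng.normal(40.0, 8.0, n)       # hypothetical RFQ-like scores
sex = rng.integers(0, 2, n).astype(float)  # 0 = men, 1 = women
# Simulate an outcome with a negative adversity effect among women only.
sbp = 120.0 - 0.3 * adversity * sex + rng.normal(0.0, 5.0, n)

# Center the continuous predictor, then form the interaction term,
# mirroring the Step 1 / Step 2 hierarchical entry described above.
adv_c = adversity - adversity.mean()
X = np.column_stack([np.ones(n), adv_c, sex, adv_c * sex])
beta, *_ = np.linalg.lstsq(X, sbp, rcond=None)
# beta[3] estimates the adversity x sex interaction (near -0.3 here).
```

Centering the continuous predictor before forming the product term leaves the interaction coefficient unchanged but makes the lower-order coefficients interpretable at the sample mean.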
Specifically, sex interacted with the RFQ and FES total, as well as the FES conflict and FES cohesion subscales, to predict SBP. Similarly, sex interacted with the FES total, as well as the FES cohesion subscale, to predict DBP. Marginal effects on DBP were additionally found for the interactions between sex and the RFQ and the FES conflict subscale. Finally, participants' HR was only predicted by the sex × FES express interaction. Sex and early adversity did not interact to predict total salivary cortisol output. See Tables 2 and 3 for a detailed overview of these results. Notably, none of these results changed substantively when adding adult distress measures, with the one exception that the sex × FES total interaction dropped to marginal significance in the saturated model for DBP. Next, separate models were fit for men and women to estimate simple slopes. Women who reported greater overall childhood adversity had lower SBP and lower DBP relative to women reporting less childhood adversity. Women who reported more conflictual and less cohesive childhood homes had lower SBP. Less cohesive childhood homes were marginally associated with lower DBP among women. Finally, women who reported less expressive childhood homes had lower HR. Results were largely unchanged when adding adult distress measures, with the exception of FES total and FES express no longer being associated with women's DBP and HR, respectively. Associations among men were all in the opposite direction, but none reached significance. Additional post-hoc sensitivity analyses adjusted for employment status, marital status, and smoking status. This did not substantively alter results, with the exception that the interaction between sex and FES express on HR dropped to marginal significance. --- Discussion This study adds to the literature suggesting that childhood adversity is differentially associated with physiological outcomes among men and women later in adulthood.
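The simple slopes estimated above can also be read directly off the interaction model's coefficients: with sex coded 0 = men and 1 = women, the adversity slope is b1 for men and b1 + b3 for women. A worked example with hypothetical coefficient values (not the study's estimates):

```python
# Model: outcome = b0 + b1*adversity_c + b2*sex + b3*(adversity_c * sex),
# with sex coded 0 = men, 1 = women. Coefficients below are hypothetical.
b1, b3 = 0.10, -0.35

slope_men = b1          # simple slope of adversity at sex = 0
slope_women = b1 + b3   # simple slope at sex = 1: 0.10 - 0.35 = -0.25
```

Fitting separate models per group, as done in the study, yields the same point estimates as this decomposition when the stratified models contain the same covariates.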
We found that, among women only, greater childhood adversity was associated with lower levels of baseline BP. Importantly, these results persisted even after taking into account a wide range of concurrent psychosocial risk factors and relevant demographic covariates. When considering particular aspects of the childhood home environment, more conflictual and less cohesive environments were more clearly associated with BP in women than was expressiveness within the home. Given that early adversity is associated with greater disease risk, including CVD, among women, the directionality of the current findings is counterintuitive and counter to our predictions. However, these associations were generally consistent across both SBP and DBP and across different measures of childhood adversity. Our findings are also in line with a recent study reporting a negative association between the number of endorsed adverse childhood events and resting SBP in the laboratory among a generally healthy, all-female sample [27]. Similarly, Su et al. [28] reported higher baseline DBP among male and female adolescents and young adults who had reported experiencing 2+ ACEs; moderation of these findings by sex was not investigated. Our current findings suggest that among a nonclinical sample of generally healthy women, greater reported exposure to moderate-severity childhood adversity may be associated with the opposite pattern of basal cardiovascular hypoactivation, perhaps suggesting a down-regulated sympathetic nervous system response among women exposed to childhood adversity, although the extent to which the SNS is involved in longer-term BP regulation is still unclear [29]. Alternatively, our findings may suggest a survival advantage among women who, despite experiencing early adversity, were deemed to be generally healthy and thus may represent a particularly high-functioning subgroup of individuals.
Strengths of this study include the thorough assessment of resting BP, multiple indicators of the childhood home environment, and a generally healthy, non-clinical sample of adult men and women. Nonetheless, participants' retrospective self-reports of their family home environment represent a limitation. Similarly, sex was self-reported; future research should tease apart the relative contributions of biological sex and socialized gender roles to the early adversity-BP connection. Lastly, this study was cross-sectional and the sample largely White. Thus, future research should investigate these associations in more diverse samples and examine potential mediating pathways, including individuals' health behaviors and coping strategies, and broader psychosocial circumstances. Similarly, the fact that the sample was recruited via newspaper advertisements represents a potential limitation. Given participant demographic characteristics, in particular average years of education and rates of unemployment, it appears unlikely that this method of recruitment biased the sample towards more educated or wealthier individuals. Nonetheless, it is possible that recruiting via newspaper advertisements influenced the makeup of the sample in other ways. Finally, future research should investigate the possibility of three-way interaction effects between age, sex, and early adversity to inform our understanding of possible survivor effects, if certain BP patterns are only apparent among older individuals, e.g., among older women exposed to greater early life adversity. This study suggests that reported exposure to moderate-severity childhood adversity is associated with lower resting BP among healthy women, but not men, with potential implications for their long-term health.
This is counter to some findings from clinical samples and further highlights the importance of separately considering effects of the early family environment on physiological health outcomes among men and women. Writing - review & editing: Hannah M. C. Schreier, Emily J. Jones, Sibel Nayman, Joshua M. Smyth. ---
It is unclear how adverse childhood family environments differentially impact adult health outcomes among men and women. This brief communication reports on the independent and joint effects of adverse childhood family environments and sex on indicators of health in adulthood. 213 18-55-year-olds reported on their childhood family environment (Risky Families Questionnaire (RFQ); Family Environment Scale (FES total)) and their current perceived stress and depressive and anxious affect. Resting systolic (SBP) and diastolic blood pressure (DBP) and heart rate (HR) were taken during a laboratory visit, and total cortisol output was measured in saliva samples collected at home. Exposure to childhood adversity did not vary by sex. Women had lower SBP, DBP, and total cortisol output, but higher HR, than men (ps < .05). Sex moderated the association between childhood family environment and SBP
Fighting for Social Lives: Public versus Market Pedagogies

Occasionally when I teach, interactions with my students leave me off-kilter and searching for critical pedagogical theory to address, transform, expand or withstand the tension points coming to a head in the classroom. There is a common question arising in these unnerving exchanges: is the purpose of higher education to enable students to get a job, or is it to enhance social good? While social justice and finding a career are not mutually exclusive, my experience is that neoliberal business logics in higher education have the capacity to overshadow critical and social sensibilities and actions. The challenge in this situation is how to appropriately address students' concerns about the social realities, and perhaps problems, within career-focussed courses or programmes. What happens if critical and social perspectives are routinely eschewed in favour of rote curricula and educational practices? What impact will this have on students and the industries they are seeking careers in? In an effort to reconcile industrial, critical and social rhetorics in higher education, this writing is an example of my reflective process that links teaching experience at a Canadian public university to critical educational theory. During a seminar I was teaching about the economic and social landscape of the North American entertainment industry, one of Hollywood's most powerful movie executives, Harvey Weinstein, was charged with several counts of sexual assault and other egregious offences. During this semester, the students came to class every week with many fervent observations, questions and reflections about the Weinstein charges and other industry abuses being exposed. 
Revelations included systemic and persistent abuses of power, workplace harassment, overtly and covertly racist practices, and rampant sexual assault, and this left many students questioning their choice of pursuing an education and a career in the entertainment industries. Our classroom conversations subsequently intensified as public discourse and other allegations became frequent, and particularly focused on abuses based on race and gender. In light of the media revelations over several weeks, one student's comment stands out: I'm Iranian, and my family didn't want me to study film because they don't think it's possible for someone like me to succeed here in film. They think the industry is too racist to accept me. This student was a relatively new citizen, and was clearly feeling the weight of racism in public discourse, and perhaps also in practice in my film production program. In response to this statement, I tried my best to facilitate a discussion about systemic racism, about industry, community and individual strategies for addressing and resisting racism, and about employee rights and protections in the industry and in law, but I kept thinking that the discussions fell short of where they needed to go. This sense that my students needed better support and educational framing made me reconsider my pedagogical approaches. Elaine Unterhalter draws on Melanie Walker's "capability approach" to describe pedagogy as "an ethically informed process in which we are alert to questions of equitability, a humane justice, and what we want students to be and become". My sense of disquiet echoes Unterhalter's words, and can be understood as a pedagogical drive to enable social justice through teaching and learning.

1. Weinstein was convicted on some of the charges in March 2020 and sentenced to twenty-three years in U.S. prison.

--- Simon Fraser University Educational Review Vol. 14 No. 1 Summer 2021 / sfuedreview.org
However, in thinking back, I didn't find a way for this student to see themself reflected in media production education and beyond, so Walker's capability approach was likely never realized. Did I falter in the face of the neoliberal pressure to quiet the critical mind and fall in line with the rote technical aspects of media education? What other work would I need to do to realize more socially just pedagogical commitments, and what other learning did I need to do in order to better address these important issues? The same semester, but in a different seminar on business writing and industry preparedness for media artists, my class talked about industry abuses of power and about swiftly changing workplace policies and standards to address harmful workplace practices and cultures. We considered a selection of objectifying images from Hollywood films, and talked about why images can be objectifying, and about the potential impacts of this objectification. I asked the students: if our industry creates representations that objectify and degrade certain bodies, is it possible that our images are connected to the ongoing workplace violence experienced by many industry workers? What happened next stunned me, partly because the class was engaged in an in-depth discussion. One student stood up and yelled at me: How dare you disrespect our industry in this way. If the industry knows that sex sells, or wants to use images that make them money, how dare you question that. I was taken aback by this student's position; naively, it had not occurred to me that being critical of the entertainment industry could present such a threat to them in the context of our class. In the conversations that followed, the student was not interested in critiquing the entertainment industry because they were in the media program as a means to get a good job. I responded by saying that employees can have opinions, and the more informed and considered their opinions, the better. 
We looked more closely at changing employment policies to show that issues of workplace abuse were being confronted at a high level. Again, though, I was left with a feeling that I hadn't responded fully or satisfactorily, and I certainly did not find a way to address the economic threat that fueled the student's comments. Certainly, in teaching vocational content critically, I am placing students in a bind, because students do need to find employment that can pay their bills, and possibly even bring them a sense of joy and accomplishment. Furthermore, push-back is to be expected when students have received messaging about the purely functionalist vocational value of their educational programmes. The educational challenge, then, is how to situate critical and social content so that students can find ways to apply it within workplace contexts. The work must be to build bridges between industrial contexts and critical and social curricula; these bridges most certainly have to be about confronting, speaking to, and strategizing remedies to social inequities that manifest in media workplace culture. What comes into question while considering how to bridge social and critical curricula and pedagogy to vocational content are the broad economic and social pressures faced by the public post-secondary education landscape, and the resulting discourses taken up by these institutions. While my university is a public institution, the newer technological and career-centred programs are priced as cost-recoverable, and students are paying tuition amounts that come closer and closer to those of pricey private institutions. In fact, one of our university administrators recently delivered a self-proclaimed "state of the union" where they launched a new operational plan, and explained that our former way of institutional planning was "no way to build a business." 
In this address, clear departmental actions were noted, but they were not linked to our core institutional values of access and student well-being. The disparities between institutional visions and institutional actions are well documented in the academic literature, particularly with respect to the ways equity initiatives are often managed in performative ways and have limited impact. Dua and Bhanji note, in particular, that equity initiatives are easily absorbed by neoliberal commercial rhetorics wherein it is important for institutional competitiveness to speak a language of equity and diversity without operationalizing these sentiments. The tensions in these institutional critiques play out in a variety of ways. Stromquist and Monkman warn that "the privatization of public education…contributes to the depoliticization of the university as students in private universities are readily inculcated by 'careerist' as opposed to 'critical' norms…the privatization of higher education puts it squarely in the productive sphere and weakens the principle of education as a public good…" If students and instructors sense that the purpose of their education is simply to obtain a job, then the critical and social functions of higher education might be lost. In my experience, there is a rift or tension between colleagues who are concerned with bridging critically-minded and social justice approaches to our vocational programs, and those who are focused on teaching pure technologies and rote job skillsets. In trying to find a way through these tensions, different critical pedagogy theories offer generative notions for prioritizing the public in public educational institutions. Two critical concepts that I have encountered describe the tensions that I experience in academic life: public and market pedagogies. 
Giroux defines public pedagogy in neoliberal terms "that privileges the entrepreneurial subject" and "attempts to undermine all forms of solidarity capable of challenging market-driven values and social relations…" Giroux's concept of public pedagogy aligns with Stromquist and Monkman's warning that careerist logics have the potential to erase topics and perspectives of social good from educational contexts. The vocational logic they warn against includes an emphasis on productivity, the prioritization of rote workplace skills, and the reduction of socially critical course and program offerings in the social sciences and humanities. Together, these theorists criticize market logics for infiltrating the public sphere. Paulo Freire's sense of public pedagogy differs immensely by theorizing how teachers can embed notions of deep democracy in education. Freire's work entails a deep faith in the capacity of all people, particularly people who are typically cast aside or considered to be of little value. For Freire, education is a non-hierarchical dialogical act "between learners and educators as equally knowing subjects". In other words, formal education offers the chance for teachers and students to learn together in a community that values different perspectives and rich dialogue. From a Freirean perspective, public pedagogy has the capacity to democratize the production and exchange of knowledge, thus having the potential to challenge or transform systems of power. Freire's work has been built upon by critical scholars such as bell hooks, who advocates for responsive and democratic methods of teaching critical thinking, and who also extends her scholarship to media culture because of the potent public pedagogy that popular entertainment exerts on dominant ideas about gender, race and class. 
Media scholars have taken up hooks' work in critical media public pedagogy and developed critical media education courses that seek student agency through the exploration of resistant interpretations and production of popular entertainment. What this work evidences with respect to public education is a need to vigilantly pay attention to mechanisms of democracy, ethics and social justice that might become overshadowed by industrial and marketplace demands and logics. These tensions between notions of public and market invoke the image of fighting for social lives in the face of policies, communications, and pedagogies that undermine critical thinking and inculcate rote commercial mindsets. The following section explores educational theory that supports social pedagogical concerns. Theories of resistance to neoliberal and capitalistic, market-driven forms of education have the potential to support different approaches to critical pedagogy. Pinar's concept of the reconceptualist educator draws from Michael Apple's work theorizing social justice in education. A reconceptualist approaches pedagogy through critical theory and historical perspectives, moving away from technical or corporate function towards "a fundamental reconceptualization of what curriculum is, how it functions, and how it might function in emancipatory ways". Pinar's reconceptualist approach is important as it encourages educators to centre social good and to teach critical theory with concern for history and ethics over marketplace concerns. Apple notes that it is essential for educators to develop a clear purpose and prioritization of the social values underlying their teaching practices. Since market-based neoliberal values are embedded in our educational systems, and "Since these values now work through us, often unconsciously, the issue is not how to stand above the choice. Rather, it is in what values" we "must ultimately choose". 
So, the site of resistance for Apple is in the centring of values for social good as a way to offset or destabilize the rationality of neoliberal educational mechanisms. Freire and hooks look more closely at the character and core values of educators, and suggest that in order to check our own superiority in the classroom, we must engage the students' knowledge, and remove hierarchies and the one-way directionality of learning. For Freire, the educator should be "a person constantly readjusting his knowledge, who calls forth knowledge from his students". Freire and hooks' democratic approach to education is a critical pedagogy that ultimately promotes a teaching practice that values instructor self-reflection and openness to change through the prioritization of students' critique and production of knowledge from their social locations and areas of social concern. This democratic approach to curriculum and pedagogy extends to decolonizing or postcolonial approaches. Pedagogical strategies include reviewing structures of colonial domination in history and social practice, working with the difficult knowledges inherent to colonial histories, appropriately integrating local histories and knowledges, particularly Indigenous histories and knowledges, and ultimately using these strategies toward the goal of enhancing students' sense of relationality and self-determination. Reading this kind of theoretical work offers me energy, insight and focus on centring social justice and critical social theory in my teaching, and on how to respond to highly individualist and instrumental neoliberal pressures within my institution. Despite the ways that critical pedagogy informs my teaching approaches and philosophy, a question remains in my mind: how is it possible to build the stamina to fight for social lives in higher education? 
Rogowska-Stangret urges academics to "win the gag reflex back and to learn from the bodily impulses and instincts in order to form a visceral politics," and to do so while considering "the potentials of collectivity". These visceral politics remind me to centre the feelings associated with the pangs of identifying and addressing injustices in curricular or institutional structures, and to constantly remind myself of the ways that collective action influences change. The various ways that oppressive aspects of dominant culture evaluate, reward or regulate instructors in inequitable ways are also well documented in the academic literature. For speaking up, and for working on initiatives that promote student or instructor equity, I have certainly experienced harassment, in one instance even being loudly accused of having a "gay agenda" after routinely following up with a department head regarding the budget for a queer-themed course with significant student demand. In this instance, my supposed transgression was simply requesting information on an established course, but one that inherently challenged heteronormative dominance in my department. I am not alone in these interactions, so collective action is an important path forward for instructors to build community and support each other in transforming the institution through just measures, and at times, in supporting more just outcomes through the instructor hiring and evaluation processes themselves. While instructors have a relatively high degree of agency and autonomy by contract, there are persistent tensions within the culture and working processes of the university that seek conformity, accountancy, and consistency with the ways things have always been done. Instances of institutional push-back and harassment are also a reminder to retain a visceral and student-centred sensibility of resistance. 
While I've never used the metaphor of war or fighting to reflect on my teaching career, I have had an instinct lately to use my privilege as a full-time instructor to perform a blockade between the students and crushing hegemonic institutional structures, violent and oppressive curricula, or harmful pedagogical practices. Well-documented harmful institutional practices include a dearth of culturally appropriate counselling and student services, and curriculum and pedagogy that focus solely on oppressive histories, hierarchies or cultural representations and assumptions. This oppression might be even greater in vocational or career-based university departments, as the social problems of those industries embed themselves in the curriculum and pedagogy of these programs. Scholars such as Allen, Ashton, Lee, and Saha chart the ways that race and gender, in particular, are sites of exclusion in both the media industries and media vocational education; these authors explore strategies for resistant pedagogies, curricula, and institutional practices for the betterment of student experience and agency, and for their future impact on cultural industry work. In my own institution, there are instructors who are following these lines of resistance and transformation, and some who simply want to teach technological processes or old canons in uncritical ways. These differences form sites of tension, and contribute to my visceral sense of a blockade. Through this feeling, I find resolve for embedding histories of injustice and resistance into my curricula, thinking creatively about how students can complete courses using knowledge from their lived experiences, and finding ways to allow students to complete work outside of institutional timelines. All of this work underscores my commitment to contributing to a healthier and more just educational experience and landscape. 
By doing this, I hope to engage what Freire referred to as the "utopian state of denunciation and annunciation", which I see as a form of emancipatory and empowered personal expression. Denunciation and annunciation are a utopian vision of education that sees the learner coming to a place of dismantling the socio-political systems that affect them, and communicating or envisioning their way through or beyond these systems. In practice, students have commented that my courses, sitting amongst other more instrumental curricula or hegemonic canons, have offered them an oasis that has allowed them to explore their own connections to curricula and to see beyond normative cultural paradigms. This is particularly the case in my queer cinema history course, wherein the students explore a plethora of films and theories that emphasize non-dominant cultural expressions of gender, sexuality, race and nation. So, my impulse as an educator to perform a blockade is ultimately a phenomenological mechanism that gives space for students' epistemological reckoning, or just some time so that they can reach their goals. In the classroom stories I shared that left me off-kilter, my blockade failed; those moments needed to be met with more sustained action and focus, particularly in how the students' concerns connect to their future workplaces in their desired careers in the entertainment industries. The critical pedagogies that I explored in this writing will assist me in staying focused on centring students' experiences and knowledges in their learning, on ongoing reflection and strategizing around critical issues of injustice, and on engaging critical theory as a necessary pedagogical intervention to centre social care and social justice in higher education. If I can make space for critical pedagogy in vocational or applied learning environments, I hope that it will assist in transforming both the institution and the industries that our programs link to. 
Vocationalism and other market logics dominate social logics in contemporary higher education discourses. This personal critical reflection explores how theories of critical pedagogy can inform and inspire educators to centre social justice.